WO2024243364A1 - Subarachnoid hemorrhage detection and risk stratification with machine learning-based analysis of medical images - Google Patents

Subarachnoid hemorrhage detection and risk stratification with machine learning-based analysis of medical images

Info

Publication number
WO2024243364A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
sah
patient
machine learning
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/030654
Other languages
French (fr)
Inventor
William D. Freeman
Bradley J. Erickson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mayo Foundation for Medical Education and Research
Mayo Clinic in Florida
Original Assignee
Mayo Foundation for Medical Education and Research
Mayo Clinic in Florida
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mayo Foundation for Medical Education and Research and Mayo Clinic in Florida
Publication of WO2024243364A1
Anticipated expiration legal-status Critical
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B5/02042 Determining blood loss or bleeding, e.g. during a surgical procedure
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4058 Detecting, measuring or recording for evaluating the nervous system for evaluating the central nervous system
    • A61B5/4064 Evaluating the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment

Definitions

  • Aneurysmal subarachnoid hemorrhage is bleeding in the space between the brain and the surrounding membrane (i.e., the subarachnoid space and subarachnoid lymphatic-like membrane (SLYM)).
  • Aneurysmal SAH is a medical/neurosurgical emergency that historically carried a 30-40% one-month mortality.
  • SAH is currently diagnosed using noncontrast CT (NCCT), followed by neurosurgical interventions such as aneurysm clipping or coiling, and external ventricular drains to reduce intracranial pressure.
  • NCCT noncontrast CT
  • the method includes accessing medical imaging data with a computer system, where the medical imaging data have been acquired from a patient.
  • a machine learning model is also accessed with the computer system, where the machine learning model has been trained on training data to detect and assess subarachnoid hemorrhage based on medical images.
  • the medical imaging data are input to the machine learning model with the computer system, generating classified feature data as an output.
  • the classified feature data indicate at least one of SAH detection, SAH risk stratification, or SAH prognosis for the patient.
  • a report is generated with the computer system using the classified feature data, where the report indicates one or more of the SAH detection, SAH risk stratification, or SAH prognosis based on the medical imaging data.
  • FIG. 1 is a flowchart setting forth the steps of an example method for quantifying subarachnoid hemorrhage (SAH) blood volume from medical images using a neural network model.
  • SAH subarachnoid hemorrhage
  • FIG. 2 is a flowchart setting forth the steps of an example method for training a neural network model to quantify SAH blood volume from medical images.
  • FIGS. 3A and 3B show examples of manually annotated and automatically segmented medical images indicating probable SAH regions.
  • FIG. 4 is a flowchart setting forth the steps of an example method for assessing SAH in a patient using a machine learning model.
  • FIG. 5 is a flowchart setting forth the steps of an example method for training a machine learning model to assess SAH in a patient based on quantified SAH blood volume (qvSAH) data and patient health data.
  • qvSAH quantified SAH blood volume
  • FIG. 6 shows (a) multiple NCCT imaging slices of a patient with aSAH. There are visible hyperdensities in (b) multiple cisternal compartments, marked with lines to determine the length and width of the respective hemorrhage in mm. Measurement of width, thickness, and vertical extension (number of CT slices with visible hemorrhage) was done in consideration of cisternal anatomy. These metric variables were then input into (c) a simplified quantitative volumetric formula (simplified volume equation) to measure the hemorrhagic volume in each anatomical structure. Each volume was then summed into a cumulative total cisternal subarachnoid hemorrhage volume (CHV). If intraparenchymal hematoma or intraventricular hemorrhage was present, it was added to the CHV and referred to as external cisternal hemorrhage volume (eCHV).
  • CHV cumulative total cisternal subarachnoid hemorrhage volume
  • FIG. 8 shows the end result of an example manual method in 2-dimensional (a, b, c) and 3-dimensional (d, e, f) subarachnoid hemorrhage volume (SAHV-3D Brain Map) formats. This SAHV-3D Brain Map is from the first case on day 0 (incident day of SAH). Segmentation of eight spaces (five cisternal spaces, intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), and gyral/sulcal spaces) in red equals blood. Planes: axial (a); sagittal (b); coronal (c). Views from axial (d); sagittal (e); coronal (f).
  • FIG. 9 shows the end result of the SAHVAI measurement of SAHV in 2-dimensional (a, b, c) and 3-dimensional (d, e, f) SAHV-3D Brain Map formats.
  • This SAHVAI-3D Brain Map is from the first case on day 0 (incident day of SAH). Segmentation of five cisternal spaces in red equals blood. Planes: axial (a); sagittal (b); coronal (c); View from axial (d); sagittal (e); coronal (f).
  • FIG. 11 shows 2-dimensional SAHV Brain Map of three cases (Case 1, Case 5, and Case 8) of manual (a, b, c) versus SAHVAI (d, e, f) methods from an example study. All planes are axial. The MM segmented eight spaces are colored red as SAH blood, whereas the SAHVAI method labeled five cisternal spaces in red, which equals the SAH blood. Each NCCT scan is labeled with an overall opacity of 50%.
  • AI artificial intelligence
  • CTA Computed tomography angiography
  • FIG. 13 is a block diagram of an example system for SAH detection, risk stratification, and prognosis.
  • FIG. 14 is a block diagram of example components that can implement the system of FIG. 13.
  • FIG. 15 is a schematic showing the current state of SAH patient care with delays in SAH recognition, lack of quantified precision measurement of SAHV blood, and early activation of neurosurgical interventions.
  • the bottom part of the image shows the integrated SAHVAI system, which integrates a SAH detection system for presence or absence of SAH blood, an automated segmentation of the qv-SAH (SAHV) in mL, integration with Electronic Medical Record (EMR) variables, and a reporting and communication platform among stroke teams to rapidly accelerate patient care interventions for this extremely time-sensitive stroke disease.
  • EMR Electronic Medical Record
  • FIG. 16 is a diagram showing how multimodal AI can use SAHVAI with other multi-omics data available in the EMR, such as clinical notes data, physiologic data (blood pressure and ECG) with phenotypic data, and pathological data, to perform predictive analytics and generate models of clinical outcomes and delayed cerebral ischemia and, when combined with existing pharmacogenomics, physiological, and EMR data, predictions about drug responsiveness.
  • the disclosed systems and methods provide for the automatic detection and/or recognition of SAH.
  • the disclosed systems and methods may segment and quantify hemorrhage for risk stratification.
  • the disclosed systems and methods may provide clinical lab values and/or automated segmentation (e.g., volume segmentation) for risk stratification, determining severity of illness, and so on.
  • the disclosed systems and methods may predict future clinical outcomes (e.g., prognosis) at the point-of-care and may suggest clinical-decision support (CDS) and/or interventions based on the determined severity of illness.
  • CDS clinical-decision support
  • the machine learning models utilized by the disclosed systems and methods may provide a threshold for unfavorable patient outcomes or future predicted patient central nervous system (CNS) events and outcomes.
  • the threshold for such possible future unfavorable patient outcomes overall and/or CNS-specific outcomes may be when CHV (and/or CHV and eCHV) is more than 10 mL for total hemorrhage volume.
  • a non-machine learning-based model may quantify SAH blood (qvSAH) using linear measurements for low resource areas (e.g., non-stroke centers).
  • This model may be validated by comparison to a more time-intensive segmentation model, such as a segmentation model based on RIL-Contour (or similar segmentation software) processing of CT images.
  • This second model approach may be considered a “ground truth” for actual qv-SAH blood volume in milliliters (mL) compared to the simplified linear model or estimates of volume.
  • a machine learning model may automate determining qvSAH volumes.
  • the machine learning model may also be linked to clinical outcomes by 30 days.
  • the machine learning model described in the present disclosure may segment portions of the brain from medical imaging data (e.g., CT images), quantify SAH blood volume, and correlate the estimated qvSAH data that are associated with eventual, or otherwise probable, patient outcomes.
  • the estimated qvSAH data may be correlated to a modified Rankin scale, radiographic and/or symptomatic vasospasm, delayed cerebral ischemia (DCI) outcomes, or other such scores or outcomes.
  • This machine learning model may be referred to as a subarachnoid hemorrhage volumetric artificial intelligence (SAH-VAI) model.
  • SAH-VAI subarachnoid hemorrhage volumetric artificial intelligence
  • Referring to FIG. 1, a flowchart is illustrated as setting forth the steps of an example method for generating quantified volumetric SAH data (e.g., qvSAH data, or alternatively SAHV or SAH volume) using a suitably trained neural network or other machine learning model.
  • the neural network or other machine learning model takes medical imaging data (e.g., CT imaging data, MRI imaging data) as input data and generates qvSAH data as output data.
  • the qvSAH data may include segmented regions of the medical imaging data that are associated with a detected SAH. Additionally or alternatively, the qvSAH data may include quantified volume measurements for each detected SAH region.
  • the method includes accessing medical imaging data with a computer system, as indicated at step 102.
  • Accessing the medical imaging data may include retrieving such data from a memory or other suitable data storage device or medium, such as a picture archiving and communication system (PACS), or the like.
  • accessing the medical imaging data may include acquiring such data with a medical imaging system and transferring or otherwise communicating the data to the computer system, which may be a part of the medical imaging system.
  • the medical imaging data may be CT imaging data acquired with a CT system.
  • the CT imaging data may include CT images acquired from a subject.
  • the medical imaging data may be MRI imaging data acquired with an MRI system.
  • the MRI imaging data may include MRI images acquired from a subject.
  • the medical imaging data may be acquired with other imaging systems or cloud-based imaging platforms, such as other platforms with imaging information similar to CT imaging, MRI imaging, or other biomedical imaging.
  • the medical imaging data may include medical images, which may be in a DICOM format.
  • a trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system, as indicated at step 104.
  • the neural network is trained, or has been trained, on training data in order to generate qvSAH data from medical imaging data.
  • Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.
  • retrieving the neural network can also include retrieving, constructing, or otherwise accessing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be retrieved, selected, constructed, or otherwise accessed.
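  • As a minimal illustrative sketch only (not the claimed implementation), the following Python snippet shows one way a trained network could be accessed by loading a stored architecture description and optimized parameters; the file formats, configuration keys, and model class are hypothetical, and PyTorch is assumed.

```python
# Minimal sketch: load a previously trained segmentation network from a
# stored architecture config and a stored set of optimized parameters.
# File names, config keys, and the model class are hypothetical.
import json
import torch

def load_trained_network(config_path, weights_path, model_class):
    # Architecture description (number/type of layers, hyperparameters)
    # stored separately from the optimized weights and biases.
    with open(config_path) as f:
        config = json.load(f)
    model = model_class(**config)              # reconstruct the architecture
    state = torch.load(weights_path, map_location="cpu")
    model.load_state_dict(state)               # restore trained parameters
    model.eval()                               # inference mode
    return model
```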
  • An artificial neural network generally includes an input layer, one or more hidden layers (or nodes), and an output layer.
  • the input layer includes as many nodes as inputs provided to the artificial neural network.
  • the number (and the type) of inputs provided to the artificial neural network may vary based on the particular task for the artificial neural network.
  • the input layer connects to one or more hidden layers.
  • the number of hidden layers varies and may depend on the particular task for the artificial neural network. Additionally, each hidden layer may have a different number of nodes and may be connected to the next layer differently. For example, each node of the input layer may be connected to each node of the first hidden layer. The connection between each node of the input layer and each node of the first hidden layer may be assigned a weight parameter. Additionally, each node of the neural network may also be assigned a bias value. In some configurations, each node of the first hidden layer may not be connected to each node of the second hidden layer. That is, there may be some nodes of the first hidden layer that are not connected to all of the nodes of the second hidden layer.
  • Each node of the hidden layer is generally associated with an activation function.
  • the activation function defines how the hidden layer is to process the input received from the input layer or from a previous input or hidden layer. These activation functions may vary and be based on the type of task associated with the artificial neural network and also on the specific type of hidden layer implemented.
  • Each hidden layer may perform a different function.
  • some hidden layers can be convolutional hidden layers which can, in some instances, reduce the dimensionality of the inputs.
  • Other hidden layers can perform statistical functions such as max pooling, which may reduce a group of inputs to the maximum value; an averaging layer; batch normalization; and other such functions.
  • max pooling which may reduce a group of inputs to the maximum value
  • fully connected layers, which may be referred to as dense layers
  • Some neural networks including more than, for example, three hidden layers may be considered deep neural networks.
  • the last hidden layer in the artificial neural network is connected to the output layer. Similar to the input layer, the output layer typically has the same number of nodes as the possible outputs.
  • the output layer may include, for example, a number of different nodes, where each different node corresponds to a different region of the medical imaging data that has been identified as being consistent with a detected or probable SAH.
  • the output layer may include outputting a single SAH map that indicates multiple spatial locations having been identified as detected or probable SAH.
  • the output layer may also output a quantified volume measurement for each detected SAH region.
  • qvSAH data may include one or more detected regions of SAH in the medical imaging data in addition to quantified volume measurements of each detected SAH region.
  • the qvSAH data may include an SAH map that indicates regions in the medical imaging data that are consistent with SAH.
  • the SAH map may include a quantified volume measurement of each detected SAH.
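  • As a minimal sketch of how such a quantified volume measurement could be computed from a segmentation mask (assuming the voxel spacing is known, e.g., from the DICOM or NIfTI header), consider the following Python snippet; the function and variable names are illustrative only.

```python
# Minimal sketch: convert a binary SAH segmentation mask into a quantified
# blood volume in mL, given the voxel spacing of the scan.
import numpy as np

def sah_volume_ml(mask, voxel_spacing_mm):
    """mask: 3D array of 0/1 labels; voxel_spacing_mm: (dz, dy, dx) in mm."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    volume_mm3 = mask.astype(bool).sum() * voxel_volume_mm3
    return volume_mm3 / 1000.0   # 1 mL = 1000 mm^3

# Example: a mask with 12,000 positive voxels at 0.5 x 0.5 x 5 mm spacing
# corresponds to 12,000 * 1.25 mm^3 = 15,000 mm^3 = 15 mL.
```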
  • the quantified SAH volume can be used as additional information by a clinician to risk stratify the subject and assess potential CDS and/or interventions.
  • the disclosed systems and methods are capable of detecting both subarachnoid hemorrhage bleeding and rebleeding. In this way, subtle rebleeding can also be detected in the qvSAH data before patients clinically deteriorate.
  • the qvSAH data generated by inputting the medical imaging data to the trained neural network can then be used to generate a report that is displayed to a user, stored for later use or further processing, or both, as indicated at step 108.
  • the report may include one or more images or maps.
  • the report may include overlaying an SAH map on the medical imaging data to identify the regions of probable SAH in the medical imaging data.
  • the report may include the quantified volume measurement of each detected SAH.
  • Such data or maps could be used to create 3D reconstructions of qvSAH topography for associated overlaying with other biomedical imaging data, such as brain and functional neuroanatomy mapping.
  • the data or maps could also be used for comparison and correlation with additional neuroimaging data, such as non-contrast CT, CT angiogram, CT perfusion, MRI, diffusion MRI, diffusion tensor imaging (DTI) and/or tractography, perfusion MRI, MR angiogram, and/or MR vessel wall-imaging data.
  • the qvSAH data can be used to generate one or more score values for the subject.
  • an enhanced SAH (eSAH) score can be calculated using the qvSAH data and other patient health data, as described below in more detail.
  • the eSAH score can be provided as part of the report generated in step 108.
  • Referring to FIG. 2, a flowchart is illustrated as setting forth the steps of an example method for training one or more neural networks (or other suitable machine learning algorithms) on training data, such that the one or more neural networks are trained to receive medical imaging data as input data in order to generate qvSAH data as output data.
  • the neural network(s) can implement any number of different neural network architectures.
  • the neural network(s) could implement a convolutional neural network, a residual neural network, or the like.
  • Use of recurrent neural networks and multimodal methods of deep learning and machine learning methods with different architectures that use artificial neural networks may also be utilized.
  • the neural network(s) could be replaced with other suitable machine learning or artificial intelligence algorithms, such as those based on supervised learning, unsupervised learning, deep learning, ensemble learning, dimensionality reduction, and so on.
  • the method includes accessing training data with a computer system, as indicated at step 202.
  • Accessing the training data may include retrieving such data from a memory or other suitable data storage device or medium.
  • accessing the training data may include acquiring such data with a medical imaging system and transferring or otherwise communicating the data to the computer system.
  • the training data can include medical imaging data (e.g., CT images, MRI images) that have been annotated to identify regions associated with SAH. Additionally, the training data may include annotations that indicate a quantified volume measurement of each labeled SAH region.
  • FIGS. 3A and 3B show examples of CT images that have been annotated to identify probable SAH regions (left) and output SAH maps (right) that indicate probable SAH regions identified by the systems and methods described in the present disclosure.
  • the method can include assembling training data from medical imaging data using a computer system. This step may include assembling the medical imaging data into an appropriate data structure on which the neural network or other machine learning algorithm can be trained. Assembling the training data may include assembling medical images, segmented medical images, and other relevant data. For instance, assembling the training data may include generating labeled data and including the labeled data in the training data. Labeled data may include medical images, segmented medical images, or other relevant data that have been labeled as belonging to, or otherwise being associated with, one or more different classifications or categories. For instance, labeled data may include medical images and/or segmented medical images that have been labeled as containing one or more SAH regions, labeled with a quantified volume measurement of labeled SAH regions, and so on.
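  • A minimal sketch, under assumed record fields and a hypothetical loader function, of how labeled training examples pairing CT volumes with annotated SAH masks and quantified volume labels might be assembled:

```python
# Minimal sketch: assemble labeled training data, where each example pairs
# a CT volume with its annotated SAH mask and a quantified volume label.
# The record fields and load_volume function are hypothetical placeholders.
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingExample:
    image: np.ndarray        # CT volume (e.g., HU values)
    sah_mask: np.ndarray     # manually annotated SAH regions (0/1)
    sah_volume_ml: float     # quantified volume label for the annotation

def assemble_training_data(records, load_volume):
    examples = []
    for rec in records:
        examples.append(TrainingExample(
            image=load_volume(rec["image_path"]),
            sah_mask=load_volume(rec["mask_path"]),
            sah_volume_ml=rec["volume_ml"]))
    return examples
```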
  • One or more neural networks are trained on the training data, as indicated at step 204.
  • the neural network can be trained by optimizing network parameters (e.g., weights, biases, or both) based on minimizing a loss function.
  • the loss function may be a mean squared error loss function.
  • Training a neural network may include initializing the neural network, such as by computing, estimating, or otherwise selecting initial network parameters (e.g., weights, biases, or both).
  • initial network parameters, e.g., weights, biases, or both
  • an artificial neural network receives the inputs for a training example and generates an output using the bias for each node, and the connections between each node and the corresponding weights.
  • training data can be input to the initialized neural network, generating output as qvSAH data.
  • the artificial neural network compares the generated output with the actual output of the training example in order to evaluate the quality of the qvSAH data.
  • the qvSAH data can be passed to a loss function to compute an error.
  • the current neural network can then be updated based on the calculated error (e.g., using backpropagation methods based on the calculated error). For instance, the current neural network can be updated by updating the network parameters (e.g., weights, biases, or both) in order to minimize the loss according to the loss function.
  • the training continues until a training condition is met.
  • the training condition may correspond to, for example, a predetermined number of training examples being used, a minimum accuracy threshold being reached during training and validation, a predetermined number of validation iterations being completed, and the like.
  • When the training condition has been met (e.g., by determining whether an error threshold or other stopping criterion has been satisfied), the current neural network and its associated network parameters represent the trained neural network.
  • the training process may use, for example, root mean squared error or information loss (e.g., entropy) methods, cross-entropy loss, gradient descent, optimal mass transport methods, Newton's method, conjugate gradient, quasi-Newton, or Levenberg-Marquardt methods, among others.
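  • A minimal sketch (assuming PyTorch, with illustrative hyperparameters) of the training step described above, in which a loss such as mean squared error between generated and annotated outputs is backpropagated to update the network parameters:

```python
# Minimal sketch of a training loop: compute a loss between the generated
# qvSAH output and the annotated target, backpropagate, and update weights.
# The model and data loader are assumed to exist; hyperparameters are illustrative.
import torch

def train(model, loader, epochs=10, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            outputs = model(images)            # generated qvSAH output
            loss = loss_fn(outputs, targets)   # compare with annotations
            loss.backward()                    # backpropagate the error
            optimizer.step()                   # update weights and biases
    return model
```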
  • the artificial neural network can be constructed or otherwise trained based on training data using one or more different learning techniques, such as supervised learning, unsupervised learning, reinforcement learning, ensemble learning, active learning, transfer learning, or other suitable learning techniques for neural networks.
  • supervised learning involves presenting a computer system with example inputs and their actual outputs (e.g., categorizations).
  • the artificial neural network is configured to learn a general rule or model that maps the inputs to the outputs based on the provided example input-output pairs.
  • the one or more trained neural networks are then stored for later use, as indicated at step 206.
  • Storing the neural network(s) may include storing network parameters (e.g., weights, biases, or both), which have been computed or otherwise estimated by training the neural network(s) on the training data.
  • Storing the trained neural network(s) may also include storing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be stored.
  • the machine learning model(s) take qvSAH data and other patient health data as input data and generate classified feature data as output data.
  • the classified feature data can be indicative of detecting SAH, which may include subarachnoid hemorrhage bleeding and/or rebleeding. Additionally or alternatively, the classified feature data may include risk scores that indicate a risk stratification of detected SAH.
  • the classified feature data may indicate a severity of illness for the subject.
  • the classified feature data may also indicate a prognosis for a subject based on a detected SAH.
  • the classified feature data may indicate a probability of one or more future outcomes for the subject based on the detected SAH, the estimated SAH risk scores, and/or the estimated severity of the SAH.
  • one or more CDS and/or interventions based on the identified and risk stratified SAH may be determined.
  • the method includes accessing qvSAH data and other patient health data with a computer system, as indicated at step 402. Accessing the qvSAH and other multi-modal patient health data may include retrieving such data from a memory or other suitable data storage device or medium. Additionally or alternatively, accessing the qvSAH data may include generating such data with the computer system (e.g., using the methods described above) and transferring or otherwise communicating the data to the computer system. In some examples, the qvSAH data may be estimated by the machine learning model, such that separate qvSAH data need not be accessed by the computer system. In these instances, the patient health data may include medical imaging data, as described below, and the machine learning model may be trained to estimate qvSAH data from the medical imaging data contained in the patient health data.
  • the patient health data may include unstructured data and/or structured data such as patient demographics, diagnoses, procedures, lab results, histopathology data, medications, vital signs, genetic sequencing, medical imaging, and other clinical observations.
  • the lab results may include blood test results, such as blood test results measured from complete blood count (CBC) or the like.
  • the lab results in the patient health data may include the neutrophil to lymphocyte ratio (NLR) measured from CBC, other surrogate peripheral blood markers of SAH and/or systemic inflammation, or other markers that indicate SAH and/or systemic inflammation.
  • NLR neutrophil to lymphocyte ratio
  • clinical laboratory data and/or histopathology data can include genetic testing and laboratory information, such as performance scores, lab tests, pathology results, prognostic indicators, date of genetic testing, testing method used, and so on.
  • the lab results may also include measures of cerebrospinal fluid (CSF) output into an external ventricular drain (EVD) system, which can be visually inspected for the density or darkness of red color of bloody effluent and can also be assessed to cross-correlate with the SAHV/qvSAH blood volume being drained.
  • CSF SAHV blood output can be visually estimated, or may be measured using spectrophotometry or chromatographic methods similar to measurement of xanthochromia.
  • the patient health data may also include other clinical severity of illness scales and observations commonly used and documented in the medical record, such as measurements of the Glasgow Coma Scale, FOUR score, World Federation of Neurologic Surgeons scale, and the like.
  • Features derived from structured, curated, and/or EHR data may include clinical features such as diagnoses; symptoms; therapies; outcomes; patient demographics, such as patient name, date of birth, gender, and/or ethnicity; diagnosis dates for cancer, illness, disease, or other physical or mental conditions; personal medical history; family medical history; clinical diagnoses, such as date of initial diagnosis, date of metastatic diagnosis, cancer staging, tumor characterization, and tissue of origin; and the like.
  • patient health data may also include features such as treatments and outcomes, such as line of therapy, therapy groups, clinical trials, medications prescribed or taken, surgeries, radiotherapy, imaging, adverse effects, and associated outcomes.
  • Patient health data can include a set of clinical features associated with information derived from clinical records of a patient, which can include records from family members of the patient. These clinical features and data may be abstracted from unstructured clinical documents, EHR, or other sources of patient history. Such data may include patient symptoms, diagnosis, treatments, medications, therapies, responses to treatments, laboratory testing results, medical history, geographic locations of each, demographics, or other features of the patient which may be found in the patient’s EHR.
  • patient health data can include medical imaging data, which may include images of the patient obtained with one or more different medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), x-ray imaging, positron emission tomography (PET), ultrasound, and so on.
  • the medical imaging data may also include parameters or features computed or derived from such images.
  • Medical imaging data may also include digital pathology images, such as H&E slides, IHC slides, and the like.
  • the medical imaging data may also include data and/or information from pathology and radiology reports, which may be ordered by a physician during the course of diagnosis and treatment of various illnesses and diseases.
  • the patient health data can include one or more types of omics data and/or multimodal omics data, such as genomics data, pharmacogenomics data, proteomics data, transcriptomics data, epigenomics data, metabolomics data, microbiomics data, and other multiomics data types.
  • the patient health data can additionally or alternatively include patient geographic data, demographic data, and the like.
  • the patient health data can include information pertaining to diagnoses, responses to treatment regimens, genetic profiles, clinical and phenotypic characteristics, and/or other medical, geographic, demographic, clinical, molecular, or genetic features of the patient.
  • epigenomics data may include data associated with information derived from DNA modifications that are not changes to the DNA sequence and regulate the gene expression. These modifications can be a result of environmental factors based on what the patient may breathe, eat, or drink. These features may include DNA methylation, histone modification, or other factors which deactivate a gene or cause alterations to gene function without altering the sequence of nucleotides in the gene.
  • Microbiomics data may include, for example, data derived from the viruses and bacteria of a patient. These features may include viral infections which may affect treatment and diagnosis of certain illnesses as well as the bacteria present in the patient's gastrointestinal tract which may affect the efficacy of medicines ingested by the patient.
  • Metabolomics data may include molecules obtained from the blood, CSF, and body compartments in patients.
  • the metabolomics data may include such data obtained from patients that are associated with SAH physiology and correlated with the qvSAH data and other SAHVAI datasets.
  • Proteomics data may include data associated with information derived from the proteins produced in the patient. These features may include protein composition, structure, and activity; when and where proteins are expressed; rates of protein production, degradation, and steady-state abundance; how proteins are modified, for example, post-translational modifications such as phosphorylation; the movement of proteins between subcellular compartments; the involvement of proteins in metabolic pathways; how proteins interact with one another; or modifications to the protein after translation from the RNA such as phosphorylation, ubiquitination, methylation, acetylation, glycosylation, oxidation, or nitrosylation.
  • Genomics data may include genomic information that can be, or have been, correlated with the symptoms and medication effect, tolerance, and/or side effect information that may be received from a patient as responses to a questionnaire and stored as questionnaire response and/or phenotypic data.
  • genomics data can be extracted from blood or saliva samples collected from individuals who have also completed one or more questionnaires such that corresponding questionnaire response data is available for the individuals.
  • a deep phenotypic characterization of these individuals can be assembled.
  • prospectively determined patterns of treatment response after protocoled titrations in various different drugs from distinct classes of treatments have been assembled. For instance, an analysis of Verapamil (an L-type calcium channel blocker) using whole exome sequencing (WES) can be completed following genotyping in a confirmatory cohort.
  • Verapamil an L-type calcium channel blocker
  • WES whole exome sequencing
  • the patient health data can include a collection of data and/or features including all of the data types disclosed above.
  • the patient health data may include a selection of fewer data and/or features.
  • a trained machine learning model is then accessed with the computer system, as indicated at step 404.
  • the machine learning model is trained, or has been trained, on training data in order to generate classified feature data indicative of a SAH diagnosis, SAH risk stratification, SAH severity, and/or SAH prognosis.
  • Accessing the trained machine learning model may include accessing model parameters (e.g., decision criteria for each feature at each split in a tree-based model) that have been optimized or otherwise estimated by training the machine learning model on training data.
  • retrieving the machine learning model can also include retrieving, constructing, or otherwise accessing the particular model architecture to be implemented. For instance, data pertaining to a tree-based model architecture (e.g., root node, features to evaluate for the root node, number of leaf nodes, features to evaluate at each leaf node, number of branches) may be retrieved, selected, constructed, or otherwise accessed.
  • the output may include risk scores, severity scores, or the like.
  • the output may be a risk score and/or severity score (or other severity classification) for each detected SAH region, or for the subject as a whole.
  • the output may include prognostic data.
  • the output may be a probability of one or more future outcomes for the subject based on the detected SAH, the estimated risk scores, and/or the estimated severity scores (or other severity classifications) of the SAH.
  • the classified feature data may include a risk score.
  • the risk score can provide physicians or other clinicians with a recommendation to consider additional monitoring for subjects whose qvSAH data and patient health data indicate the likelihood of the subject having SAH.
  • the risk score may be an eSAH score.
  • the classified feature data may indicate the severity for a particular classification of SAH (i.e., the probability that the qvSAH data and/or patient health data include patterns, features, or characteristics indicative of detecting, differentiating, and/or determining the severity of SAH).
  • the classified feature data may indicate a prognosis for a subject based on a detected SAH.
  • the classified feature data may indicate a probability of one or more future outcomes for the subject based on the detected SAH, the estimated SAH risk scores, and/or the estimated severity of the SAH.
  • one or more CDS and/or interventions based on the identified and risk stratified SAH may be determined.
  • FIG. 15 illustrates an example workflow for accelerated detection and segmentation using the systems and methods described in the present disclosure relative to earlier treatments.
  • the classified feature data generated by inputting the qvSAH data and patient health data to the trained machine learning model(s) can then be used to generate a report that is displayed to a user, stored for later use or further processing, or both, as indicated at step 408.
  • FIG. 16 illustrates an example workflow for integrating multimodal data, such as from the patient health data described above, in addition to qvSAH data to provide outputs of predictive modeling, outcome measures, and the like.
  • the report may include one or more images or maps.
  • the report may include overlaying an SAH map on the medical imaging data to identify the regions of probable SAH in the medical imaging data.
  • the report may include the quantified volume of each detected SAH.
  • the quantified SAH volumes may be correlated with clinical outcome measures.
  • the quantified SAH volumes may be correlated with a modified Rankin scale as indicated in the classified feature data, which may be provided as part of the generated report.
  • When the classified feature data include risk scores, severity scores, and/or other severity classifications, these scores or classifications may also be stored in the generated report.
  • risk scores or severity scores may be displayed together with each detected SAH such that the user can identify SAH volumes that are at greater risk, or are of greater severity, to the subject.
  • the report may include identifying or highlighting SAH volumes that are above a risk threshold.
  • the risk threshold may be 10 mL, such that SAH volumes that are greater than 10 mL are highlighted in the report as being riskier for the subject.
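  • A minimal sketch of such threshold-based flagging, with an illustrative 10 mL threshold and hypothetical field names:

```python
# Minimal sketch: flag detected SAH volumes above a configurable risk
# threshold (10 mL here) for highlighting in the generated report.
def flag_high_risk(sah_regions, threshold_ml=10.0):
    """sah_regions: list of dicts like {'label': 'suprasellar', 'volume_ml': 12.4}."""
    report_items = []
    for region in sah_regions:
        report_items.append({
            **region,
            "high_risk": region["volume_ml"] > threshold_ml,
        })
    return report_items
```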
  • the classified feature data may include one or more outcome measures, clinical score estimates, or the like.
  • the classified feature data may include estimates of modified Rankin scale (mRS) values.
  • mRS modified Rankin scale
  • the report may also include prognostic data, as described above.
  • the report may include the probability of one or more future outcomes for the subject based on the detected SAH, the estimated SAH risk scores, and/or the estimated severity of the SAH.
  • one or more CDS and/or interventions based on the identified and risk stratified SAH may be determined and stored in the generated report.
  • the classified feature data may indicate a probability of delayed cerebral ischemia (DCI) in the subject. Additionally or alternatively, the classified feature data may indicate a probability of shunt dependency in the subject.
  • DCI delayed cerebral ischemia
  • the classified feature data may indicate a probability of adverse drug effects in a subject. For instance, higher blood volumes can create medication sensitivity and with certain genotypes this can lead to dose reduction of neuroprotective drugs, such as nimodipine.
  • the classified feature data can indicate individualized dose recommendations for a subject early to prevent hypotension or other adverse events.
  • the classified feature data may also be collected over a period of time to monitor efficacy of the recommended dose, such that the dose can be later adjusted once the blood and physiology improves.
  • the report generated by the systems and methods described in the present disclosure allows for rapid triaging of subjects with suspected SAH.
  • When one or more SAH volumes are detected, they may be risk stratified to highlight issues that require urgent attention by a clinician.
  • the report may also provide prognostic information for the clinician, including a list of potential CDS and/or interventions to be considered by the clinician.
  • the triaging provided by the systems and methods described in the present disclosure can help optimize transfers of patients in need of comprehensive stroke centers (CSCs) and/or thrombectomy-capable centers.
  • the generated report may be integrated with a triage communication system, such that critical ICU bed status can be monitored and cases that need intervention can be selected or otherwise highlighted based on the SAH risk stratification provided by the classified feature data. Additionally, the generated report can help triage futile care cases (e.g., massive pontine or brainstem hemorrhage) to avoid unnecessary non-operative neurosurgical transfers.
  • Referring to FIG. 5, a flowchart is illustrated as setting forth the steps of an example method for training one or more machine learning models on training data, such that the one or more machine learning models are trained to receive qvSAH data and patient data as input data in order to generate classified feature data as output data, where the classified feature data are indicative of detecting SAH, risk stratifying SAH, and/or the prognosis of detected SAH.
  • the machine learning model(s) can implement any number of different model architectures.
  • the machine learning model(s) may implement a tree-based model, such as a decision tree model, a random forest model, a boosting model, a gradient boosting model, or the like.
  • the machine learning model(s) may implement an artificial neural network, such as a convolutional neural network, a residual neural network, or the like.
  • the machine learning model(s) could be replaced with other suitable machine learning or artificial intelligence algorithms, such as those based on supervised learning, unsupervised learning, deep learning, ensemble learning, dimensionality reduction, and so on.
  • the method includes accessing training data with a computer system, as indicated at step 502. Accessing the training data may include retrieving such data from a memory or other suitable data storage device or medium.
  • the training data can include qvSAH data and patient health data collected from a group of subjects.
  • One or more machine learning models are trained on the training data, as indicated at step 504.
  • the machine learning model can be trained by optimizing model parameters.
  • optimizing the model parameters may include optimizing the features to be analyzed at each decision point in the tree-based model, as well as the criteria used to determine how to split the data. Training may proceed based on evaluating one or more metrics, such as estimate of positive correctness, Gini impurity, information gain, variance reduction, measure of “goodness,” and so on.
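  • A minimal sketch (assuming scikit-learn, with synthetic stand-in data and illustrative hyperparameters) of training a tree-based classifier on qvSAH volume plus clinical features using the Gini impurity split criterion:

```python
# Minimal sketch: train a tree-based (random forest) classifier on
# qvSAH volume plus clinical features to predict an outcome/risk label.
# Feature columns, labels, and hyperparameters are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # e.g., columns: qvSAH_mL, age, GCS, NLR
y = (X[:, 0] > 0).astype(int)        # synthetic stand-in outcome label

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200,
                               criterion="gini",   # split criterion
                               random_state=0)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
```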
  • Storing the machine learning model(s) may include storing model parameters, which have been computed or otherwise estimated by training the machine learning model(s) on the training data. Storing the machine learning model(s) may also include storing the particular model architecture to be implemented. For instance, data pertaining to a tree-based model architecture (e.g., number of leaf nodes, ordering of leaf nodes, connections between nodes, hyperparameters for the nodes and/or tree-based model) may be stored.
  • a tree-based model architecture, e.g., number of leaf nodes, ordering of leaf nodes, connections between nodes, hyperparameters for the nodes and/or tree-based model
  • analytical and other computational models may also be used to estimate qvSAH data.
  • these models may be used to estimate qvSAH data as a separate input to the machine learning models described above with respect to FIG. 4.
  • a cohort of 277 patients with SAH in the electronic health record (EHR) was analyzed. Demographics, medical history, clinical assessment, CT imaging, and hospital course were analyzed. Inclusion criteria for the study required initial non-contrast CT head imaging and SAH diagnosis to be completed within 24 hours of SAH ictus.
  • CT imaging data had 5 mm imaging slices or less or at least equivalent reformatting to measure data quantitatively.
  • Neuroimaging was performed to definitively diagnose and further relativize the extent of DCI, including delayed non-contrast CT (NCCT), computed tomography angiography (CTA), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), or digital subtraction angiography (DSA).
  • NCCT delayed non-contrast CT
  • CTA computed tomography angiography
  • MRI magnetic resonance imaging
  • MRA magnetic resonance angiography
  • DSA digital subtraction angiography
  • Criteria for exclusion included the absence of NCCT imaging, no diagnosis of SAH or associated disease (only intracerebral hemorrhage in the medical record, IVH or subdural hemorrhage without SAH) and traumatic SAH. Further, patients that died within 72 hours of admission were excluded from DCI analysis since they did not have imaging between 4-14 days post-SAH.
  • NCCT non-contrast cranial CT-images
  • SAH blood volume was measured in each of the 5 compartments as its maximal width/thickness (A) and length (B) for each cistern (FIG. 6), and (C) the vertical number of slices with visible SAH blood present was counted and then multiplied by the slice thickness in mm of the NCCT. These three variables were then input into ABC/2, as seen in FIG. 6, and simplified volumetric estimates were computed. The estimated SAH volumes were then summed to a total cisternal subarachnoid blood volume (CHV) of the 5 compartments.
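  • A minimal sketch of the simplified ABC/2 volumetric estimate and CHV summation described above; the function names and example numbers are illustrative only:

```python
# Minimal sketch of the ABC/2 estimate: per-cistern volume ~ (A * B * C) / 2,
# where A and B are the maximal width/thickness and length in mm, and C is
# the number of slices with visible blood times the slice thickness in mm.
# Per-compartment volumes are summed into the cumulative cisternal volume (CHV).
def abc_over_2(width_mm, length_mm, n_slices, slice_thickness_mm):
    c_mm = n_slices * slice_thickness_mm
    return (width_mm * length_mm * c_mm) / 2.0 / 1000.0   # mm^3 -> mL

def cisternal_hemorrhage_volume(measurements):
    """measurements: list of (A, B, n_slices, slice_thickness) per cistern."""
    return sum(abc_over_2(*m) for m in measurements)

# Example: one cistern measuring 10 x 40 mm over 4 slices of 5 mm thickness
# contributes 10 * 40 * 20 / 2 = 4000 mm^3 = 4 mL to the CHV.
```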
  • CHV total cisternal subarachnoid blood volume
  • Cisternal SAHV (CHV) in all patients was estimated using the sQV-SAH method.
  • Average CHV measured via sQV-SAH was 11.3 mL (95% CI, 9.58-12.85 mL), and average eCHV was 13.5 mL (95% CI, 11.51-15.55 mL).
  • the respective range was 125 mL, with a minimum of 0 mL (when no blood is present at the level of the cistern) and a maximum of 125 mL.
  • MM manual method to measure SAH volume
  • NCCT scan images were segmented and analyzed by a human investigator.
  • the MM of SAHV was considered the “ground truth” or “gold standard” to compare and evaluate against qvSAH measurements.
  • the volume measurements based on the CT data (Al-based method and MM) were linked and compared with available clinical data.
  • a cohort of 10 patients with aneurysmal SAH (aSAH) was examined. NCCT scans of the head performed for standard-of-care indications were utilized for analysis. When more than one CT scan was performed in a single day, the earliest NCCT performed that day was utilized. Among this cohort of 10 patients there were 7 females and 3 males, and the average age at SAH was 55 years (35 to 65 years). Admission weight was reported as an average of 92 kg (minimum 55 kg, maximum 120 kg).
  • Collected clinical variables included the Physiologic Derangement Scale at admission, external ventricular drainage placement, aneurysm surgery, angiographic VSP (transcranial Doppler ultrasound (TCD), computed tomography angiography (CTA), or both), day of VSP, severity of VSP, location of VSP if obtainable, management of VSP, symptomatic clinical VSP, new cerebral infarction on imaging, description of infarct, DCI, day of DCI, re-rupture/rebleeding, pre-listed modified Rankin Scale (mRS), mRS at discharge, mRS at 30 days, mRS at 90 days, SAH-associated disease, length of stay in hospital (LOS), length of stay in the intensive care unit, hypertension, diabetes mellitus, heart disease, pure motor hemiparesis stroke, smoker, alcoholic, and family history of aneurysm.
  • TCD Transcranial doppler ultrasound
  • CTA computed tomography angiography
  • mRS modified Rankin Scale
  • the pre-segmented files that marked every voxel within a specified Hounsfield Units (HU) range of 60 to 120 for each slice of the CT scan were generated programmatically.
  • a range of HU between 60 and 120 was chosen as the threshold because this is the range where SAH blood is visible on an NCCT scan. With this, the bleeding and artifacts could be visualized in every voxel between HU 60 and 120, which was marked red.
  • the contrast was set as minimum 0 and maximum 120 in ITK-Snap.
  • the NCCT scans were manually refined with ITK-Snap to exclude segmentations falling outside 8 defined neuroanatomical spaces: 5 cisternal spaces (suprasellar cistern, perimesencephalic cistern, prepontine cistern, sylvian cistern, interhemispheric cistern) and 3 additional neuroanatomical spaces relevant to SAH disease: intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), and brain gyral/sulcal spaces on each slice.
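  • A minimal sketch of the 60-120 HU pre-segmentation step described above, producing a candidate SAH mask that would then be manually refined within the defined neuroanatomical spaces; names are illustrative:

```python
# Minimal sketch: mark every voxel of the NCCT volume falling in the
# 60-120 HU window (where acute SAH blood is typically visible) as a
# candidate hemorrhage voxel, for subsequent anatomical refinement.
import numpy as np

def candidate_sah_mask(ct_hu, low=60, high=120):
    """ct_hu: 3D array of Hounsfield Units; returns a 0/1 candidate mask."""
    return np.logical_and(ct_hu >= low, ct_hu <= high).astype(np.uint8)
```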
  • IPH intraparenchymal hemorrhage
  • IVH intraventricular hemorrhage
  • the manual method therefore required manual labeling within these 8 pre-defined neuroanatomic spaces to produce the MM-SAHV, which included all 92 NCCT scans.
  • the final result - for instance, of the first CT scan of case one - is shown in FIG. 8 in 2-dimensional (upper images) and 3D (lower images) formats, called the SAHV-3D, also called the “SAHV-3D Brain Map”.
  • These 3D images can be rotated like any other reconstruction image used in neurosurgery or neurointervention to visualize patterns in 3 dimensions that are hard to visualize in standard 2D axial planes.
  • SAHVAI techniques described in the present disclosure were also performed on the cohort. Using both data sets of manual method (MM-SAHV) and SAHVAI volumes, the results were compared for measurement differences. Additionally, 2-dimensional and 3-dimensional SAHV brain maps (FIG. 9) were compared to visualize SAH blood in 3 dimensions, similar to Maximum Intensity Projection (MIPS) maps in radiology. In comparison to the manual method, which measured 8 spaces, the SAHVAI method focused on 7 major SAH anatomical blood spaces. This was based on an internal preliminary comparison of the two methods, which showed that the vast majority of SAH blood volume is contained within the basal cisterns and compartments.
  • MIPS Maximum Intensity Projection
  • the mean SAHV of day 0 (the day of the SAH ictus aneurysm rupture) was measured as 44.59 mL by SAHVAI versus 58.78 mL by the manual method.
  • the maximum SAHV measured for day 0 was 99.01 mL by SAHVAI and 141.5 mL by the SAHV manual method, both from the same case (case 5).
  • the minimum SAHV on day 0 was 9.69 mL measured by SAHVAI (case 9) and 12.8 mL by the manual method (case 10).
  • the standard deviation (SD) of day 1 of the MM exceeds the selected scale unit.
  • the SD of day 1 of the MM was calculated with 85.61.
  • case 2 had, on day 69 after the incident, a SAHV of 0.7 mL by AI and 1.3 mL by the manual method, and case 3 had, on day 47 after the SAH, a SAHV of 1.89 mL by AI and 10.75 mL by the manual method.
  • FIG. 11 shows the 2-dimensional SAHV Brain Map of cases 1, 5, and 8.
  • When SAHV is visualized in 3 dimensions, and over time (4 dimensions, or 4D SAHV), it has even greater predictive value. For example, as illustrated in the example SAHV-4D plot in FIG. 12, a case that had thick SAHV layered over the right cerebral convexity predicted major symptomatic vasospasm over the same hemisphere about 10-11 days later. Insights such as this can be highly advantageous to clinicians, since SAHV can be used as a “brain map” with which surgeons, neurologists, interventionalists, and radiologists can make 3D-4D risk prediction models and target interventions to remove SAH volume (using drains, lumbar and CSF irrigation, etc.).
  • Aneurysmal SAH (aSAH) is a devastating hemorrhagic stroke subtype that occurs in about 30,000 patients per year in the United States, with an estimated historical one-month mortality of 30-40%.
  • the high morbidity and mortality of SAH are due to both primary neurological injury as well as a cascade of secondary neurological injury that ensues from inflammation cascade in response to blood in the subarachnoid space, including cerebral vasospasm, delayed cerebral ischemia (DCI), hydrocephalus.
  • DCI delayed cerebral ischemia
  • Although several SAH grading and scoring systems have been proposed to predict outcomes for aSAH, they have limited predictive capability due to imprecise, semi-quantitative Fisher scale measurements of SAH blood volume.
  • mRS modified Rankin Scale
  • NCCT images were stored in either DICOM or NIfTI format.
  • SAHV total cisternal hemorrhagic blood volume
  • Discharge modified Rankin scale was used as outcome data and was dichotomized with mRS 0-3 as favorable outcome and 4-6 as unfavorable outcome.
  • independent variables were compared using the chi-square test or Student's t-test, as appropriate.
  • the outcome model was developed using multivariate logistic regression analysis with all possible prediction variables that would be available at the time of initial presentation (including gender, age, SAH volume, Glasgow Coma Scale (GCS), modified Fisher score (mFS), Hunt and Hess scale, presence of intraparenchymal hemorrhage, and presence of intraventricular hemorrhage). The analysis was then followed by stepwise elimination of variables not contributing to the model (0.05 significance level for entry into the model). First-order interactions were tested in the final model.
  • An outcome stratification model, the volumetrically enhanced Subarachnoid Hemorrhage (eSAH) score, was created with the variables in the final outcome model. Cut-off points of variables were chosen to produce a simple and intuitive model.
  • a DCI subscore was similarly calculated as a risk stratification model for DCI prediction, based on labeled DCI outcomes in the dataset using consensus DCI criteria.
  • a nonparametric two-sample Kolmogorov-Smirnov test was performed to test the distribution of outcome and in-hospital mortality with the eSAH score, and of DCI with the DCI subscore. Discriminative accuracy of the score was examined using the receiver operating characteristic curve and subsequent area-under-the-curve analysis.
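  • A minimal sketch of the statistical workflow described above, using SciPy and scikit-learn on entirely synthetic stand-in data (the real scores and outcomes are not reproduced here):

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic stand-ins: eSAH scores (0-5) and dichotomized discharge outcome
# (1 = unfavorable, mRS 4-6). Values are illustrative only.
esah_score = rng.integers(0, 6, size=200)
unfavorable = (esah_score + rng.normal(0, 1.5, size=200) > 3).astype(int)

# Two-sample Kolmogorov-Smirnov test: compare score distributions between outcome groups.
stat, p_value = ks_2samp(esah_score[unfavorable == 1], esah_score[unfavorable == 0])

# Discriminative accuracy via the ROC curve and area under the curve.
auc = roc_auc_score(unfavorable, esah_score)
fpr, tpr, thresholds = roc_curve(unfavorable, esah_score)
print(f"KS stat={stat:.3f}, p={p_value:.4f}, AUC={auc:.3f}")
```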
  • the eSAH score was created with cutoffs in age, GCS, and SAH cisternal hemorrhagic volume to create a simple risk stratification tool for predicting outcome and mortality.
  • DCI was associated with only two variables: GCS and SAHV.
  • the eSAH DCI subscore was calculated and derived from these two variables, GCS and SAHV.
  • the eSAH score is calculated by assigning the points allotted to each variable category and summing them.
  • Total eSAH score = GCS score + Age score + SAH Volume (SAHV) score.
  • the eSAH score therefore was calculated as a summation of individual points for each variable.
  • the eSAH score ranged from 0 to 5 and the eSAH DCI subscore ranged from 0 to 4 for subsequent risk of developing DCI.
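  • As an illustration of the summation above, a hedged sketch of an eSAH score calculator is shown below; the cut-off values are placeholders only, since the actual cut-offs come from the outcome model described in the disclosure:

```python
def esah_score(gcs: int, age: int, sahv_ml: float) -> int:
    """Sum of per-variable points: GCS + age + SAHV, per the formula above.

    The cut-off values below are placeholders for illustration only; the actual
    cut-offs are chosen from the outcome model described in the disclosure.
    """
    gcs_pts = 2 if gcs <= 8 else (1 if gcs <= 12 else 0)            # hypothetical cut-offs
    age_pts = 1 if age >= 70 else 0                                 # hypothetical cut-off
    sahv_pts = 2 if sahv_ml >= 50 else (1 if sahv_ml >= 20 else 0)  # hypothetical cut-offs
    return gcs_pts + age_pts + sahv_pts                             # ranges 0-5 as described


def esah_dci_subscore(gcs: int, sahv_ml: float) -> int:
    """DCI subscore uses only GCS and SAHV (range 0-4), per the bullets above."""
    gcs_pts = 2 if gcs <= 8 else (1 if gcs <= 12 else 0)
    sahv_pts = 2 if sahv_ml >= 50 else (1 if sahv_ml >= 20 else 0)
    return gcs_pts + sahv_pts


print(esah_score(gcs=7, age=74, sahv_ml=62.0))   # -> 5 with these placeholder cut-offs
```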
  • the eSAH score has the potential to triage and risk stratify SAH patients, similar to Hemphill's ICH score for ICH patient mortality, and to support stroke systems of care for these patients.
  • the eSAH score can be leveraged as a relative strength given the novel quantitative SAHV score, which measures a "dose-response" relationship of blood volume together with GCS and age.
  • the SAHV is a more precise way to quantify in milliliters (ml) the amount of blood compared to the older, Likert-like modified Fisher scale.
  • the methods described in the present disclosure can be used to generate accurate measurements of SAHV (e.g., qvSAH data), which can then be used to calculate an eSAH score for a subject.
  • the eSAH score data could be used as a risk stratification and SAH severity tool that could aid in the decision-making processes in tandem with clinician judgement.
  • the eSAH score could also aid the triage and transport of SAH patients from primary stroke center hospitals to comprehensive stroke center hospitals that have a dedicated neuro-intensive care unit for complex SAH management, as defined by the recent AHA SAH guideline.
  • eSAH score could be used in emergency departments similar to the ICH score by Hemphill to stage and document severity during the initial SAH presentation. Such eSAH score triage in the emergency department could lead to expeditious transfer to a higher-level stroke center with dedicated neurosurgical vascular and neuro-intensive care unit teams for SAH management.
  • the eSAH score could therefore help achieve a more equitable allocation of stroke-center resources across stroke networks of care, as recommended by the current AHA SAH guidelines.
  • the eSAH score could also potentially benefit future translational research and targeted interventions (e.g., neuroprotective drugs or minimally invasive approaches such as intraventricular calcium channel blocker drug injection) for SAH patients.
  • SAHV can be segmented and quantified from non-contrast CT (NCCT) scans using a machine learning model, which can then output a 3D brain volumetric map that depicts the three-dimensional spatial distribution of SAHV.
  • NCCT non-contrast CT
  • 4D brain volumetric data can be generated.
  • measurement of SAHV can be used to evaluate or otherwise monitor the use of ventricular irrigation and drainage systems that can expedite removal of SAH blood products. For instance, utilizing the SAHVAI model described in the present disclosure, the course of SAHV resolution over time can be generated for a subject receiving ventricular irrigation or drainage.
  • SAHVAI-3D brain maps can be generated to help visualize significant SAHV resolution patterns and predict vasospasm (e.g., by inputting the SAHVAI-3D maps to a trained machine learning model to generate classified feature data indicating the detection, prediction, and/or classification of vasospasm).
  • the SAHVAI framework described in the present disclosure was applied to SAH cases with mFS 3-4 using the NCCT scans among three groups.
  • Group A included 1 SAH patient treated with a ventricular irrigation system.
  • Group B included one SAH patient who presented with GCS 15 two days after ictus with no requirement for EVD.
  • Group C included 10 patients who underwent regular EVD placement per standard of care.
  • Group A showed expedited resolution of SAHV (1.87mL/day) with an mRS of 0 on discharge and minimal vasospasm.
  • Group B showed a 16 mL increase in SAHV, suspicious for aneurysmal rebleeding, on days 5-9, and the patient later died (mRS of 6).
  • Group C showed a reduction of SAHV of approximately 0.5 mL/day. Further, the resultant 3D brain maps revealed that areas with the highest density of blood concentration correlated with the severity and location of the vasospasm in all groups.
  • SAHVAI, SAHVAI-3D, and SAHVAI-4D are techniques capable of reliably quantifying SAHV blood volume and changes over time, including SAH blood resolution or rebleeding events.
  • SAHV expediting resolution (SAHVER) is a framework that shows how interventions such as ventricular irrigation can expedite SAHV resolution compared to passive EVD and non-CSF drainage groups.
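  • A hedged sketch of how an SAHV resolution rate (mL/day) could be estimated from serial SAHVAI measurements, for example by fitting a linear trend to the day-by-day volumes; the day/volume series below are illustrative and not the study data:

```python
import numpy as np

def sahv_resolution_rate(days, volumes_ml):
    """Estimate SAHV change per day (negative slope = resolution, positive = possible rebleed)."""
    slope, _intercept = np.polyfit(np.asarray(days, float), np.asarray(volumes_ml, float), deg=1)
    return slope

# Illustrative serial measurements only (not the study data).
irrigation = sahv_resolution_rate([0, 2, 4, 7], [40.0, 35.5, 32.0, 27.0])   # ~ -1.9 mL/day
passive_evd = sahv_resolution_rate([0, 2, 4, 7], [38.0, 37.0, 36.2, 34.6])  # ~ -0.5 mL/day
rebleed = sahv_resolution_rate([0, 5, 9], [30.0, 31.0, 46.0])               # positive slope suggests rebleeding

print(irrigation, passive_evd, rebleed)
```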
  • the SAHVAI framework described in the present disclosure can also be used to evaluate patients with acute hydrocephalus requiring ventriculoperitoneal (VP) shunt placement, which is a known complication after aneurysmal SAH.
  • VP ventriculoperitoneal
  • a predictive model, called CHECKMATE, utilizing a mathematical model of SAH volumetric (SAHV) blood on the initial CT scan, the CHESS score, the Glasgow Coma Scale (GCS), and other variables to predict the dichotomous outcome of VPS placement by hospital discharge was evaluated.
  • a combination model using GCS, CHESS score, and SAHV on initial CT scan called CHECKMATE may be a useful tool for predicting admission severity of illness and future need for EVD and ultimate VP shunt dependency during hospitalization.
  • GCS and CHESS score can be data included in the patient health data used in some embodiments described in the present disclosure (e.g., the method described with respect to FIG. 4).
  • the genotype-phenotype pattern of nimodipine was defined based on allelic variants (particularly for CYP3A4 and CYP3A5 subtypes) that convey the metabolic properties.
  • allelic combinations categorized the metabolizers into the following groups: extensive, intermediate, average, and poor.
  • Clinical outcome in this example study was defined as the patient's modified Rankin scale (mRS) score at hospital discharge.
  • the disclosed SAHVAI framework can be used as a tool to monitor the efficacy of drugs such as nimodipine, to recommend individualized doses based on patient pharmacogenomics, and to adjust doses based on patient response.
  • the SAHVAI framework can be used to generate classified feature data that can indicate a probability of adverse drug effects. For instance, higher blood volumes can create medication sensitivity, and with certain genotypes this can lead to dose reduction of neuroprotective drugs, such as nimodipine.
  • FIG. 13 shows an example of a system 1300 for detecting, risk stratifying, and determining prognostic data for SAH in accordance with some embodiments of the systems and methods described in the present disclosure.
  • a computing device 1350 can receive one or more types of data (e.g., medical imaging data, qvSAH data, patient health data) from data source 1302.
  • computing device 1350 can execute at least a portion of an SAH detection, risk stratification, and/or prognosis system 1304 to detect SAH, quantify SAH volume, risk stratify detected SAH volumes, and/or provide prognostic data for detected SAH volumes from data received from the data source 1302.
  • the computing device 1350 can communicate information about data received from the data source 1302 to a server 1352 over a communication network 1354, which can execute at least a portion of the SAH detection, risk stratification, and/or prognosis system 1304.
  • the server 1352 can return information to the computing device 1350 (and/or any other suitable computing device) indicative of an output of the SAH detection, risk stratification, and/or prognosis system 1304.
  • computing device 1350 and/or server 1352 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on.
  • the computing device 1350 and/or server 1352 can also reconstruct images from the data.
  • data source 1302 can be any suitable source of data (e.g., measurement data, images reconstructed from measurement data, processed image data), such as a medical imaging system, another computing device (e.g., a server storing measurement data, images reconstructed from measurement data, processed image data), and so on.
  • data source 1302 can be local to computing device 1350.
  • data source 1302 can be incorporated with computing device 1350 (e.g., computing device 1350 can be configured as part of a device for measuring, recording, estimating, acquiring, or otherwise collecting or storing data).
  • data source 1302 can be connected to computing device 1350 by a cable, a direct wireless link, and so on.
  • data source 1302 can be located locally and/or remotely from computing device 1350, and can communicate data to computing device 1350 (and/or server 1352) via a communication network (e.g., communication network 1354).
  • communication network 1354 can be any suitable communication network or combination of communication networks.
  • communication network 1354 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), other types of wireless networks, a wired network, and so on.
  • communication network 1354 can be a local area network, a wide area network, a public network (e.g., the Internet), and so on.
  • Communications links shown in FIG. 13 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
  • Referring now to FIG. 14, an example of hardware 1400 that can be used to implement data source 1302, computing device 1350, and server 1352 in accordance with some embodiments of the systems and methods described in the present disclosure is shown.
  • computing device 1350 can include a processor 1402, a display 1404, one or more inputs 1406, one or more communication systems 1408, and/or memory 1410.
  • processor 1402 can be any suitable hardware processor or combination of processors, such as a central processing unit (CPU), a graphics processing unit (GPU), and so on.
  • display 1404 can include any suitable display devices, such as a liquid crystal display (LCD) screen, a light-emitting diode (LED) display, an organic LED (OLED) display, an electrophoretic display (e.g., an "e-ink" display), a computer monitor, a touchscreen, a television, and so on.
  • inputs 1406 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 1408 can include any suitable hardware, firmware, and/or software for communicating information over communication network 1354 and/or any other suitable communication networks.
  • communications systems 1408 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 1408 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 1410 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1402 to present content using display 1404, to communicate with server 1352 via communications system(s) 1408, and so on.
  • Memory 1410 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 1410 can include random-access memory (RAM), read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), other forms of volatile memory, other forms of non-volatile memory, one or more forms of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • RAM random-access memory
  • ROM read-only memory
  • EPROM electrically programmable ROM
  • EEPROM electrically erasable ROM
  • memory 1410 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 1350.
  • processor 1402 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 1352, transmit information to server 1352, and so on.
  • the processor 1402 and the memory 1410 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2, the method of FIG. 4, the method of FIG. 5).
  • server 1352 can include a processor 1412, a display 1414, one or more inputs 1416, one or more communications systems 1418, and/or memory 1420.
  • processor 1412 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • display 1414 can include any suitable display devices, such as an LCD screen, LED display, OLED display, electrophoretic display, a computer monitor, a touchscreen, a television, and so on.
  • inputs 1416 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 1418 can include any suitable hardware, firmware, and/or software for communicating information over communication network 1354 and/or any other suitable communication networks.
  • communications systems 1418 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 1418 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory’ 1420 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1412 to present content using display 1414, to communicate with one or more computing devices 1350, and so on.
  • Memory 7 1420 can include any suitable volatile memory, non-volatile memory 7 , storage, or any suitable combination thereof.
  • memory 1420 can include RAM, ROM. EPROM. EEPROM, other types of volatile memory, other ty pes of non-volatile memory, one or more types of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 1420 can have encoded thereon a server program for controlling operation of server 1352.
  • processor 1412 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 1350, receive information and/or content from one or more computing devices 1350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
  • the server 1352 is configured to perform the methods described in the present disclosure.
  • the processor 1412 and memory 1420 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2, the method of FIG. 4, the method of FIG. 5).
  • data source 1302 can include a processor 1422, one or more data acquisition systems 1424, one or more communications systems 1426, and/or memory 1428.
  • processor 1422 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • the one or more data acquisition systems 1424 are generally configured to acquire data, images, or both, and can include a medical imaging system (e.g., a CT system, an MRI system). Additionally or alternatively, in some embodiments, the one or more data acquisition systems 1424 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of a medical imaging system (e.g., a CT system, an MRI system).
  • one or more portions of the data acquisition system(s) 1424 can be removable and/or replaceable.
  • data source 1302 can include any suitable inputs and/or outputs.
  • data source 1302 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on.
  • data source 1302 can include any suitable display devices, such as an LCD screen, an LED display, an OLED display, an electrophoretic display, a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
  • communications systems 1426 can include any suitable hardware, firmware, and/or software for communicating information to computing device 1350 (and/or server 1352).
  • communications systems 1426 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 1426 can include hardware, firmware, and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 1428 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1422 to control the one or more data acquisition systems 1424, and/or receive data from the one or more data acquisition systems 1424; to generate images from data; present content (e.g., data, images, a user interface) using a display; communicate with one or more computing devices 1350; and so on.
  • Memory 1428 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 1428 can include RAM, ROM, EPROM, EEPROM, other types of volatile memory, other types of non-volatile memory, one or more types of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 1428 can have encoded thereon, or otherwise stored therein, a program for controlling operation of data source 1302.
  • processor 1422 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 1350, receive information and/or content from one or more computing devices 1350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
  • any suitable computer-readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer-readable media can be transitory or non-transitory.
  • non-transitory computer-readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., RAM, flash memory, EPROM, EEPROM), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer-readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • a component may be, but is not limited to being, a processor device, a process being executed (or executable) by a processor device, an object, an executable, a thread of execution, a computer program, or a computer.
  • an application running on a computer and the computer can be a component.
  • One or more components may reside within a process or thread of execution, may be localized on one computer, may be distributed between two or more computers or other processor devices, or may be included within another component (or system, module, and so on).
  • devices or systems disclosed herein can be utilized or installed using methods embodying aspects of the disclosure.
  • description herein of particular features, capabilities, or intended purposes of a device or system is generally intended to inherently include disclosure of a method of using such features for the intended purposes, a method of implementing such capabilities, and a method of installing disclosed (or otherwise known) components to support these purposes or capabilities.
  • discussion herein of any method of manufacturing or using a particular device or system, including installing the device or system is intended to inherently include disclosure, as embodiments of the disclosure, of the utilized features and implemented capabilities of such device or system.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physiology (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Neurology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Epidemiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Cardiology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Databases & Information Systems (AREA)
  • Neurosurgery (AREA)
  • Psychology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Subarachnoid hemorrhage (SAH) is detected using a suitably trained machine learning model. The machine learning model may take medical images as an input. Additionally or alternatively, the machine learning model may take quantified SAH blood volume (qvSAH) data and patient health data as an input. In some examples, the machine learning model may generate classified feature data indicating a risk stratification, severity of illness, and/or prognosis. A report may be generated, which may include suggestions for clinical decision support, interventions based on the severity of illness, or triaging for the patient.

Description

SUBARACHNOID HEMORRHAGE DETECTION AND RISK STRATIFICATION WITH MACHINE LEARNING-BASED ANALYSIS OF MEDICAL IMAGES
BACKGROUND
[0001] Aneurysmal subarachnoid hemorrhage (SAH) is bleeding in the space between the brain and the surrounding membrane (i.e., the subarachnoid space and subarachnoid lymphatic-like membrane (SLYM)). Aneurysmal SAH is a medical/neurosurgical emergency that historically carried a 30-40% one-month mortality. SAH is currently diagnosed using noncontrast CT (NCCT), followed by neurosurgical interventions such as aneurysm clipping or coiling, and external ventricular drains to reduce intracranial pressure. There is an urgent unmet patient need for a faster SAH NCCT diagnosis that can accelerate earlier clinical and neurosurgical interventions proven to improve patient outcomes, and a model that also adds value to both diagnostic NCCT head data using contemporary machine-learning (ML), deep learning, and multi-modal ML data.
[0002] Despite decades of research, there remain several unmet patient needs for aneurysmal SAH patients, including an accelerated, time-based, diagnostic-to-therapeutic intervention(s) approach based on a characteristic head CT volumetric signal; and there is a lack of prognostic precision for patients using the admission NCCT head data compared to the antiquated modified Fisher scale (0-4), which neither predicts clinical outcomes on the modified Rankin scale or delayed cerebral ischemia (DCI), nor predicts the future need for permanent indwelling ventriculoperitoneal shunt (VPS) dependency after external ventricular drain (EVD) placement.
SUMMARY OF THE DISCLOSURE
[0003] It is an aspect of the present disclosure to provide a method for assessing subarachnoid hemorrhage in a patient based on medical imaging data. The method includes accessing medical imaging data with a computer system, where the medical imaging data have been acquired from a patient. A machine learning model is also accessed with the computer system, where the machine learning model has been trained on training data to detect and assess subarachnoid hemorrhage based on medical images. The medical imaging data are input to the machine learning model with the computer system, generating classified feature data as an output. The classified feature data indicate at least one of SAH detection, SAH risk stratification, or SAH prognosis for the patient. A report is generated with the computer system using the classified feature data, where the report indicates one or more of the SAH detection, SAH risk stratification, or SAH prognosis based on the medical imaging data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 is a flowchart setting forth the steps of an example method for quantifying subarachnoid hemorrhage (SAH) blood volume from medical images using a neural network model.
[0005] FIG. 2 is a flowchart setting forth the steps of an example method for training a neural network model to quantify SAH blood volume from medical images.
[0006] FIGS. 3A and 3B show examples of manually annotated and automatically segmented medical images indicating probable SAH regions.
[0007] FIG. 4 is a flowchart setting forth the steps of an example method for assessing SAH in a patient using a machine learning model.
[0008] FIG. 5 is a flowchart setting forth the steps of an example method for training a machine learning model to assess SAH in a patient based on quantified SAH blood volume (qvSAH) data and patient health data.
[0009] FIG. 6 shows (a) multiple NCCT imaging slices of a patient with aSAH. There are visible hyperdensities in (b) multiple cisternal compartments, marked with lines to determine the length and width of the respective hemorrhage in mm. Measurement of width, thickness, and vertical extension (number of CT slices with visible hemorrhage) was done in consideration of cisternal anatomy. These metric variables were then entered into (c) a simplified quantitative volumetric formula (simplified volume equation) to measure the hemorrhagic volume in each anatomical structure. Each volume was then summed to a cumulative total cisternal subarachnoid hemorrhage volume (CHV). If intraparenchymal hematoma or intraventricular hemorrhage was present, it was added to the CHV and the total referred to as external cisternal hemorrhage volume (eCHV).
[0010] FIG. 7 shows (a) and (d) mean volumes based on dichotomization into good outcome (mRS 0-3) and poor outcome (mRS 4-6) and the respective method (M1/M2) applied to estimate these volumes; (b) shows the ROC curve of CHV on outcome at discharge with AUC = 0.78 (95% CI, 0.72-0.84) and the ROC curve of eCHV on outcome at discharge with AUC = 0.82 (95% CI, 0.78-0.89); (c) shows the ROC curve of the modified Fisher scale (mFS) on outcome at discharge with AUC = 0.69 (95% CI, 0.58-0.81); (e) shows the ROC curve of CHV on DCI with AUC = 0.71 (95% CI, 0.62-0.79) and of eCHV on DCI with AUC = 0.71 (95% CI, 0.63-0.79); (f) shows the ROC curve of mFS on presence of DCI with AUC = 0.69 (95% CI, 0.58-0.81).
[0011] FIG. 8 shows the end result of an example manual method in 2-dimensional (a, b, c) and 3-dimensional (d, e, f) subarachnoid hemorrhage volume, the SAHV-3D Brain Map. This SAHV-3D Brain Map is from the first case on day 0 (incident day of SAH). Segmentation of eight spaces (five cisternal spaces, Intraparenchymal Hemorrhage (IPH), Intraventricular Hemorrhage (IVH), and gyral/sulcal spaces) in red equals blood. Planes: axial (a); sagittal (b); coronal (c); View from axial (d); sagittal (e); coronal (f).
[0012] FIG. 9 shows the SAHVAI end result of SAHV in 2-dimensional (a, b, c) and 3-dimensional (d, e, f) SAHV-3D Brain Map formats. This SAHVAI-3D Brain Map is from the first case on day 0 (incident day of SAH). Segmentation of five cisternal spaces in red equals blood. Planes: axial (a); sagittal (b); coronal (c); View from axial (d); sagittal (e); coronal (f).
[0013] FIG. 10 shows a plot of the mean and standard deviation of the SAHVAI (dark blue) and manual (light blue) methods used to measure the quantitative SAH volume (SAHV) over time for an example patient cohort (n=10), day by day.
[0014] FIG. 11 shows the 2-dimensional SAHV Brain Map of three cases (Case 1, Case 5, and Case 8) for the manual (a, b, c) versus SAHVAI (d, e, f) methods from an example study. All planes are axial. The eight spaces segmented by the MM are colored red as SAH blood, whereas the SAHVAI method labeled five cisternal spaces in red, which equals the SAH blood. Each NCCT scan is labeled with an overall opacity of 50%.
[0015] FIG. 12 shows an example SAHV-4D graph. Quantitative SAH volume (left y-axis) measured with the SAHVAI and manual methods over time (x-axis). Right y-axis: Modified Rankin Scale. Neurological complications and vasospasms are displayed with diagnostic procedures, grading, and intervention in this graph as straight lines (=4-dimensional graph). Abbreviations: AI = artificial intelligence; CTA = Computed tomography angiography; TCD = Transcranial doppler ultrasound (Grading of the severity of vasospasm using TCD - Middle cerebral artery: Normal = MFV (mean flow velocity) <120 cm/s, Lindegaard Ratio <3; Mild vasospasm = MFV 120-150 cm/s, Lindegaard Ratio 3-4.5; Moderate vasospasm = MFV 150-200 cm/s, Lindegaard Ratio 4.5-6.0; Severe vasospasm = MFV >200 cm/s, Lindegaard Ratio >6); DSA = Digital subtraction angiography; x = no vasospasm; * = intra-arterial infusion of 15 mg of Verapamil in the right internal carotid artery; *2 = Intrathecal (IT)/intraventricular Nicardipine 4 mg every 12 hours (3/7/22 till 3/9/22).
[0016] FIG. 13 is a block diagram of an example system for SAH detection, risk stratification, and prognosis.
[0017] FIG. 14 is a block diagram of example components that can implement the system of FIG. 13.
[0001] FIG. 15 is a schematic showing the current state of SAH patient care, with delays in SAH recognition, lack of quantified precision measurement of SAHV blood, and early activation of neurosurgical interventions. The bottom part of the image shows the integrated SAHVAI system, which integrates a SAH detection system for presence or absence of SAH blood, an automated segmentation of the qv-SAH (SAHV) in mL, integration with Electronic Medical Record (EMR) variables, and a reporting and communication platform among stroke teams to rapidly accelerate patient care interventions for this extremely time-sensitive stroke disease.
[0002] FIG. 16 is a diagram showing how multimodal AI can use SAHVAI with other multi-omics data available in the EMR, such as clinical notes data, physiologic data (blood pressure and ECG) with phenotypic data, and pathological data, to make predictive analytics and generate models of clinical outcomes and delayed cerebral ischemia, and, when combined with existing pharmacogenomics data and physiological and EMR data, predictions about drug responsiveness.
DETAILED DESCRIPTION
[0003] Described here are systems and methods for automatically detecting aneurysmal subarachnoid hemorrhage (SAH) using a suitably trained machine learning model. In this way, the disclosed systems and methods provide for the automatic detection and/or recognition of SAH. Additionally or alternatively, the disclosed systems and methods may segment and quantify hemorrhage for risk stratification. For instance, the disclosed systems and methods may provide clinical lab values and/or automated segmentation (e.g., volume segmentation) for risk stratification, determining severity of illness, and so on. In still other examples, the disclosed systems and methods may predict future clinical outcomes (e.g., prognosis) at the point-of-care and may suggest clinical-decision support (CDS) and/or interventions based on the determined severity of illness.
[0004] In some implementations, the machine learning models utilized by the disclosed systems and methods may provide a threshold for unfavorable patient outcomes or future predicted patient central nervous system (CNS) events and outcomes. As one non-limiting example, the threshold for such possible future unfavorable patient outcomes overall and/or CNS-specific outcomes may be when CHV (and/or CHV and eCHV) is more than 10 mL for total hemorrhage volume.
[0005] It is an advantage of the disclosed systems and methods to provide a significant reduction in the total time required for a clinical workflow from SAH onset to neurosurgical intervention. For example, the existing total time for such a clinical workflow may be reduced by up to about 50%.
[0006] In one aspect, a non-machine learning-based model may quantify SAH blood (qvSAH) using linear measurements for low resource areas (e.g., non-stroke centers). This model may be validated by comparing to a more time-intensive segmentation model, such as a segmentation model based on using RIL-Contour (or similar segmentation software methods) processing of CT images. This second model approach may be considered a “ground truth” for actual qv-SAH blood volume in milliliters (mL) compared to the simplified linear model or estimates of volume.
[0007] In another aspect, a machine learning model may automate determining qvSAH volumes. The machine learning model may also be linked to clinical outcomes by 30 days. Thus, the machine learning model described in the present disclosure may segment portions of the brain from medical imaging data (e.g., CT images), quantify SAH blood volume, and correlate the estimated qvSAH data that are associated with eventual, or otherwise probable, patient outcomes. For example, the estimated qvSAH data may be correlated to a modified Rankin scale, radiographic and/or symptomatic vasospasm, delayed cerebral ischemia (DCI) outcomes, or other such scores or outcomes. This machine learning model may be referred to as a subarachnoid hemorrhage volumetric artificial intelligence (SAH-VAI) model.
[0008] Referring now to FIG. 1, a flowchart is illustrated as setting forth the steps of an example method for generating quantified volumetric SAH data (e.g., qvSAH data, or alternatively SAHV or SAH volume) using a suitably trained neural network or other machine learning model. As will be described, the neural network or other machine learning model takes medical imaging data (e.g., CT imaging data, MRI imaging data) as input data and generates qvSAH data as output data. As an example, the qvSAH data may include segmented regions of the medical imaging data that are associated with a detected SAH. Additionally or alternatively, the qvSAH data may include quantified volume measurements for each detected SAH region.
[0009] The method includes accessing medical imaging data with a computer system, as indicated at step 102. Accessing the medical imaging data may include retrieving such data from a memory or other suitable data storage device or medium, such as a picture archiving and communication system (PACS), or the like. Additionally or alternatively, accessing the medical imaging data may include acquiring such data with a medical imaging system and transferring or otherwise communicating the data to the computer system, which may be a part of the medical imaging system. As one example, the medical imaging data may be CT imaging data acquired with a CT system. For instance, the CT imaging data may include CT images acquired from a subject. As another example, the medical imaging data may be MRI imaging data acquired with an MRI system. For instance, the MRI imaging data may include MRI images acquired from a subject. Additionally or alternatively, the medical imaging data may be acquired with other imaging systems or cloud-based imaging platforms, such as other platforms with imaging information similar to CT imaging, MRI imaging, or other biomedical imaging. In some examples, the medical imaging data may include medical images, which may be in a DICOM format.
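As a hedged illustration of this step (not the disclosed implementation), an NCCT volume stored as NIfTI or as a DICOM series could be loaded as sketched below; the file path in the commented example call is hypothetical:

```python
import numpy as np

def load_nifti_volume(path):
    """Load an NCCT volume stored as NIfTI and return the voxel array plus voxel size (mm)."""
    import nibabel as nib
    img = nib.load(path)
    voxel_size_mm = img.header.get_zooms()[:3]
    return np.asanyarray(img.dataobj), voxel_size_mm

def load_dicom_series(paths):
    """Load a list of single-slice DICOM files into one 3D array in Hounsfield units."""
    import pydicom
    slices = [pydicom.dcmread(p) for p in paths]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # sort along the slice axis
    hu = np.stack([s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
                   for s in slices])
    return hu

# volume, voxel_mm = load_nifti_volume("example_ncct.nii.gz")  # hypothetical path
```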
[0010] A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system, as indicated at step 104. In general, the neural network is trained, or has been trained, on training data in order to generate qvSAH data from medical imaging data.
[0011] Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data. In some instances, retrieving the neural network can also include retrieving, constructing, or otherwise accessing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be retrieved, selected, constructed, or otherwise accessed.
[0012] An artificial neural network generally includes an input layer, one or more hidden layers (or nodes), and an output layer. Typically, the input layer includes as many nodes as inputs provided to the artificial neural network. The number (and the type) of inputs provided to the artificial neural network may vary based on the particular task for the artificial neural network.
[0013] The input layer connects to one or more hidden layers. The number of hidden layers varies and may depend on the particular task for the artificial neural network. Additionally, each hidden layer may have a different number of nodes and may be connected to the next layer differently. For example, each node of the input layer may be connected to each node of the first hidden layer. The connection between each node of the input layer and each node of the first hidden layer may be assigned a weight parameter. Additionally, each node of the neural network may also be assigned a bias value. In some configurations, each node of the first hidden layer may not be connected to each node of the second hidden layer. That is, there may be some nodes of the first hidden layer that are not connected to all of the nodes of the second hidden layer. The connections between the nodes of the first hidden layer and the second hidden layer are each assigned different weight parameters. Each node of the hidden layer is generally associated with an activation function. The activation function defines how the hidden layer is to process the input received from the input layer or from a previous input or hidden layer. These activation functions may vary and be based on the type of task associated with the artificial neural network and also on the specific type of hidden layer implemented.
[0014] Each hidden layer may perform a different function. For example, some hidden layers can be convolutional hidden layers which can, in some instances, reduce the dimensionality of the inputs. Other hidden layers can perform statistical functions such as max pooling, which may reduce a group of inputs to the maximum value; an averaging layer; batch normalization; and other such functions. In some of the hidden layers each node is connected to each node of the next hidden layer, which may be referred to then as dense layers. Some neural networks including more than, for example, three hidden layers may be considered deep neural networks.
[0015] The last hidden layer in the artificial neural network is connected to the output layer. Similar to the input layer, the output layer typically has the same number of nodes as the possible outputs.
[0016] In an example in which the artificial neural network generates qvSAH data that indicate the detection and/or quantified volumes of SAH in medical imaging data, the output layer may include, for example, a number of different nodes, where each different node corresponds to a different region of the medical imaging data that has been identified as being consistent with a detected or probable SAH. Alternatively, the output layer may include outputting a single SAH map that indicates multiple spatial locations having been identified as detected or probable SAH. In some instances, the output layer may also output a quantified volume measurement for each detected SAH region.
[0017] The medical imaging data are then input to the trained neural network, or other machine learning model, generating output as qvSAH data, as indicated at step 106. For example, as described above, qvSAH data may include one or more detected regions of SAH in the medical imaging data in addition to quantified volume measurements of each detected SAH region. In these instances, the qvSAH data may include an SAH map that indicates regions in the medical imaging data that are consistent with SAH. Additionally or alternatively, the SAH map may include a quantified volume measurement of each detected SAH. As described in the present disclosure, the quantified SAH volume can be used as additional information by a clinician to risk stratify the subject and assess potential CDS and/or interventions. Advantageously, the disclosed systems and methods are capable of detecting both subarachnoid hemorrhage bleeding and rebleeding. In this way, subtle rebleeding can also be detected in the qvSAH data before patients clinically deteriorate.
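One concrete way the quantified volume measurement could be derived from a segmentation output is sketched below; this is an illustrative assumption about the post-processing, not a description of the trained model itself. The conversion simply multiplies the number of positive voxels by the voxel volume:

```python
import numpy as np

def sah_volume_ml(mask: np.ndarray, voxel_size_mm) -> float:
    """Convert a binary SAH segmentation mask into blood volume in mL.

    mask          : 3D array of 0/1 voxels output by a segmentation model
    voxel_size_mm : (dx, dy, dz) spacing of the NCCT volume in millimetres
    """
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0   # 1 mL = 1000 mm^3

# Example with a synthetic mask: 20,000 positive voxels at 0.5 x 0.5 x 5 mm spacing.
mask = np.zeros((40, 512, 512), dtype=np.uint8)
mask[10:20, 200:300, 200:220] = 1
print(sah_volume_ml(mask, (0.5, 0.5, 5.0)))   # -> 25.0 mL
```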
[0018] The qvSAH data generated by inputting the medical imaging data to the trained neural network can then be used to generate a report that is displayed to a user, stored for later use or further processing, or both, as indicated at step 108.
[0019] In some examples, the report may include one or more images or maps. For instance, the report may include overlaying an SAH map on the medical imaging data to identify the regions of probable SAH in the medical imaging data. Additionally or alternatively, the report may include the quantified volume measurement of each detected SAH. Such data or maps could be used to create 3D reconstructions of qvSAH topography for associated overlaying with other biomedical imaging data, such as brain and functional neuroanatomy mapping. Additionally or alternatively, the data or maps could be correlated with additional neuroimaging data such as non-contrast CT, CT angiogram, CT perfusion, MRI, diffusion MRI, diffusion tensor imaging (DTI) and/or tractography, perfusion MRI, MR angiogram, and/or MR vessel wall-imaging data.
[0020] In some other examples, the qvSAH data can be used to generate one or more score values for the subject. As an example, an enhanced SAH (eSAH) score can be calculated using the qvSAH data and other patient health data, as described below in more detail. The eSAH score can be provided as part of the report generated in step 108.
[0021] Referring now to FIG. 2, a flowchart is illustrated as setting forth the steps of an example method for training one or more neural networks (or other suitable machine learning algorithms) on training data, such that the one or more neural networks are trained to receive medical imaging data as input data in order to generate qvSAH as output data.
[0022] In general, the neural network(s) can implement any number of different neural network architectures. For instance, the neural network(s) could implement a convolutional neural network, a residual neural network, or the like. Use of recurrent neural networks and multimodal methods of deep learning and machine learning methods with different architectures that use artificial neural networks may also be utilized. Alternatively, the neural network(s) could be replaced with other suitable machine learning or artificial intelligence algorithms, such as those based on supervised learning, unsupervised learning, deep learning, ensemble learning, dimensionality reduction, and so on.
[0023] The method includes accessing training data with a computer system, as indicated at step 202. Accessing the training data may include retrieving such data from a memory or other suitable data storage device or medium. Alternatively, accessing the training data may include acquiring such data with a medical imaging system and transferring or otherwise communicating the data to the computer system.
[0024] In general, the training data can include medical imaging data (e.g., CT images, MRI images) that have been annotated to identify regions associated with SAH. Additionally, the training data may include annotations that indicate a quantified volume measurement of each labeled SAH region. FIGS. 3A and 3B show examples of CT images that have been annotated to identify probable SAH regions (left) and output SAH maps (right) that indicate probable SAH regions identified by the systems and methods described in the present disclosure.
[0025] The method can include assembling training data from medical imaging data using a computer system. This step may include assembling the medical imaging data into an appropriate data structure on which the neural network or other machine learning algorithm can be trained. Assembling the training data may include assembling medical images, segmented medical images, and other relevant data. For instance, assembling the training data may include generating labeled data and including the labeled data in the training data. Labeled data may include medical images, segmented medical images, or other relevant data that have been labeled as belonging to, or otherwise being associated with, one or more different classifications or categories. For instance, labeled data may include medical images and/or segmented medical images that have been labeled as containing one or more SAH regions, labeled with a quantified volume measurement of labeled SAH regions, and so on.
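A minimal sketch of such an assembled training structure is shown below; the file names and volume labels are hypothetical and serve only to illustrate pairing images with their annotations:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LabeledExample:
    """One training example: an NCCT volume path paired with its annotated SAH mask path."""
    image_path: str
    mask_path: str
    sah_volume_ml: float        # quantified volume label for the annotated SAH regions

def assemble_training_data(triplets: List[Tuple[str, str, float]]) -> List[LabeledExample]:
    """Collect labeled (image, mask, volume) triplets into a simple training structure."""
    return [LabeledExample(img, msk, vol) for img, msk, vol in triplets]

# Hypothetical file names for illustration only.
training_data = assemble_training_data([
    ("case01_day0_ncct.nii.gz", "case01_day0_sah_mask.nii.gz", 44.6),
    ("case02_day0_ncct.nii.gz", "case02_day0_sah_mask.nii.gz", 12.8),
])
```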
[0026] One or more neural networks (or other suitable machine learning algorithms) are trained on the training data, as indicated at step 204. In general, the neural network can be trained by optimizing network parameters (e.g., weights, biases, or both) based on minimizing a loss function. As one non-limiting example, the loss function may be a mean squared error loss function.
[0027] Training a neural network may include initializing the neural network, such as by computing, estimating, or otherwise selecting initial network parameters (e.g., weights, biases, or both). During training, an artificial neural network receives the inputs for a training example and generates an output using the bias for each node, and the connections between each node and the corresponding weights. For instance, training data can be input to the initialized neural network, generating output as qvSAH data. The artificial neural network then compares the generated output with the actual output of the training example in order to evaluate the quality of the qvSAH data. For instance, the qvSAH data can be passed to a loss function to compute an error. The current neural network can then be updated based on the calculated error (e.g., using backpropagation methods based on the calculated error). For instance, the current neural network can be updated by updating the network parameters (e.g., weights, biases, or both) in order to minimize the loss according to the loss function. The training continues until a training condition is met. The training condition may correspond to, for example, a predetermined number of training examples being used, a minimum accuracy threshold being reached during training and validation, a predetermined number of validation iterations being completed, and the like. When the training condition has been met (e.g., by determining whether an error threshold or other stopping criterion has been satisfied), the current neural network and its associated network parameters represent the trained neural network. Different types of training processes can be used to adjust the bias values and the weights of the node connections based on the training examples. The training processes may include, for example, root mean squared error, information loss (e.g., entropy) methods, cross-entropy loss, gradient descent, optimal mass transport methods, Newton's method, conjugate gradient, quasi-Newton, Levenberg-Marquardt, among others.
[0028] The artificial neural network can be constructed or otherwise trained based on training data using one or more different learning techniques, such as supervised learning, unsupervised learning, reinforcement learning, ensemble learning, active learning, transfer learning, or other suitable learning techniques for neural networks. As an example, supervised learning involves presenting a computer system with example inputs and their actual outputs (e.g., categorizations). In these instances, the artificial neural network is configured to learn a general rule or model that maps the inputs to the outputs based on the provided example input-output pairs.
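For illustration only, a minimal PyTorch-style training loop on synthetic data is sketched below; it shows the forward pass, loss computation, backpropagation, and parameter update described above, and is not the actual SAHVAI architecture or training configuration:

```python
import torch
import torch.nn as nn

# Tiny stand-in network and synthetic data; the actual SAHVAI architecture is not shown here.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=1), nn.Sigmoid(),     # per-voxel SAH probability
)
loss_fn = nn.MSELoss()                                # mean squared error, as one example loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(4, 1, 16, 64, 64)                 # synthetic "NCCT" batch
masks = (torch.rand(4, 1, 16, 64, 64) > 0.9).float()  # synthetic SAH labels

for epoch in range(5):                                # stops after a fixed number of iterations
    optimizer.zero_grad()
    pred = model(images)
    loss = loss_fn(pred, masks)                       # compare output with labeled qvSAH data
    loss.backward()                                   # backpropagate the error
    optimizer.step()                                  # update weights and biases
    print(epoch, loss.item())
```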
[0029] The one or more trained neural networks are then stored for later use, as indicated at step 206. Storing the neural network(s) may include storing network parameters (e.g., weights, biases, or both), which have been computed or otherwise estimated by training the neural network(s) on the training data. Storing the trained neural network(s) may also include storing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be stored.
[0030] Referring now to FIG. 4, a flowchart is illustrated as setting forth the steps of an example method for generating classified feature data using one or more suitably trained machine learning models, such as tree-based models (e.g., a decision tree model, a random forest model, a boosting model, a gradient boosting model). As will be described, the machine learning model(s) take qvSAH data and other patient health data as input data and generate classified feature data as output data. As an example, the classified feature data can be indicative of detecting SAH, which may include subarachnoid hemorrhage bleeding and/or rebleeding. Additionally or alternatively, the classified feature data may include risk scores that indicate a risk stratification of detected SAH. Accordingly, the classified feature data may indicate a severity of illness for the subject. The classified feature data may also indicate a prognosis for a subject based on a detected SAH. For example, the classified feature data may indicate a probability of one or more future outcomes for the subject based on the detected SAH, the estimated SAH risk scores, and/or the estimated severity of the SAH. Based on the prognostic data, one or more CDS and/or interventions based on the identified and risk stratified SAH may be determined.
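As a hedged sketch of this kind of model (not the disclosed implementation), a tree-based classifier could take qvSAH volume together with a few patient health features and output a risk probability; the features, labels, and threshold rule below are entirely synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Synthetic feature table: [SAHV (mL), GCS, age, NLR]; labels = unfavorable outcome (1) or not (0).
# Entirely illustrative; real inputs would come from qvSAH data and the patient health record.
X = np.column_stack([
    rng.uniform(0, 120, 300),       # qvSAH volume in mL
    rng.integers(3, 16, 300),       # Glasgow Coma Scale
    rng.integers(18, 90, 300),      # age
    rng.uniform(1, 25, 300),        # neutrophil-to-lymphocyte ratio
])
y = (X[:, 0] > 50) & (X[:, 1] < 10)                   # toy rule standing in for labeled outcomes
y = (y | (rng.random(300) < 0.05)).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
risk = clf.predict_proba([[62.0, 7, 74, 13.0]])[0, 1]  # probability of unfavorable outcome
print(f"predicted risk: {risk:.2f}")
```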
[0031] The method includes accessing qvSAH data and other patient health data with a computer system, as indicated at step 402. Accessing the qvSAH and other multi-modal patient health data may include retrieving such data from a memory or other suitable data storage device or medium. Additionally or alternatively, accessing the qvSAH data may include generating such data with the computer system (e.g., using the methods described above) and transferring or otherwise communicating the data to the computer system. In some examples, the qvSAH data may be estimated by the machine learning model, such that separate qvSAH data need not be accessed by the computer system. In these instances, the patient health data may include medical imaging data, as described below, and the machine learning model may be trained to estimate qvSAH data from the medical imaging data contained in the patient health data.
[0032] The patient health data may include unstructured data and/or structured data such as patient demographics, diagnoses, procedures, lab results, histopathology data, medications, vital signs, genetic sequencing, medical imaging, and other clinical observations.
[0033] The lab results may include blood test results, such as blood test results measured from a complete blood count (CBC) or the like. As one non-limiting example, the lab results in the patient health data may include the neutrophil-to-lymphocyte ratio (NLR) measured from CBC, other surrogate peripheral blood markers of SAH and/or systemic inflammation, or other markers that indicate SAH and/or systemic inflammation. For example, an NLR equal to or greater than 12.5 at admission may predict a higher inpatient mortality in patients with aneurysmal SAH. Other examples of clinical laboratory data and/or histopathology data can include genetic testing and laboratory information, such as performance scores, lab tests, pathology results, prognostic indicators, date of genetic testing, testing method used, and so on. In some instances, the lab results may also include measures of cerebrospinal fluid (CSF) output into an external ventricular drain (EVD) system, which can be visually inspected for the density or darkness of red color of bloody effluent and can also be assessed to cross-correlate with the SAHV/qvSAH blood volume being drained. The trend or decrease of SAHV on NCCT over time can be measured by SAHVAI and can be correlated with a reciprocal change in the density of blood products drained in the EVD or similar CSF irrigation systems. Such CSF SAHV blood output can be visually estimated, or may be measured using spectrophotometry or chromatographic methods similar to measurement of xanthochromia.
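The NLR example above is a simple ratio; a short illustrative computation (with hypothetical CBC values) is:

```python
def neutrophil_lymphocyte_ratio(neutrophils_k_per_ul: float, lymphocytes_k_per_ul: float) -> float:
    """NLR from a complete blood count (absolute counts in 10^3 cells/uL)."""
    return neutrophils_k_per_ul / lymphocytes_k_per_ul

nlr = neutrophil_lymphocyte_ratio(13.8, 1.0)   # illustrative admission CBC values only
high_risk = nlr >= 12.5                        # threshold cited above for higher inpatient mortality
print(f"NLR = {nlr:.1f}, high-risk flag: {high_risk}")
```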
[0034] The patient health data may also include other clinical severity of illness scales and observations commonly used and documented in the medical record, such as measurements of the Glasgow Coma Scale, FOUR score, World Federation of Neurological Surgeons scale, and the like.
[0035] Features derived from structured, curated, and/or EHR data may include clinical features such as diagnoses; symptoms; therapies; outcomes; patient demographics, such as patient name, date of birth, gender, and/or ethnicity; diagnosis dates for cancer, illness, disease, or other physical or mental conditions; personal medical history; family medical history; clinical diagnoses, such as date of initial diagnosis, date of metastatic diagnosis, cancer staging, tumor characterization, and tissue of origin; and the like. Additionally, the patient health data may also include features such as treatments and outcomes, such as line of therapy, therapy groups, clinical trials, medications prescribed or taken, surgeries, radiotherapy, imaging, adverse effects, and associated outcomes.
[0036] Patient health data can include a set of clinical features associated with information derived from clinical records of a patient, which can include records from family members of the patient. These clinical features and data may be abstracted from unstructured clinical documents, EHR, or other sources of patient history. Such data may include patient symptoms, diagnosis, treatments, medications, therapies, responses to treatments, laboratory testing results, medical history, geographic locations of each, demographics, or other features of the patient which may be found in the patient’s EHR.
[0037] In some instances, patient health data can include medical imaging data, which may include images of the patient obtained with one or more different medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), x-ray imaging, positron emission tomography (PET), ultrasound, and so on. The medical imaging data may also include parameters or features computed or derived from such images. Medical imaging data may also include digital pathology images, such as H&E slides, IHC slides, and the like. The medical imaging data may also include data and/or information from pathology and radiology reports, which may be ordered by a physician during the course of diagnosis and treatment of various illnesses and diseases.
[0038] In some instances, the patient health data can include one or more types of omics data and/or multimodal omics data, such as genomics data, pharmacogenomics data, proteomics data, transcriptomics data, epigenomics data, metabolomics data, microbiomics data, and other multiomics data types. The patient health data can additionally or alternatively include patient geographic data, demographic data, and the like. In some instances, the patient health data can include information pertaining to diagnoses, responses to treatment regimens, genetic profiles, clinical and phenotypic characteristics, and/or other medical, geographic, demographic, clinical, molecular, or genetic features of the patient.
[0039] As a non-limiting example, epigenomics data may include data associated with information derived from DNA modifications that are not changes to the DNA sequence and regulate the gene expression. These modifications can be a result of environmental factors based on what the patient may breathe, eat, or drink. These features may include DNA methylation, histone modification, or other factors which deactivate a gene or cause alterations to gene function without altering the sequence of nucleotides in the gene.
[0040] Microbiomics data may include, for example, data derived from the viruses and bacteria of a patient. These features may include viral infections which may affect treatment and diagnosis of certain illnesses as well as the bacteria present in the patient's gastrointestinal tract which may affect the efficacy of medicines ingested by the patient.
[0041] Metabolomics data may include molecules obtained from the blood, CSF, and body compartments in patients. As a non-limiting example, the metabolomics data may include such data obtained from patients that are associated with SAH physiology and correlated with the qvSAH data and other SAHVAI datasets.
[0042] Proteomics data may include data associated with information derived from the proteins produced in the patient. These features may include protein composition, structure, and activity; when and where proteins are expressed; rates of protein production, degradation, and steady-state abundance; how proteins are modified, for example, post-translational modifications such as phosphorylation; the movement of proteins between subcellular compartments; the involvement of proteins in metabolic pathways; how proteins interact with one another; or modifications to the protein after translation from the RNA such as phosphorylation, ubiquitination, methylation, acetylation, glycosylation, oxidation, or nitrosylation.
[0043] Genomics data may include genomic information that can be, or have been, correlated with the symptoms and medication effect, tolerance, and/or side effect information that may be received from a patient as responses to a questionnaire and stored as questionnaire response and/or phenotypic data. As a non-limiting example, genomics data can be extracted from blood or saliva samples collected from individuals who have also completed one or more questionnaires such that corresponding questionnaire response data is available for the individuals. A deep phenotypic characterization of these individuals can be assembled. As an example, in one large subset, prospectively determined patterns of treatment response after protocoled titrations in various different drugs from distinct classes of treatments have been assembled. For instance, an analysis of Verapamil (an L-type calcium channel blocker) using whole exome sequencing (WES) can be completed following genotyping in a confirmatory cohort.
[0044] In some embodiments, the patient health data can include a collection of data and/or features including all of the data types disclosed above. Alternatively, the patient health data may include a selection of fewer data and/or features.
[0045] A trained machine learning model is then accessed with the computer system, as indicated at step 404. In general, the machine learning model is trained, or has been trained, on training data in order to generate classified feature data indicative of a SAH diagnosis, SAH risk stratification, SAH severity, and/or SAH prognosis.
[0046] Accessing the trained machine learning model may include accessing model parameters (e.g., decision criteria for each feature at each split in a tree-based model) that have been optimized or otherwise estimated by training the machine learning model on training data. In some instances, retrieving the machine learning model can also include retrieving, constructing, or otherwise accessing the particular model architecture to be implemented. For instance, data pertaining to a tree-based model architecture (e.g., root node, features to evaluate for the root node, number of leaf nodes, features to evaluate at each leaf node, number of branches) may be retrieved, selected, constructed, or otherwise accessed.
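As a minimal, non-authoritative sketch of what "accessing model parameters" can look like for a tree-based model, the example below fits a small decision tree on synthetic data (purely for illustration; in practice the trained model would be deserialized from storage) and then reads out the per-node decision criteria, i.e., which feature is evaluated at each split and its threshold. The feature names in the comment are assumptions for illustration only.

```python
# Illustrative sketch only: inspecting the split criteria of a tree-based model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # stand-ins for, e.g., [qvSAH volume, age, GCS]
y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The fitted tree_ structure holds the per-split decision criteria:
# which feature is evaluated at each internal node and its threshold.
for node in range(tree.tree_.node_count):
    if tree.tree_.children_left[node] != -1:   # internal (non-leaf) node
        print(f"node {node}: feature {tree.tree_.feature[node]} "
              f"<= {tree.tree_.threshold[node]:.3f}")
```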
[0047] In an example in which the machine learning model generates classified feature data that indicate a risk score or severity of a detected SAH, the output may include risk scores, severity scores, or the like. For instance, the output may be a risk score and/or severity score (or other severity classification) for each detected SAH region, or for the subject as a whole.
[0048] In an example in which the machine learning model generates classified feature data that indicate a prognosis for a detected SAH, the output may include prognostic data. For instance, the output may be a probability of one or more future outcomes for the subject based on the detected SAH, the estimated risk scores, and/or the estimated severity scores (or other severity classifications) of the SAH.
[0049] The qvSAH data and patient health data are then input to the one or more trained machine learning models, generating output as classified feature data, as indicated at step 406. For example, the classified feature data may include a risk score. The risk score can provide physicians or other clinicians with a recommendation to consider additional monitoring for subjects whose qvSAH data and patient health data indicate the likelihood of the subject having SAH. In some embodiments, the risk score may be an eSAH score. Additionally or alternatively, the classified feature data may indicate the severity for a particular classification of SAH (i.e., the probability that the qvSAH data and/or patient health data include patterns, features, or characteristics indicative of detecting, differentiating, and/or determining the severity of SAH). Additionally or alternatively, the classified feature data may indicate a prognosis for a subject based on a detected SAH. For example, the classified feature data may indicate a probability of one or more future outcomes for the subject based on the detected SAH, the estimated SAH risk scores, and/or the estimated severity of the SAH. Based on the prognostic data, one or more CDS and/or interventions based on the identified and risk stratified SAH may be determined. As an example, FIG. 15 illustrates an example workflow for accelerated detection and segmentation using the systems and methods described in the present disclosure relative to earlier treatments.
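As a hedged illustration of step 406, the sketch below assumes a scikit-learn-style gradient boosting classifier has previously been trained and serialized; the artifact name, feature set, and helper function are hypothetical. It combines a quantified SAH volume with a few patient health variables and reports a probability-style risk score along with a discrete severity class.

```python
# Hedged sketch: generating classified feature data from qvSAH and patient health data.
# The model file, features, and function name are illustrative assumptions.
import joblib
import pandas as pd

def classify_sah(qvsah_ml, age, gcs, nlr,
                 model_path="sah_risk_model.joblib"):   # hypothetical artifact name
    model = joblib.load(model_path)
    features = pd.DataFrame([{
        "qvSAH_total_mL": qvsah_ml,   # quantified SAH volume
        "age": age,
        "gcs": gcs,
        "nlr": nlr,                   # neutrophil-to-lymphocyte ratio
    }])
    # predict_proba yields a probability that can be reported as a risk score;
    # predict yields a discrete severity / outcome class.
    risk_score = float(model.predict_proba(features)[0, 1])
    severity_class = model.predict(features)[0]
    return {"risk_score": risk_score, "severity_class": severity_class}
```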
[0050] The classified feature data generated by inputting the qvSAH data and patient health data to the trained machine learning model(s) can then be used to generate a report that is displayed to a user, stored for later use or further processing, or both, as indicated at step 408. FIG. 16 illustrates an example workflow for integrating multimodal data, such as from the patient health data described above, in addition to qvSAH data to provide outputs of predictive modeling, outcome measures, and the like.
[0051] In some examples, the report may include one or more images or maps. For instance, the report may include overlaying an SAH map on the medical imaging data to identify the regions of probable SAH in the medical imaging data. Additionally or alternatively, the report may include the quantified volume of each detected SAH. In some implementations, the quantified SAH volumes may be correlated with clinical outcome measures. For example, the quantified SAH volumes may be correlated with a modified Rankin scale as indicated in the classified feature data, which may be provided as part of the generated report.
[0052] When the classified feature data include risk scores, severity scores, and/or other severity classifications, these scores or classifications may also be stored in the generated report. For instance, risk scores or severity scores may be displayed together with each detected SAH such that the user can identify SAH volumes that are at greater risk, or are of greater severity, to the subject. In some examples, the report may include identifying or highlighting SAH volumes that are above a risk threshold. As a non-limiting example, the risk threshold may be 10 mL, such that SAH volumes that are greater than 10 mL are highlighted in the report as being riskier for the subject. As another non-limiting example, the classified feature data may include one or more outcome measures, clinical score estimates, or the like. For instance, the classified feature data may include estimates of modified Rankin scale (mRS) values.
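A minimal sketch of the thresholding described above is shown below; the 10 mL cutoff follows the non-limiting example in the text, while the region names and volumes are hypothetical.

```python
# Illustrative sketch of flagging detected SAH volumes above a risk threshold
# when assembling report contents. The 10 mL cutoff mirrors the example above.
RISK_THRESHOLD_ML = 10.0

def flag_high_risk_regions(region_volumes_ml):
    """region_volumes_ml: dict mapping region name -> quantified volume (mL)."""
    return {name: vol for name, vol in region_volumes_ml.items()
            if vol > RISK_THRESHOLD_ML}

# Example usage with hypothetical per-compartment volumes:
volumes = {"sylvian_left": 12.4, "prepontine": 3.1, "interhemispheric": 15.8}
print(flag_high_risk_regions(volumes))  # {'sylvian_left': 12.4, 'interhemispheric': 15.8}
```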
[0053] The report may also include prognostic data, as described above. For instance, the report may include the probability of one or more future outcomes for the subject based on the detected SAH, the estimated SAH risk scores, and/or the estimated severity of the SAH. Based on the prognostic data, one or more CDS and/or interventions based on the identified and risk stratified SAH may be determined and stored in the generated report. As a non-limiting example, the classified feature data may indicate a probability of delayed cerebral ischemia (DCI) in the subject. Additionally or alternatively, the classified feature data may indicate a probability of shunt dependency in the subject.
[0054] As yet another example, the classified feature data may indicate a probability of adverse drug effects in a subject. For instance, higher blood volumes can create medication sensitivity and with certain genotypes this can lead to dose reduction of neuroprotective drugs, such as nimodipine. Thus, the classified feature data can indicate individualized dose recommendations for a subject early to prevent hypotension or other adverse events. The classified feature data may also be collected over a period of time to monitor efficacy of the recommended dose, such that the dose can be later adjusted once the blood and physiology improves.
[0055] In this way, the report generated by the systems and methods described in the present disclosure allows for rapid triaging of subjects with suspected SAH. When one or more SAH volumes are detected, they may be risk stratified to highlight issues that require urgent attention by a clinician. The report may also provide prognostic information for the clinician, including a list of potential CDS and/or interventions to be considered by the clinician.
[0056] The triaging provided by the systems and methods described in the present disclosure can help optimize transfers of patients in need of comprehensive stroke centers (CSCs) and/or thrombectomy capable centers. In some implementations, the generated report may be integrated with a triage communication system, such that critical ICU bed status can be monitored and cases that need intervention can be selected or otherwise highlighted based on the SAH risk stratification provided by the classified feature data. Additionally, the generated report can help triage futile care cases (e.g., massive pontine or brainstem hemorrhage) to avoid unnecessary non-operative neurosurgical transfers.
[0057] Referring now to FIG. 5, a flowchart is illustrated as setting forth the steps of an example method for training one or more machine learning models on training data, such that the one or more machine learning models are trained to receive qvSAH data and patient data as input data in order to generate classified feature data as output data, where the classified feature data are indicative of detecting SAH, risk stratifying SAH, and/or the prognosis of detected SAH.
[0058] In general, the machine learning model(s) can implement any number of different model architectures. For instance, the machine learning model(s) may implement a tree-based model, such as a decision tree model, a random forest model, a boosting model, a gradient boosting model, or the like. Additionally or alternatively, the machine learning model(s) may implement an artificial neural network, such as a convolutional neural network, a residual neural network, or the like. Additionally or alternatively, the machine learning model(s) could be replaced with other suitable machine learning or artificial intelligence algorithms, such as those based on supervised learning, unsupervised learning, deep learning, ensemble learning, dimensionality reduction, and so on.
[0059] The method includes accessing training data with a computer system, as indicated at step 502. Accessing the training data may include retrieving such data from a memory or other suitable data storage device or medium. In general, the training data can include qvSAH data and patient health data collected from a group of subjects.
[0060] One or more machine learning models are trained on the training data, as indicated at step 504. In general, the machine learning model can be trained by optimizing model parameters. For instance, when the machine learning model is a tree-based model, optimizing the model parameters may include optimizing the features to be analyzed at each decision point in the tree-based model, as well as the criteria used to determine how to split the data. Training may proceed based on evaluating one or more metrics, such as estimate of positive correctness, Gini impurity, information gain, variance reduction, measure of “goodness,” and so on.
[0061] The one or more trained machine learning model(s) are then stored for later use, as indicated at step 506. Storing the machine learning model(s) may include storing model parameters, which have been computed or otherwise estimated by training the machine learning model(s) on the training data. Storing the machine learning model(s) may also include storing the particular model architecture to be implemented. For instance, data pertaining to a tree-based model architecture (e.g., number of leaf nodes, ordering of leaf nodes, connections between nodes, hyperparameters for the nodes and/or tree-based model) may be stored.
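The sketch below illustrates steps 504 and 506 under the assumption that a scikit-learn gradient boosting classifier is used as the tree-based model; the synthetic feature matrix stands in for the qvSAH data and patient health data, the held-out AUC is just one possible training metric, and the serialized artifact name is illustrative only.

```python
# Minimal sketch of training (step 504) and storing (step 506) a tree-based model.
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                  # stand-ins for [qvSAH, age, GCS, NLR]
y = (X[:, 0] - 0.05 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Splits in the underlying trees are chosen by minimizing an impurity-style
# criterion; training is monitored here with a held-out AUC as one example metric.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

# Step 506: persist the optimized parameters and architecture for later use.
joblib.dump(model, "sah_risk_model.joblib")    # hypothetical artifact name
```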
[0062] In addition to the machine learning-based methods for estimating qvSAH described in the present disclosure, analytical and other computational models may also be used to estimate qvSAH data. For instance, these models may be used to estimate qvSAH data as a separate input to the machine learning models described above with respect to FIG. 4.
[0063] In an example study, an easy-to-use and quantifiable method to estimate the total blood volume in patients with aneurysmal subarachnoid hemorrhage was developed. The volumetric extent of the method was assessed based on the incidence of delayed cerebral ischemia (DCI) and poor outcomes.
[0064] A cohort of 277 patients with SAH identified in the electronic health record (EHR) was analyzed. Demographics, medical history, clinical assessment, CT imaging, and hospital course were analyzed. Inclusion criteria for the study required initial non-contrast CT head imaging and SAH diagnosis to be performed within 24 hours of SAH ictus. CT imaging data had 5 mm imaging slices or less, or at least equivalent reformatting, to measure data quantitatively. Neuroimaging was performed to definitively diagnose and further characterize the extent of DCI, including delayed non-contrast CT (NCCT), computed tomography angiography (CTA), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), or digital subtraction angiography (DSA). Criteria for exclusion included the absence of NCCT imaging, no diagnosis of SAH or associated disease (only intracerebral hemorrhage in the medical record, IVH, or subdural hemorrhage without SAH), and traumatic SAH. Further, patients who died within 72 hours of admission were excluded from DCI analysis since they did not have imaging between 4-14 days post-SAH.
[0065] All admission non-contrast cranial CT images (NCCT) were analyzed to measure SAH volume in the following five neuroanatomic regions: 1) prepontine cistern, 2) perimesencephalic cistern, 3) suprasellar cistern, 4) left and right sylvian cisternal complex, and 5) anterior interhemispheric fissure (FIG. 6). After several mathematical equations were attempted to estimate SAH blood volume in these compartments, a simplified quantitative volumetric approach (sQV-SAH, Model 1), or ellipsoid formula, was determined to be as effective as compared against a segmentation model (sQV-SAH Model 2). For Model 1, SAH hyperdensity was measured in these 5 compartments using an ellipsoid configuration, where cylindrical volumetrics could be estimated by calculating volume through application of ABC/2 (with A = width/thickness in mm, B = length, C = vertical extension), similar to ABC/2 for intracerebral hemorrhage. ABC/2 is a simplified derivation of the mathematical ellipsoid formula, where V = π × r² × h (with V = volume in [mL], r = radius in [mm], h = height of slices in [mm]), also used in estimations of intraparenchymal hemorrhage. SAH blood volume was measured in each of the 5 compartments at its maximal width/thickness (A) and length (B) for each cistern (FIG. 6), and for (C) the vertical number of slices with visible SAH blood present was counted and then multiplied by the slice thickness in mm of the NCCT. These three variables were then input into ABC/2, as seen in FIG. 6, and simplified volumetric estimates were measured. The subsequently estimated SAH volumes were then summed to a total cisternal subarachnoid blood volume (CHV) of the 5 compartments. If intraparenchymal hemorrhage (IPH) was present, the ABC/2 mathematical approach was used to measure it as well, and it was added as a sixth compartment to create a total extended volume (eCHV). To allow for comparison of the estimated volumetric data, the same NCCT images were separately analyzed using a manual segmentation approach with open-source RIL-Contour and Anaconda. A region-of-interest (ROI) technique was applied, and voxels were "painted" to mark the SAH blood volume areas, which were then translated into metric scales of mm (msQV-SAH, Model 2). The same 5 cisternal or compartmental anatomic definitions were used in this segmentation process, and cumulative CHV and eCHV with or without IPH were applied to summate all blood for comparison.
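The ABC/2 computation described above can be expressed compactly as follows; the per-compartment measurements in the example are hypothetical and serve only to show how the five cisternal estimates are summed into a total CHV.

```python
# Sketch of the simplified ABC/2 volumetric estimate (Model 1):
# A = maximal width/thickness (mm), B = length (mm),
# C = number of slices with visible SAH x slice thickness (mm).
# Compartment measurements below are hypothetical.

def abc_over_2(a_mm, b_mm, c_mm):
    """ABC/2 ellipsoid approximation; result converted from mm^3 to mL."""
    return (a_mm * b_mm * c_mm) / 2.0 / 1000.0   # 1 mL = 1000 mm^3

def total_cisternal_volume(compartments, slice_thickness_mm):
    """compartments: dict of name -> (A_mm, B_mm, n_slices_with_blood)."""
    total_ml = 0.0
    for name, (a, b, n_slices) in compartments.items():
        c = n_slices * slice_thickness_mm
        total_ml += abc_over_2(a, b, c)
    return total_ml

# Example with the five cisternal compartments (illustrative measurements only):
measurements = {
    "prepontine":        (6.0, 20.0, 4),
    "perimesencephalic": (5.0, 25.0, 3),
    "suprasellar":       (8.0, 22.0, 3),
    "sylvian_complex":   (7.0, 40.0, 5),
    "interhemispheric":  (4.0, 60.0, 6),
}
print(f"CHV estimate: {total_cisternal_volume(measurements, 5.0):.1f} mL")
```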
[0066] Outcome was quantified via modified Rankin Scale at discharge and dichotomized into good outcome (mRS 0-3) and poor outcome (mRS 4-6), respectively.
[0067] Cisternal SAHV (CHV) in all patients was estimated using the sQV-SAH method. Average CHV measured via sQV-SAH was 11.3 mL (95% CI, 9.58-12.85 mL), and average eCHV was 13.5 mL (95% CI, 11.51-15.55 mL). The respective range was 125 mL, with a minimum of 0 mL (when no blood is present at the level of the cistern) and a maximum of 125 mL.
[0068] The volumetric relationship between predictor variables and dichotomized target variables is shown in FIG. 7. For sQV-SAH, the average CHV of the good outcome group at discharge was 6.99 mL (95% CI, 5.89-8.09 mL) and the average eCHV was 7.35 mL (95% CI, 6.21-8.499 mL). In contrast, the poor outcome group showed higher volumes, with an average of 16.71 mL (95% CI, 13.59-19.83 mL) for CHV and 21.46 mL (95% CI, 17.7-25.22 mL) for eCHV. These differences in volume were statistically significant with p-values <0.05. Volumetric analysis for patients that suffered from DCI showed significantly higher volumes (p <0.05) of CHV as well as eCHV.
[0069] In an example study evaluating the systems and methods described in the present disclosure, a manual method (MM) to measure SAH volume (SAHV) was developed, in which NCCT scan images were segmented and analyzed by a human investigator. The MM of SAHV was considered the "ground truth" or "gold standard" to compare and evaluate against qvSAH measurements. The volume measurements based on the CT data (AI-based method and MM) were linked and compared with available clinical data.
[0070] A cohort of 10 patients with aneurysmal SAH (aSAH) was examined. NCCT scans of the head performed for standard of care indications were utilized for analysis. When more than one CT scan was performed in a single day, the earliest NCCT performed that day was utilized. Among this cohort of 10 patients there were 7 females and 3 males, and the average age was 55 years (35 to 65 years) at SAH. Admission weight was reported as an average of 92 kg (minimum 55 kg, maximum 120 kg).
[0071] In all ten cases, the following data had been collected and were analyzed together with SAHV over time: age at SAH, gender, ethnicity, race, type of SAH, admission weight, modified Fisher score, GCS on admission, Hunt and Hess on admission, World Federation of Neurological Surgeons scale, National Institutes of Health Stroke Scale (NIHSS) at admission, Physiologic Derangement Scale at admission, external ventricular drainage placed, aneurysm surgery, angiographic VSP (transcranial Doppler ultrasound (TCD), computed tomography angiography (CTA), or both), day of VSP, severity of VSP, location of VSP if obtainable, management of VSP, symptomatic clinical VSP, new cerebral infarction on imaging, description of infarct, DCI, day of DCI, re-rupture/rebleeding, pre-listed modified Rankin Scale (mRS), mRS at discharge, mRS at 30 days, mRS at 90 days, SAH associated disease, length of stay in hospital (LOS), length of stay in intensive care unit, hypertension, diabetes mellitus, heart disease, pure motor hemiparesis stroke, smoker, alcoholic, and family history of aneurysm.
[0072] Based on the patient cohort, a manual method to measure the SAH blood volume in each CT scan was developed, which was considered to be the gold standard reference value ("ground truth") compared with the results of the AI-based qvSAH method (SAHVAI). To apply this new manual method, first, all CT scans of interest were collected and uploaded onto an anonymizing platform called ITK-SNAP (4.0). With this platform, anonymized NCCT scans were imported as DICOM (Digital Imaging and Communications in Medicine) files into a secure research drive. The program ITK-SNAP was then used to manually segment (measure) 2D slices of SAHV on each CT scan using the NIfTI (Neuroimaging Informatics Technology Initiative) format. Therefore, DICOM files were translated into labeled NIfTI files.
[0073] After converting all main images to NIfTI files, pre-segmented files were programmatically generated that marked every voxel within a specified Hounsfield Unit (HU) range of 60 to 120 for each slice of the CT scan. A range of HU between 60 and 120 was chosen as the threshold because this is the range where SAH blood is visible on an NCCT scan. With this, the bleeding and artifacts could be visualized in every voxel between HU 60 and 120, which was marked red.
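A minimal sketch of this HU-thresholding pre-segmentation step is shown below, assuming the NCCT has already been converted to NIfTI with voxel intensities in Hounsfield Units and that nibabel is available; file paths are placeholders, and manual refinement to the pre-defined neuroanatomic spaces would follow this step.

```python
# Illustrative sketch: mark every voxel in the 60-120 HU range as candidate SAH blood.
import nibabel as nib
import numpy as np

HU_MIN, HU_MAX = 60, 120   # range where acute SAH blood is visible on NCCT

def presegment_sah(nifti_path, out_path):
    img = nib.load(nifti_path)                 # NIfTI volume, assumed to be in HU
    hu = img.get_fdata()
    # Binary mask of candidate SAH voxels (blood plus artifacts); manual
    # refinement within the defined neuroanatomic spaces follows this step.
    mask = ((hu >= HU_MIN) & (hu <= HU_MAX)).astype(np.uint8)
    nib.save(nib.Nifti1Image(mask, img.affine), out_path)
    return mask
```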
[0074] In order to have the best visual quality for manual refinement, the contrast was set with a minimum of 0 and a maximum of 120 in ITK-SNAP. In the final step, the NCCT scans were manually refined with ITK-SNAP to exclude segmented regions falling outside of 8 defined neuroanatomical spaces (5 cisternal spaces: suprasellar cistern, perimesencephalic cistern, prepontine cistern, sylvian cistern, interhemispheric cistern) and 3 additional neuroanatomical spaces relevant to SAH disease: intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), and finally brain gyral/sulcal spaces on each slice. The manual method therefore required manual labeling within these 8 pre-defined neuroanatomic spaces to produce the MM-SAHV, which included all 92 NCCT scans. The final result - for instance, of the first CT scan of case one - is shown in FIG. 8 in 2 dimensions (upper images) and 3D (lower images), called the SAHV-3D, also called the "SAHV-3D Brain Map". These 3D images can be rotated like any other reconstruction image used in neurosurgery or neurointervention to visualize patterns in 3 dimensions that are hard to visualize in standard 2D axial planes.
[0075] The SAHVAI techniques described in the present disclosure were also performed on the cohort. Using both data sets of the manual method (MM-SAHV) and SAHVAI volumes, the results were compared for measurement differences. Additionally, 2-dimensional and 3-dimensional SAHV brain maps (FIG. 9) were compared to visualize SAH blood in 3 dimensions, similar to Maximum Intensity Projection (MIP) maps in radiology. In comparison to the manual method, which measured 8 spaces, the SAHVAI method focused on 7 major SAH anatomical blood spaces. This was based on an internal preliminary comparison of the two methods that showed that the vast majority of SAH blood volume is contained within the basal cisterns and compartments.
[0076] In total, 92 CT scans were analyzed for 3D visualization to measure SAHV blood. A minimum of 5 CT scans/case and a maximum of 13 CT scans/case were observed in the 10-patient 3D/4D cohort. On average, 9.2 CT scans were studied for each patient.
[0077] The SAH volume over time was analyzed for all 10 patients and their NCCTs. A plot of the mean and standard deviation of the manual SAHV and SAHVAI methods (FIG. 10) shows the natural decline of SAHV over time with both methods. The decrease of SAHV over time appears consistent with clinical observations of SAH blood decline on NCCT. However, current methods typically use the Fisher scale, which is semi-quantitative, or similar semi-quantitative methods such as the Hijdra scale. Therefore, both SAHV methods demonstrate important data showing maximal SAH blood occurring within the first few days and declining until about 10 days post-ictus before flattening out. Around post-SAH day 15, the MM showed a larger amount of SAHV than the SAHVAI automated method.
[0078] The mean SAHV on day 0, the day of the SAH ictus (aneurysm rupture), was measured as 44.59 mL by SAHVAI versus 58.78 mL by the manual method. The maximum SAHV measured for day 0 was 99.01 mL by SAHVAI and 141.5 mL by the SAHV manual method, both for the same case (case 5). The minimum SAHV on day 0 was measured as 9.69 mL by SAHVAI (case 9) and as 12.8 mL by the manual method (case 10). In FIG. 10, the standard deviation (SD) of day 1 of the MM exceeds the selected scale unit. The SD of day 1 of the MM was calculated as 85.61, as the lowest measured SAHV was 10 mL (case 2) and the highest SAHV was 276.7 mL (case 9) on day 1. In general, the SD is high, as all ten cases have different severities of SAH bleeding. Taking the average standard deviation of each calculated mean SAHV of all CT scans, including the ten patients, the AI method has a mean SD of 9.094049864 and the manual method has a mean SD of 9.317960286. On average, the last CT scans were 29.44 days after the aSAH rupture. However, two NCCT scans (case 2 with 69 days and case 3 with 47 days post-ictus) were included but are not displayed in FIG. 10. For the sake of completeness, case 2 had, on day 69 after the incident, a SAHV of 0.7 mL (AI) and 1.3 mL (manual), and case 3 had, on day 47 after the SAH, a SAHV of 1.89 mL (AI) and 10.75 mL (manual).
[0079] To illustrate the difference between the two methods (AI and MM), FIG. 11 shows the 2-dimensional SAHV Brain Map of cases 1, 5, and 8.
[0080] Based on the SAHVAI model performance data, the AI-ML-driven approach demonstrated superior speed (seconds) compared to more than 1 hour for manual measurement of each slice of a SAH NCCT patient, which is not practical outside of research using segmentation methods. Therefore, SAHVAI has been demonstrated as potentially enabling rapid evaluation of these stroke patients, similar to established AI models for penumbral brain "salvage" in large vessel occlusion detection for mechanical thrombectomy. SAHVAI not only measures SAHV blood quantitatively, but does so very quickly, similar to existing commercial models. Rapid SAHV appears to infer the severity of SAH illness via the Glasgow Coma Scale (GCS), since SAHV and GCS are inversely correlated. Similar to Fisher's original observation of his scale being tied to future occurrence of symptomatic vasospasm events, higher SAHV has been demonstrated to be inversely related to GCS or severity of illness scales like the WFNS (World Federation of Neurological Surgeons scale) and predicts longer LOS.
[0081] When SAHV is visualized in 3 dimensions, and over time (4 dimensions, or 4D SAHV), it has even greater predictive value. For example, as illustrated in the example SAHV-4D plot in FIG. 12, a case that had thick SAHV layered over the right cerebral convexity predicted major symptomatic vasospasm over the same hemisphere about 10-11 days later. Insights such as this can be highly advantageous to clinicians, since SAHV can be used as a "brain map" that surgeons, neurologists, interventionalists, and radiologists can use to make 3D-4D risk prediction models and target interventions to remove SAH volume (using drains, lumbar and CSF irrigation, etc.).
[0082] Aneurysmal SAH (aSAH) is a devastating hemorrhagic stroke subtype that occurs in about 30,000 patients per year in the United States, with an estimated historical one-month mortality of 30-40%. The high morbidity and mortality of SAH are due to both primary neurological injury as well as a cascade of secondary neurological injury that ensues from the inflammatory cascade in response to blood in the subarachnoid space, including cerebral vasospasm, delayed cerebral ischemia (DCI), and hydrocephalus. Although several SAH grading and scoring systems have been proposed to predict outcomes for aSAH, they have limited predictive capability due to imprecise, semi-quantitative Fisher scale measurements of SAH blood volume.
[0083] Using the systems and methods described in the present disclosure to quantify SAH volume, a simplified predictive model based on admission clinical and radiological features at initial presentation to predict outcomes for aSAH can be implemented.
[0084] In an example study, a cohort of 277 patients with a diagnosis of SAH was analyzed. Demographics, past medical history and preexisting conditions, clinical assessment, and CT imaging studies at admission, as well as clinical management and course during hospitalization, were retrospectively acquired via an electronic medical record database. The data collected were stored in two separate electronic databases. Neurological deterioration due to DCI was derived from electronic medical record notes, progress reports, and neurological imaging. Based on clinical assessments at discharge, modified Rankin Scale (mRS) scores were abstracted according to the Specification Manual for Joint Commission National Quality Measures (v2018A).
[0085] All admission NCCT images were stored in either DICOM or NIfTI format. Estimated volumetric data on NCCT images were analyzed using a method in which cisternal spaces were predefined and, by assuming that the basal cistern morphology exhibits an ellipsoid configuration, cylindrical volumetrics were estimated by calculating volume through application of ABC/2 (with A = width/thickness, B = length, C = vertical extent), and hemorrhagic volumes were thereby measured. Additionally or alternatively, the methods described in the present disclosure can be used to generate qvSAH data. The subsequently estimated volumes were summed to a total cisternal hemorrhagic blood volume (SAHV).
[0086] The discharge modified Rankin scale (mRS) was used as outcome data and was dichotomized, with mRS 0-3 as favorable outcome and 4-6 as unfavorable outcome. For univariate analyses, independent variables were compared using the χ² test or Student t-test (as appropriate). The outcome model was developed using multivariate logistic regression analysis with all possible prediction variables that would be available at the time of initial presentation (including gender, age, SAH volume, Glasgow Coma Scale (GCS), modified Fisher's score (mFS), Hunt and Hess scale, presence of intraparenchymal hemorrhage, and presence of intraventricular hemorrhage). The analysis was then followed by stepwise elimination of variables not contributing to the model (0.05 significance level for entry into the model). First order interactions were tested in the final model.
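As a hedged illustration of this type of outcome model, the sketch below fits a multivariate logistic regression of a dichotomized outcome on SAHV, age, and GCS using statsmodels; the synthetic cohort and coefficients are invented for demonstration and do not reproduce the study data.

```python
# Illustrative multivariate logistic regression on a synthetic cohort.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 190
df = pd.DataFrame({
    "sahv_ml": rng.gamma(shape=2.0, scale=6.0, size=n),   # cisternal volume stand-in
    "age": rng.normal(57, 13, size=n).round(),
    "gcs": rng.integers(3, 16, size=n),
})
# Simulated probability of an unfavorable outcome (mRS 4-6) for demonstration only.
logit = -2 + 0.14 * df["sahv_ml"] + 0.05 * (df["age"] - 57) - 0.3 * (df["gcs"] - 12)
df["unfavorable"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["sahv_ml", "age", "gcs"]])
fit = sm.Logit(df["unfavorable"], X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios per unit increase in each predictor
```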
[0087] An outcome stratification model for a volumetrically enhanced Subarachnoid Hemorrhage Score, called the eSAH score, was created with the variables in the final outcome model. Cutoff points of variables were chosen to produce a simple and intuitive model. A DCI subscore was similarly calculated as a risk stratification model for DCI prediction based on labeled DCI outcomes in the dataset using consensus DCI criteria. The nonparametric two-sample Kolmogorov-Smirnov test was performed to test the distribution of outcome and in-hospital mortality with eSAH score, and DCI with DCI subscore. Discriminative accuracy of the score was examined using the receiver operating characteristic curve and subsequent area under the curve analysis.
[0088] Out of 277 patients with SAH, 72 were deemed to be traumatic in nature and were excluded from analysis. An additional 15 patients who had hemorrhage only in extracisternal spaces but no hemorrhage in the described cisternal spaces were also excluded from analysis, leaving 190 SAH patients for analysis. The overall 30-day mortality rate in this cohort was 15.79%. The mean cisternal hemorrhagic volume in this cohort was 11.65 mL (range 0.13 mL to 116.9 mL).
[0089] Outcome at discharge was found to be statistically associated with age (p<0.001), SAH cisternal volume (p<0.001), GCS (p<0.001), mFS (p<0.001), Hunt and Hess score (p<0.001), presence of intraparenchymal hemorrhage (p=0.02), and intraventricular hemorrhage (p<0.001) in univariate analysis. Gender was not significantly associated with outcome (p=0.08). Out of all variables found to be correlated to outcome in univariate analysis, on conducting the stepwise multivariate regression analysis, only cisternal hemorrhagic volume, age, and GCS were found to statistically contribute to the overall outcome. On conducting a similar analysis for DCI, only GCS and cisternal hemorrhagic volume were found to significantly contribute to the model.
[0090] Outcome risk stratification was developed for all nontraumatic SAH patients with the aim of developing a simplified predictive model. Age, GCS, and SAH cisternal volume were predictive of outcome at the time of discharge as well as in-hospital mortality. For every increase in SAHV by 1 mL, the odds of unfavorable outcome increased by a factor of 1.148 (95% confidence interval (CI) = 1.973 - 1.227). For every increase in age by 1 year, the odds of unfavorable outcome increased by a factor of 1.051 (95% CI = 1.015 - 1.087). For every increase in GCS by 1 point, the odds of unfavorable outcome decreased by a factor of 0.728 (95% CI = 0.651 - 0.815). Subsequently, the eSAH score was created with cutoffs in age, GCS, and SAH cisternal hemorrhagic volume to create a simple risk stratification tool for predicting outcome and mortality. Given that DCI was associated with only two variables, GCS and SAHV, the eSAH DCI subscore was calculated and derived from these two variables.
[0091] Cutoffs were made to convert each variable to an ordinal scale for development of the scoring system, and the point assignment for each variable is described in Table 1. As shown in Table 1, the eSAH score is calculated by scoring each category with the points allotted for each variable and summing them. Total eSAH score = GCS score + Age score + SAH Volume (SAHV) score. SAHV is calculated based on the work of Fottinger et al. (15), using a simplified ABC/2-derived method or an automated method. Minimum eSAH score = 0; maximum eSAH score = 5. DCI risk subscore = GCS score + SAH volume score; minimum score = 0; maximum score = 4.
[0092] The eSAH score therefore was calculated as a summation of the individual points for each variable. The eSAH score ranged from 0 to 5, and the eSAH DCI subscore ranged from 0 to 4 for subsequent risk of developing DCI. The eSAH score at admission was strongly predictive of outcome at the time of discharge, with an increase in the odds of poor outcome by a factor of approximately 4 for every one-point increase in eSAH score (OR = 4.27, 95% CI = 2.84 - 6.81, p<0.0001, AUC = 0.885). It was also a strong predictor of in-hospital mortality, with a threefold increase in the odds of mortality with every one-point increase in eSAH score (OR = 3.02, 95% CI = 1.98 - 4.61, p<0.001, AUC = 0.878). The eSAH DCI subscore was also strongly predictive of DCI development and had a doubling of the odds of DCI with each point increase in DCI subscore (OR = 1.97, 95% CI = 1.49 - 2.59, p=0.001, AUC = 0.748).
[0093] Among the 50 patients with an eSAH score of 0, none died and 46 (92%) had favorable outcomes. In contrast, out of 12 patients with a score of 5, none had a favorable outcome and 7 (58%) died during the hospital stay. Out of 68 patients with a DCI subscore of 0, only 5 (7%) developed DCI, as opposed to greater than 50 percent of patients with a DCI subscore of 3 or 4.
Table 1: Enhanced SAH (eSAH) Score

Variable                        Points
Glasgow Coma Scale (GCS)
  12-15                         0
  8-11                          1
  3-7                           2
SAH Volume (SAHV)*
  Less than 10 mL               0
  10-20 mL                      1
  Greater than 20 mL            2
Age
  Less than or equal to 60      0
  Greater than 60               1
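The scoring rules in Table 1 can be expressed directly as a small function, as sketched below; the example inputs are hypothetical and the code simply reproduces the cut points listed in the table.

```python
# Sketch of the eSAH score and DCI subscore computed from Table 1.
# Inputs are admission GCS, age (years), and SAHV (mL).

def gcs_points(gcs):
    if gcs >= 12:
        return 0
    return 1 if gcs >= 8 else 2

def sahv_points(sahv_ml):
    if sahv_ml < 10:
        return 0
    return 1 if sahv_ml <= 20 else 2

def age_points(age):
    return 0 if age <= 60 else 1

def esah_score(gcs, age, sahv_ml):
    """Return (total eSAH score 0-5, DCI subscore 0-4) per Table 1."""
    total = gcs_points(gcs) + age_points(age) + sahv_points(sahv_ml)
    dci_subscore = gcs_points(gcs) + sahv_points(sahv_ml)
    return total, dci_subscore

# Hypothetical example: GCS 9, age 67, SAHV 18 mL -> eSAH 3, DCI subscore 2
print(esah_score(9, 67, 18.0))
```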
[0094] The eSAH score has the potential to triage and risk stratify SAH patients similar to Hemphill's ICH score for ICH patient mortality and stroke systems of care for these patients. The eSAH score can be leveraged as a relative strength given the novel quantitative SAHV score, which measures a "dose-response" relationship of blood volume compared to GCS with age. The SAHV is a more precise way to quantify in milliliters (mL) the amount of blood compared to the older, Likert-like modified Fisher scale. Advantageously, the methods described in the present disclosure can be used to generate accurate measurements of SAHV (e.g., qvSAH data), which can then be used to calculate an eSAH score for a subject.
[0095] Advantageously, the eSAH score data could be used as a risk stratification and SAH severity tool that could aid decision-making processes in tandem with clinician judgement. The eSAH score could also aid the triage and transport of SAH patients from primary stroke center hospitals to comprehensive stroke center hospitals that have a dedicated neuro-intensive care unit for complex SAH management as defined by the recent AHA SAH guideline.
[0096] Patients with higher scores have a higher likelihood of developing DCI and worse downstream outcomes without the multidisciplinary team that complex SAH patients require in terms of vasospasm monitoring, neuroendovascular interventions for symptomatic vasospasm, and NSICU-level neuromonitoring. Therefore, the eSAH score could be used in emergency departments, similar to the ICH score by Hemphill, to stage and document severity during the initial SAH presentation. Such eSAH score triage in the emergency department could lead to expeditious transfer to a higher-level stroke center with dedicated neurosurgical vascular and neuro-intensive care unit teams for SAH management.
[0097] Alternatively, those with lower scores could be initially managed and stabilized in the nearest local stroke center. The eSAH score could therefore help achieve a more equitable allocation of stroke-center resources among stroke networks of care, as recommended by the current AHA SAH guidelines. The eSAH score could also potentially benefit future translational research and targeted interventions (e.g., neuroprotective drugs or minimally invasive approaches such as intraventricular calcium channel blocker drug injection) for SAH patients.
[0098] As described above, using the disclosed systems and methods, SAHV can be segmented and quantified from non-contrast CT (NCCT) scans using a machine learning model, which can then output a 3D brain volumetric map that depicts the three-dimensional spatial distribution of SAHV. By measuring SAHV over time, four-dimensional (4D) brain volumetric data can be generated. As another example application of the disclosed systems and methods, measurement of SAHV can be used to evaluate or otherwise monitor the use of ventricular irrigation and drainage systems that can expedite removal of SAH blood products. For instance, utilizing the SAHVAI model described in the present disclosure, the course of SAHV resolution over time can be generated for a subject receiving ventricular irrigation or drainage. SAHVAI-3D brain maps can be generated to help visualize significant SAHV resolution patterns and predict vasospasm (e.g., by inputting the SAHVAI-3D maps to a trained machine learning model to generate classified feature data indicating the detection, prediction, and/or classification of vasospasm).
[0099] In an example study, the SAHVAI framework described in the present disclosure was applied to SAH cases with mFS 3-4 using the NCCT scans among three groups. Group A included one SAH patient treated with a ventricular irrigation system. Group B included one SAH patient who presented with GCS 15 two days after ictus with no requirement for EVD. Group C included 10 patients who underwent regular EVD placement per standard of care.
[00100] Group A showed expedited resolution of SAHV (1.87 mL/day), with an mRS of 0 on discharge and minimal vasospasm. Group B showed a 16 mL increase in SAHV suspected for aneurysmal rebleeding on days 5-9, and the patient later died (mRS of 6). Group C showed a reduction of SAHV of 0.5 mL/day. Further, the resultant 3D brain maps revealed that areas with the highest density of blood concentration were correlated with the severity and location of the vasospasm in all groups.
[00101] This example study demonstrated that SAHVAI, SAHVAI-3D, and SAHVAI-4D are techniques capable of reliably quantifying SAHV blood volume and changes over time, including SAH blood resolution or rebleeding events. SAHV expediting resolution (SAHVER) is a framework that shows how interventions such as ventricular irrigation can expedite SAHV resolution compared to passive EVD and non-CSF drainage groups.
[00102] As another example, the SAHVAI framework described in the present disclosure can also be used to evaluate patients with acute hydrocephalus requiring ventriculoperitoneal (VP) shunt placement, which is a known complication after aneurysmal SAH. In an example study, a predictive model (CHECKMATE) utilizing a mathematical model of SAH volumetric (SAHV) blood on the initial CT scan, the CHESS score, the Glasgow Coma Scale (GCS), and other variables to predict the dichotomous outcome of VPS placement by hospital discharge was evaluated.
[00103] A retrospective database review of SAH patients at a single tertiary comprehensive stroke center was conducted in terms of clinical, radiographic, and laboratory parameters. Clinical parameters included age, gender, GCS, CHESS score, SAHV (mL) on the initial CT scan, EVD placement, and VPS dependency by hospital discharge. Univariate and multivariate regressions were performed on these variables.
[00104] 129 SAH patients with a mean age of 55.3 ± 13.6 years (range 21-92) were studied, 30 (23.2%) of whom were female. Average GCS was 11.09 ± 13 (3-15), average SAHV was 13.56 ± 11.92 (0-54.6), and average CHESS score was 5.71 ± 6 (0-8). The regression analysis demonstrated that there was no significant relationship between CHESS and age (r = 0.12, P = 0.19) or SAHV (r = 0.025, P = 0.77). CHESS score, however, was inversely correlated with GCS (r = -0.339, P < 0.0001). CHESS score also predicted future VPS by discharge (P = 0.05). SAHV also showed a marginally significant inverse correlation with GCS (r = -0.169, P = 0.05), which suggests a possible dose-response relationship of SAH blood volume on SAH severity of illness.
[00105] A combination model using GCS, CHESS score, and SAHV on initial CT scan called CHECKMATE may be a useful tool for predicting admission severity of illness and future need for EVD and ultimate VP shunt dependency during hospitalization. In this way, GCS and CHESS score can be data included in the patient health data used in some embodiments described in the present disclosure (e.g., the method described with respect to FIG. 4).
[00106] In another example study, the clinical effects of hypotension and nimodipine dose reduction after aSAH admission were retrospectively analyzed and compared with the available CYP genotypes in the medical record and a prospective stroke genetics registry. With respect to pharmacogenomics, nimodipine is metabolized predominantly by the CYP3A4 and CYP3A5 enzymes. Nimodipine is chiefly metabolized by the CYP enzyme system. The drug undergoes extensive hepatic metabolism by the CYP3A4, CYP3A5, and CYP2C19 isoform subfamilies. The genotype-phenotype pattern of nimodipine was defined based on allelic variants (particularly for CYP3A4 and CYP3A5 subtypes) that convey the metabolic properties. The allelic combinations categorized the metabolizers into the following groups: extensive, intermediate, average, and poor. Clinical outcome in this example study was defined as the patient's modified Rankin scale (mRS) score at hospital discharge.
[00107] Of 150 patients with aSAH identified, the mean age was 56.0, and most patients were women (70.7%), non-Hispanic or Latino (95.3%), White (66.7%), with a history of hypertension (62%), and no history of diabetes (82.7%) or previous stroke (89.3%). Most aneurysms were in the anterior (26%) or posterior (21.3%) communicating artery. The most common surgical technique was aneurysmal coiling (76.7%).
[00108] A simplification of the heterogeneity of multiple dose changes (i.e., 60 mg, 30 mg, 15 mg, or 0 mg) in the cohort from hospitalization to discharge was evaluated. All patients initially received the FDA-recommended nimodipine dose of 60 mg every 4 hours, and 42.7% of patients did not have their dose reduced from the standard. When reduced, doses were either left at the reduced level (30 mg [25.3%], 15 mg [8.0%], or 0 mg [0.67%]) or returned to 60 mg prior to discharge (23.3%). The most common mRS score at discharge was 0.
[00109] There are significant clinical implications of using a one-size-fits-all 60 mg nimodipine dosage for all patients with aSAH regardless of body weight. A fixed 60 mg every 4 hours does not consider a patient's individualized body weight, nor the initial severity of illness, which can lead to clinical dose modification and reduction.
[00110] Individualized dosing of nimodipine for patients with aSAH can be considered in some instances, since patients in this example study had different responses to the initial 60-mg dose.
[00111] Advantageously, the disclosed SAHVAI framework can be used as a tool to monitor the efficacy of drugs such as nimodipine, to recommend individualized doses based on patient pharmacogenomics, and to adjust doses based on patient response. As described above, the SAHVAI framework can be used to generate classified feature data that can indicate a probability of adverse drug effects. For instance, higher blood volumes can create medication sensitivity, and with certain genotypes this can lead to dose reduction of neuroprotective drugs, such as nimodipine.
[00112] FIG. 13 shows an example of a system 1300 for detecting, risk stratifying, and determining prognostic data for SAH in accordance with some embodiments of the systems and methods described in the present disclosure. As shown in FIG. 13, a computing device 1350 can receive one or more types of data (e.g., medical imaging data, qvSAH data, patient health data) from data source 1302. In some embodiments, computing device 1350 can execute at least a portion of an SAH detection, risk stratification, and/or prognosis system 1304 to detect SAH, quantify SAH volume, risk stratify detected SAH volumes, and/or provide prognostic data for detected SAH volumes from data received from the data source 1302.
[00113] Additionally or alternatively, in some embodiments, the computing device 1350 can communicate information about data received from the data source 1302 to a server 1352 over a communication network 1354, which can execute at least a portion of the SAH detection, risk stratification, and/or prognosis system 1304. In such embodiments, the server 1352 can return information to the computing device 1350 (and/or any other suitable computing device) indicative of an output of the SAH detection, risk stratification, and/or prognosis system 1304.
[00114] In some embodiments, computing device 1350 and/or server 1352 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on. The computing device 1350 and/or server 1352 can also reconstruct images from the data.
[00115] In some embodiments, data source 1302 can be any suitable source of data (e.g., measurement data, images reconstructed from measurement data, processed image data), such as a medical imaging system, another computing device (e.g., a server storing measurement data, images reconstructed from measurement data, processed image data), and so on. In some embodiments, data source 1302 can be local to computing device 1350. For example, data source 1302 can be incorporated with computing device 1350 (e.g., computing device 1350 can be configured as part of a device for measuring, recording, estimating, acquiring, or otherwise collecting or storing data). As another example, data source 1302 can be connected to computing device 1350 by a cable, a direct wireless link, and so on. Additionally or alternatively, in some embodiments, data source 1302 can be located locally and/or remotely from computing device 1350, and can communicate data to computing device 1350 (and/or server 1352) via a communication network (e.g., communication network 1354).
[00116] In some embodiments, communication network 1354 can be any suitable communication network or combination of communication networks. For example, communication network 1354 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), other types of wireless network, a wired network, and so on. In some embodiments, communication network 1354 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links shown in FIG. 13 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
[00117] Referring now to FIG. 14, an example of hardware 1400 that can be used to implement data source 1302, computing device 1350, and server 1352 in accordance with some embodiments of the systems and methods described in the present disclosure is shown.
[00118] As shown in FIG. 14, in some embodiments, computing device 1350 can include a processor 1402, a display 1404, one or more inputs 1406, one or more communication systems 1408, and/or memory 1410. In some embodiments, processor 1402 can be any suitable hardware processor or combination of processors, such as a central processing unit (CPU), a graphics processing unit (GPU), and so on. In some embodiments, display 1404 can include any suitable display devices, such as a liquid crystal display (LCD) screen, a light-emitting diode (LED) display, an organic LED (OLED) display, an electrophoretic display (e.g., an "e-ink" display), a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 1406 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[00119] In some embodiments, communications systems 1408 can include any suitable hardware, firmware, and/or software for communicating information over communication network 1354 and/or any other suitable communication networks. For example, communications systems 1408 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 1408 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00120] In some embodiments, memory 1410 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1402 to present content using display 1404, to communicate with server 1352 via communications system(s) 1408, and so on. Memory 1410 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 1410 can include random-access memory (RAM), read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), other forms of volatile memory, other forms of non-volatile memory, one or more forms of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 1410 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 1350. In such embodiments, processor 1402 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 1352, transmit information to server 1352, and so on. For example, the processor 1402 and the memory 1410 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2, the method of FIG. 4, the method of FIG. 5).
[00121] In some embodiments, server 1352 can include a processor 1412, a display 1414, one or more inputs 1416, one or more communications systems 1418, and/or memory 1420. In some embodiments, processor 1412 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display 1414 can include any suitable display devices, such as an LCD screen, LED display, OLED display, electrophoretic display, a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 1416 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[00122] In some embodiments, communications systems 1418 can include any suitable hardware, firmware, and/or software for communicating information over communication network 1354 and/or any other suitable communication networks. For example, communications systems 1418 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 1418 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00123] In some embodiments, memory 1420 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1412 to present content using display 1414, to communicate with one or more computing devices 1350, and so on. Memory 1420 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 1420 can include RAM, ROM, EPROM, EEPROM, other types of volatile memory, other types of non-volatile memory, one or more types of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 1420 can have encoded thereon a server program for controlling operation of server 1352. In such embodiments, processor 1412 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 1350, receive information and/or content from one or more computing devices 1350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
[00124] In some embodiments, the server 1352 is configured to perform the methods described in the present disclosure. For example, the processor 1412 and memory 1420 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2, the method of FIG. 4, the method of FIG. 5).
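As another non-limiting illustration, a server program stored in memory 1420 could expose the classification step to computing devices 1350 as a simple network service. The sketch below uses the Flask micro-framework; the endpoint name, port, and payload fields are assumptions made only for this example, the 10 mL qvSAH threshold mirrors the value used elsewhere in this disclosure, and the NLR cutoff is purely illustrative.

    # Minimal sketch of a server program stored in memory 1420 that exposes the
    # classification step over HTTP; a trained model would replace the rule below.
    from flask import Flask, jsonify, request

    app = Flask(__name__)


    @app.route("/classify", methods=["POST"])
    def classify():
        payload = request.get_json()            # e.g., {"qvsah_ml": 12.4, "nlr": 6.1}
        qvsah_ml = float(payload.get("qvsah_ml", 0.0))
        nlr = float(payload.get("nlr", 0.0))
        # Placeholder decision rule standing in for a trained model.
        risk = "high" if qvsah_ml > 10.0 or nlr > 5.0 else "standard"
        return jsonify({"risk_stratification": risk})


    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)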
[00125] In some embodiments, data source 1302 can include a processor 1422, one or more data acquisition systems 1424, one or more communications systems 1426, and/or memory 1428. In some embodiments, processor 1422 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, the one or more data acquisition systems 1424 are generally configured to acquire data, images, or both, and can include a medical imaging system (e.g., a CT system, an MRI system). Additionally or alternatively, in some embodiments, the one or more data acquisition systems 1424 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of a medical imaging system (e.g., a CT system, an MRI system). In some embodiments, one or more portions of the data acquisition system(s) 1424 can be removable and/or replaceable.
[00126] Note that, although not shown, data source 1302 can include any suitable inputs and/or outputs. For example, data source 1302 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on. As another example, data source 1302 can include any suitable display devices, such as an LCD screen, an LED display, an OLED display, an electrophoretic display, a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
[00127] In some embodiments, communications systems 1426 can include any suitable hardware, firmware, and/or software for communicating information to computing device 1350 (and, in some embodiments, over communication network 1354 and/or any other suitable communication networks). For example, communications systems 1426 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 1426 can include hardware, firmware, and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00128] In some embodiments, memory 1428 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1422 to control the one or more data acquisition systems 1424 and/or receive data from the one or more data acquisition systems 1424; to generate images from data; present content (e.g., data, images, a user interface) using a display; communicate with one or more computing devices 1350; and so on. Memory 1428 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 1428 can include RAM, ROM, EPROM, EEPROM, other types of volatile memory, other types of non-volatile memory, one or more types of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 1428 can have encoded thereon, or otherwise stored therein, a program for controlling operation of data source 1302. In such embodiments, processor 1422 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 1350, receive information and/or content from one or more computing devices 1350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
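For illustration only, a data-source program of this kind might assemble an acquired CT series and forward it to a computing device 1350 roughly as sketched below; the directory path, endpoint URL, and JSON payload format are hypothetical, and a clinical deployment would more likely rely on standard DICOM network services rather than JSON transfer.

    # Illustrative sketch of a data-source program (processor 1422 / memory 1428)
    # that stacks an acquired CT series and forwards it to a computing device.
    import glob

    import numpy as np
    import pydicom
    import requests


    def load_series(directory: str) -> np.ndarray:
        """Read a DICOM CT series and stack it into a 3D volume ordered by slice position."""
        datasets = [pydicom.dcmread(f) for f in glob.glob(f"{directory}/*.dcm")]
        datasets.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
        return np.stack([ds.pixel_array for ds in datasets]).astype(np.int16)


    def transmit(volume: np.ndarray, url: str) -> None:
        """Send the volume to a computing device (JSON used here purely for brevity)."""
        requests.post(url, json={"shape": list(volume.shape), "voxels": volume.tolist()})


    if __name__ == "__main__":
        ct = load_series("./incoming_ct")                       # hypothetical path
        transmit(ct, "http://computing-device.local/ingest")    # hypothetical endpoint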
[00129] In some embodiments, any suitable computer-readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer-readable media can be transitory or non-transitory. For example, non-transitory computer-readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., RAM, flash memory, EPROM, EEPROM), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer-readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
[00130] As used herein in the context of computer implementation, unless otherwise specified or limited, the terms “component,” “system,” “module,” “framework,” and the like are intended to encompass part or all of computer-related systems that include hardware, software, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a processor device, a process being executed (or executable) by a processor device, an object, an executable, a thread of execution, a computer program, or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components (or system, module, and so on) may reside within a process or thread of execution, may be localized on one computer, may be distributed between two or more computers or other processor devices, or may be included within another component (or system, module, and so on).
[00131] In some implementations, devices or systems disclosed herein can be utilized or installed using methods embodying aspects of the disclosure. Correspondingly, description herein of particular features, capabilities, or intended purposes of a device or system is generally intended to inherently include disclosure of a method of using such features for the intended purposes, a method of implementing such capabilities, and a method of installing disclosed (or otherwise known) components to support these purposes or capabilities. Similarly, unless otherwise indicated or limited, discussion herein of any method of manufacturing or using a particular device or system, including installing the device or system, is intended to inherently include disclosure, as embodiments of the disclosure, of the utilized features and implemented capabilities of such device or system.
[00132] The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.

Claims

1. A method for assessing subarachnoid hemorrhage (SAH) in a patient based on medical imaging data, the method comprising:
(a) accessing medical imaging data with a computer system, wherein the medical imaging data have been acquired from a patient;
(b) accessing a machine learning model with the computer system, wherein the machine learning model has been trained on training data to detect and assess subarachnoid hemorrhage based on medical images;
(c) inputting the medical imaging data to the machine learning model with the computer system, generating classified feature data as an output, wherein the classified feature data indicate at least one of SAH detection, SAH risk stratification, or SAH prognosis for the patient; and
(d) generating a report with the computer system using the classified feature data, wherein the report indicates the at least one of the SAH detection, SAH risk stratification, or SAH prognosis.
2. The method of claim 1, comprising accessing patient health data with the computer system and inputting the patient health data as an additional input to the machine learning model.
3. The method of claim 2, wherein the patient health data include clinical lab results.
4. The method of claim 3, wherein the clinical lab results comprise a marker of SAH inflammation.
5. The method of claim 4, wherein the marker of SAH inflammation comprises neutrophil to lymphocyte ratio (NLR) measured from complete blood count from the patient.
6. The method of claim 1 or 2, wherein the classified feature data indicate SAH risk stratification, wherein the classified feature data include a risk score for the patient.
7. The method of claim 6, wherein the report indicates a triage plan for the patient based on the risk score.
8. The method of claim 1 or 2, wherein the classified feature data indicate SAH risk stratification, wherein the classified feature data include a severity score that indicates a severity of SAH for the patient.
9. The method of claim 8, wherein the report indicates at least one of a clinical decision support or intervention based on the severity score.
10. The method of claim 1 or 2, wherein the classified feature data indicate SAH prognosis, wherein the classified feature data include probability of one or more future outcomes for the patient.
11. The method of claim 10, wherein the generated report indicates a planned intervention for the patient based on the SAH prognosis in the classified feature data.
12. The method of claim 1 or 2, wherein the classified feature data indicate SAH detection, wherein the classified feature data comprise an SAH map indicating locations of probable SAH in the patient based on the medical imaging data.
13. The method of claim 12, wherein the report comprises overlaying the SAH map on the medical imaging data.
14. The method of claim 1 or 2, wherein the classified feature data indicate SAH detection, wherein the classified feature data comprise regions in the patient identified as probable SAH and quantified subarachnoid hemorrhage blood volume (qvSAH) data comprising a quantified volume measurement for each probable SAH region.
15. The method of claim 1 or 2, wherein the machine learning model comprises a first machine learning model and a second machine learning model, wherein the medical imaging data are input to the first machine learning model to generate quantified subarachnoid hemorrhage blood volume (qvSAH) data as an output, and the qvSAH data and the patient health data are input to the second machine learning model to generate the classified feature data as an output.
16. The method of claim 15, wherein the first machine learning model is a neural network and the second machine learning model is a tree-based model.
17. The method of claim 16, wherein the tree-based model comprises one of a decision tree model, a random forest model, a boosting model, or a gradient boosting model.
18. The method of claim 15, wherein the report indicates an increased severity of a probable SAH region having a volume measured in the qvSAH data that is greater than 10 mL.
19. The method of claim 14 or 15, wherein the qvSAH data comprise four-dimensional qvSAH data indicating the quantified volume measurement for each probable three-dimensional SAH region over time.
20. A method for assessing subarachnoid hemorrhage in a patient based on medical imaging data, the method comprising:
(a) accessing quantified subarachnoid hemorrhage blood volume (qvSAH) data with a computer system, wherein the qvSAH data indicate quantified volume measurements for probable SAH blood volumes in a patient;
(b) accessing patient health data for the patient with the computer system;
(c) accessing a machine learning model with the computer system, wherein the machine learning model has been trained on training data to assess subarachnoid hemorrhage based on qvSAH data and patient health data;
(d) inputting the qvSAH data and patient health data to the machine learning model with the computer system, generating classified feature data as an output, wherein the classified feature data indicate at least one of SAH risk stratification or SAH prognosis for the patient; and
(e) generating a report with the computer system using the classified feature data, wherein the report indicates the at least one of the SAH risk stratification or SAH prognosis.
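By way of a non-limiting, illustrative sketch of the two-stage arrangement recited in claims 15-18, in which a first model produces qvSAH data from imaging and a second, tree-based model combines the qvSAH data with patient health data such as the neutrophil-to-lymphocyte ratio of claim 5, the following Python fragment shows how such a pipeline could be wired together. The stand-in segmentation mask, the feature choices, and the synthetic training data are assumptions made solely so the example runs; the 10 mL value follows claim 18.

    # Non-limiting sketch of the two-stage arrangement of claims 15-18; the mask,
    # features, and training data below are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier


    def quantify_sah_volume(mask: np.ndarray, voxel_volume_ml: float) -> float:
        """Convert a binary SAH segmentation mask into a blood volume in milliliters."""
        return float(mask.sum()) * voxel_volume_ml


    def neutrophil_lymphocyte_ratio(neutrophils: float, lymphocytes: float) -> float:
        """NLR computed from a complete blood count (claim 5)."""
        return neutrophils / lymphocytes


    # Second-stage, tree-based model (claims 16-17), fitted here on synthetic
    # data purely so the example runs end to end.
    rng = np.random.default_rng(0)
    X_train = np.column_stack([rng.uniform(0.0, 40.0, 200),    # qvSAH in mL
                               rng.uniform(1.0, 15.0, 200)])   # NLR
    y_train = (X_train[:, 0] > 10.0).astype(int)               # toy labels only
    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Example patient: a hypothetical first-stage output plus laboratory values.
    mask = rng.random((32, 128, 128)) > 0.999
    qvsah_ml = quantify_sah_volume(mask, voxel_volume_ml=0.012)
    nlr = neutrophil_lymphocyte_ratio(neutrophils=7.8, lymphocytes=1.3)

    risk = model.predict_proba([[qvsah_ml, nlr]])[0, 1]
    severity = "increased severity (qvSAH > 10 mL)" if qvsah_ml > 10.0 else "qvSAH <= 10 mL"
    print(f"qvSAH = {qvsah_ml:.1f} mL, NLR = {nlr:.1f}, risk score = {risk:.2f}, {severity}")

In practice, the first stage would be a trained neural network segmenting SAH on CT images and the second stage would be fitted to real outcome data; the gradient boosting classifier above merely instantiates the tree-based model family named in claim 17.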

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363503621P 2023-05-22 2023-05-22
US63/503,621 2023-05-22

Publications (1)

Publication Number Publication Date
WO2024243364A1 true WO2024243364A1 (en) 2024-11-28

Family

ID=91585467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/030654 Pending WO2024243364A1 (en) 2023-05-22 2024-05-22 Subarachnoid hemorrhage detection and risk stratification with machine learning-based analysis of medical images

Country Status (1)

Country Link
WO (1) WO2024243364A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210093278A1 (en) * 2019-09-30 2021-04-01 GE Precision Healthcare LLC Computed tomography medical imaging intracranial hemorrhage model
EP3813075A1 (en) * 2019-10-22 2021-04-28 Qure.Ai Technologies Private Limited System and method for automating medical images screening and triage
US20220293247A1 (en) * 2021-03-12 2022-09-15 Siemens Healthcare Gmbh Machine learning for automatic detection of intracranial hemorrhages with uncertainty measures from ct images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SCHERER MORITZ ET AL: "Association of Cerebrospinal Fluid Volume with Cerebral Vasospasm After Aneurysmal Subarachnoid Hemorrhage: A Retrospective Volumetric Analysis", NEUROCRITICAL CARE, SPRINGER US, NEW YORK, vol. 33, no. 1, 26 November 2019 (2019-11-26), pages 152 - 164, XP037207124, ISSN: 1541-6933, [retrieved on 20191126], DOI: 10.1007/S12028-019-00878-2 *
YUAN JANE Y ET AL: "Automated Quantification of Compartmental Blood Volumes Enables Prediction of Delayed Cerebral Ischemia and Outcomes After Aneurysmal Subarachnoid Hemorrhage", WORLD NEUROSURGERY, ELSEVIER, AMSTERDAM, NL, vol. 170, 30 October 2022 (2022-10-30), XP087263155, ISSN: 1878-8750, [retrieved on 20221030], DOI: 10.1016/J.WNEU.2022.10.105 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120413016A (en) * 2025-04-17 2025-08-01 中国人民解放军总医院第一医学中心 A hemiplegia risk prediction method and system suitable for patients with brain trauma

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24734605

Country of ref document: EP

Kind code of ref document: A1