
WO2024098147A1 - System for automated quantitative assessment of metastatic spinal stability - Google Patents

System for automated quantitative assessment of metastatic spinal stability

Info

Publication number
WO2024098147A1
Authority
WO
WIPO (PCT)
Prior art keywords
features
imaging
vertebrae
spinal
spine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CA2023/051491
Other languages
English (en)
Inventor
Michael Raymond HARDISTY
Cari WHYNE
Arjun SAHGAL
Geoffrey KLEIN
Anne Martel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sunnybrook Research Institute
Original Assignee
Sunnybrook Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sunnybrook Research Institute filed Critical Sunnybrook Research Institute
Priority to JP2025526244A (published as JP2025536474A)
Priority to EP23887238.6A (published as EP4615307A1)
Publication of WO2024098147A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/505Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of bone
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
    • A61B5/004Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/45For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538Evaluating a particular part of the muscoloskeletal system or a particular medical condition
    • A61B5/4566Evaluating the spine
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271Specific aspects of physiological measurement analysis
    • A61B5/7275Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • A61B6/032Transmission computed tomography [CT]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • G06T2207/30012Spine; Backbone

Definitions

  • the present disclosure relates to a system that uses deep learning to automatically quantify spinal architecture (geometry/quality) and disease burden from 3D medical images (non-limiting examples being CT and MR) that can be used to predict patient outcomes of vertebral stability / risk of fracture. More particularly, the present disclosure provides an interpretable automated spinal instability neoplastic scoring (SINS) tool which calculates improved individual image-based parameters used in SINS and calculates stability through a similar scoring system; and also provides an image-based tool that directly estimates vertebral stability/fracture risk.
  • SINS interpretable automated spinal instability neoplastic scoring
  • the existing scientific literature largely does not target computational medical image analysis; most studies that assess fracture risk and vertebral mechanical stability use patient-specific factors (age, primary tumour location, BMI, etc.) and/or SINS assessment with limited fidelity.
  • This invention describes an AI-enabled medical image analysis-based tool for the assessment of VCF risk and spinal mechanical stability.
  • a method for assessing mechanical stability and/or fracture risk in the metastatically involved spine, comprising: a) obtaining 3D imaging data of a patient’s spine; b) inputting the imaging data into a machine learning algorithm trained on a dataset of spinal imaging to determine mechanical stability and/or fracture risk, the machine learning algorithm configured to calculate mechanical stability and/or fracture risk by the steps of: i) calculating deep features from a feature extractor backbone network; ii) combining the deep features with convolutional layers, graph networks, and vertebrae specific features derived from convolutional arms, and/or extracting the latent deep features of each vertebra; iii) combining the deep features and vertebrae specific features from step ii) with non-imaging patient specific features using dense layers to yield predictions on mechanical stability and/or fracture risk, wherein training is performed by an optimizer based on the 3D imaging data of the patient’s spine.
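  • As an illustrative sketch only (not the patented implementation), step iii) can be pictured as concatenating a vertebra's latent deep features with non-imaging patient specific features and passing the result through dense layers; all array sizes, weights, and patient values below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_risk(deep_features, patient_features, W1, b1, W2, b2):
    """Concatenate deep and non-imaging features, then apply dense layers."""
    x = np.concatenate([deep_features, patient_features])
    h = np.maximum(0.0, W1 @ x + b1)      # dense layer + ReLU
    return sigmoid(W2 @ h + b2)           # scalar stability/fracture risk

deep = rng.normal(size=128)                   # latent deep features (one vertebra)
patient = np.array([67.0, 1.0, 24.3])         # hypothetical age, sex, BMI
W1, b1 = rng.normal(size=(32, 131)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=32) * 0.1, 0.0
risk = predict_risk(deep, patient, W1, b1, W2, b2)
```

The sigmoid output keeps the prediction in (0, 1), which is convenient for a risk score.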
  • the 3D imaging data of the patient’s spine may be CT and/or MR imaging data.
  • the dataset of spinal imaging may be comprised of CT and/or MR imaging that includes a clinical cohort of patients with spinal metastases.
  • the feature extractor backbone network may be a ResNet network, a transformer network, or an inception network, or may use fully connected layers, principal component analysis, radiomics, linear discriminant analysis, independent component analysis, or t-distributed Stochastic Neighbor Embedding, to mention some non-limiting examples.
  • the feature extractor backbone network may be a ResNet 50 network.
  • the present disclosure provides a method for assessing mechanical stability and/or fracture risk in the metastatically involved spine, comprising: a) obtaining 3D imaging data of a patient’s spine; b) inputting the imaging data into a machine learning algorithm trained on a dataset of spinal imaging to determine mechanical stability and/or fracture risk, the machine learning algorithm configured to calculate spinal instability neoplastic score elements, said elements including bone lesions, spinal alignment, vertebral body collapse, and posterolateral involvement; c) wherein calculating each of the elements of bone lesions, spinal alignment, vertebral body collapse and posterolateral involvement includes segmenting the vertebrae and localizing the vertebrae by the steps of calculating deep features from a feature extractor backbone network and combining the deep features with convolutional layers and vertebrae specific features derived from convolutional arms, then combining the deep features with the vertebrae specific features and non-imaging patient specific features using dense layers to yield predictions on mechanical stability and/or fracture risk, wherein training is performed by an optimizer based on the 3D imaging data of the patient’s spine.
  • the dataset of spinal imaging may be comprised of CT and/or MR imaging that includes a clinical cohort of patients with spinal metastases.
  • the elements bone lesions, spinal alignment, vertebral body collapse, posterolateral involvement may be combined with clinical pain and vertebral level to yield an automated spinal instability neoplastic score.
  • the elements bone lesions, spinal alignment, vertebral body collapse, posterolateral involvement are combined with clinical pain and other patient specific features and vertebral level to yield an Automated SINS score which predicts mechanical stability and/or fracture risk.
  • FIGURE 1 shows an overview diagram for the system for automated and quantitative assessment of metastatic spinal stability disclosed herein.
  • FIGURE 2 is an expanded view of the non-limiting VertDetect model 14 of FIGURE 1.
  • DETAILED DESCRIPTION Various embodiments and aspects of the disclosure will be described with reference to details discussed below. The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. The drawings are not necessarily to scale. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure.
  • the terms, “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in this specification including claims, the terms, “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
  • the term “exemplary” means “serving as an example, instance, or illustration,” and should not be construed as preferred or advantageous over other configurations disclosed herein.
  • the terms “about” and “approximately”, when used in conjunction with ranges of dimensions of particles, compositions of mixtures or other physical properties or characteristics, are meant to cover slight variations that may exist in the upper and lower limits of the ranges of dimensions so as to not exclude embodiments where on average most of the dimensions are satisfied but where statistically dimensions may exist outside this region.
  • the terms “about” and “approximately” mean plus or minus 25 percent or less. It is not the intention to exclude embodiments such as these from the present disclosure.
  • any specified range or group is as a shorthand way of referring to each and every member of a range or group individually, as well as each and every possible sub-range or sub-group encompassed therein and similarly with respect to any sub-ranges or sub-groups therein.
  • the present disclosure relates to and explicitly incorporates each and every specific member and combination of sub-ranges or sub-groups.
  • the term "on the order of”, when used in conjunction with a quantity or parameter refers to a range spanning approximately one tenth to ten times the stated quantity or parameter.
  • machine learning algorithm refers to algorithms that ‘learn’ how to perform their task by being ‘trained’ on datasets to perform the task by optimizing the performance by changing internal elements that affect the output of the algorithms. The internal elements are varied such that the output generated fits what is seen in the dataset.
  • machine learning algorithms that learn how to represent imaging data with feature extracting units and how to use these latent representations, combined with patient specific factors to make predictions that are consistent with the training dataset.
  • deep features refers to quantitative values that are extracted from an image as the result of deep learning network architectures. Network architectures with greater than one (1) layer are referred to as deep learning architectures. The deep learning networks used herein have multiple layers.
  • the networks used herein learn how to represent images.
  • a representation of an image which contains its features is referred to as a feature representation.
  • deep learning networks are used to learn a feature representation. Because we learn the representation with deep learning, we refer to the features within as deep features.
  • feature extractor backbone network refers to a network or part of a network where latent features are used, or shared, between different subsequent tasks. This can refer to a network that after training generates a feature representation for the entire image volume input that is useful for the tasks required. We use the features extracted by this network as the backbone for the predictions and determinations of the entire network.
  • the phrase “convolutional layers” refers to structures that perform a series of filtering operations on the input data, producing an output. Convolutional layers perform discrete convolutions operations on some input where the weights are the kernel to be learned. Convolutional layers can also have a bias which are learned weights. Many convolutional layers can create filters that derive output from an entire image or only small aspects of it.
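  • A minimal illustration of a discrete convolution with a learned kernel and bias (implemented as cross-correlation, as deep learning frameworks do); the image and filter values are arbitrary:

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Discrete convolution (valid padding) where the weights are the kernel,
    plus a learned bias; implemented as cross-correlation, as in DL frameworks."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])    # horizontal-difference filter
out = conv2d(img, edge)           # every entry is -1 for this ramp image
```

Here the small filter derives output from local neighbourhoods only; a larger kernel would draw on more of the image, as the definition above notes.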
  • the phrase “convolutional arms” refers to a series of convolutional, pooling and fully connected layers. Pooling layers reduce the dimensionality or number of voxels/pixels output, combining information from many input voxels/pixels to produce outputs with fewer voxel/pixels.
  • Fully connected layers have connections and weights from each input node (pixel/voxel, prior layer output) to every node in the fully connected layer.
  • the use of arms denotes that there are several arms that do computation in series and are specifically trained to perform specific tasks.
  • vertebrae specific features refers to features or aspects that characterize individual vertebrae in the imaging data. This is in contrast to features or aspects that describe the whole image, whole spine, or the patient.
  • the features can be human understandable, such as vertebral size, vertebral bone density, or they may be the result of machine learning computations such as those through the convolutional arms that produce features that are descriptive of specific vertebrae but are not intuitively understandable.
  • non-imaging patient specific features refers to patient demographic data including age, sex, body mass index, primary tumor location and histology, prior treatments (e.g., systemic drug therapies and/or local treatments, such as radiation and dose received, to give some non-limiting examples), number of vertebral metastases, presence of other metastases in the body, existing fractures, and presence of pain.
  • “training is performed by an optimizer” means that we adjust the weights within the machine learning algorithm during a phase called training, where the input data has been labeled specifying its output state. The weights within the machine learning algorithm are adjusted in order to fit the desired output.
  • An optimizer varies the weights in an iterative fashion by changing the weights then evaluates the output against the desired output from the training dataset. The optimizer continues varying the weights until the output from the machine learning algorithm matches the output in the training dataset.
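  • The iterative weight-update loop described above can be sketched on a toy labeled dataset (a hypothetical two-parameter model, not the disclosed network):

```python
import numpy as np

# Toy labeled dataset: desired output is y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

w, b = 0.0, 0.0                    # internal weights to be adjusted
lr = 0.05
for _ in range(500):               # iterate: vary weights, evaluate, repeat
    err = (w * x + b) - y          # compare output against desired output
    w -= lr * np.mean(err * x)     # adjust each weight to reduce the error
    b -= lr * np.mean(err)
```

After iterating, the weights approach the values (w ≈ 2, b ≈ 1) that make the output match the training dataset, which is exactly the stopping condition the definition describes.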
  • This disclosure describes AI enabled tools for automated and quantitative assessment of metastatic spinal stability. In this, we use deep learning to automatically quantify spinal geometry/quality and disease burden from medical images (CT and MR) that can be used to predict patient outcomes of vertebral stability / risk of fracture.
  • An interpretable automated SINS (spinal instability neoplastic scoring) tool
  • SINS spinal instability neoplastic scoring
  • VCF vertebral compression fracture
  • the present inventors have created a data set of imaging and associated clinical data from patients who have undergone SBRT for spinal metastases at the Sunnybrook Odette Cancer Centre, and have trained our algorithms against this retrospective data.
  • An additional clinical research dataset was also used consisting of temporal CT imaging data of patients with spinal metastases, as well as an open dataset (VerSe [12]).
  • the technology and trained algorithms would be of use to clinicians planning treatment for patients with spinal metastases, and companies that create technology for planning SBRT procedures.
  • the algorithms are novel in that they quantify biomarkers from clinical imaging in an automated quantitative manner using innovative multitask architectures. They replace currently used clinical manual qualitative scoring, which is poor at predicting fracture risk, with automated and improved predictions.
  • Part numbers for features in FIGURES 1 and 2:
    10 - Overall flow diagram
    12, 50 - Spinal imaging
    14 - VertDetect model, which includes three main branches: the detection, classification, and segmentation branches.
  • the detection branch detects each vertebra in a 3D CT scan by determining both its vertebral body centroid location and placing a bounding box around the whole vertebra.
  • the classification branch utilizes shared information between each neighbouring vertebra to determine what vertebra are present in the input CT scan.
  • the segmentation branch semantically segments vertebra that are positively detected from the classification and the detection branches.
  • 20 - Bone Lesion Element: histogram-based calculation of tumor type and volume
    22 - Spinal Alignment Element: Cobb angles calculated in the coronal and sagittal planes
    24 - Vertebral Collapse Element
    26 - Posterolateral Involvement Element
    28 - Deep learning features directly generated by the machine learning network
    30 - Patient-specific non-imaging-based factors (i.e., age, sex, BMI, current medications, systemic and local therapies, etc.)
    36 - AutoSINS score
    38 - Fracture risk / vertebral stability score
    52 - Backbone architecture consisting of a 3D ResNet-50 and Feature Pyramid Network (FPN)
  • FPN Feature Pyramid Network
  • P1-P5 refer to feature maps generated from the convolution backbone; feature maps generated from this backbone are then used in further downstream tasks.
    54, 56, 60-65 - 3x3x3 Convolutions + Batch Normalization + ReLU
    58 - 1x1x1 Convolutions + Batch Normalization + ReLU
    67-69, 128 - 1x1x1 Convolutions
    70 - Offset calculation
    72 - Bounding Box sizes
    74 - Gaussian Heatmap
    76, 114 - Concatenations
    80 - Bounding boxes
    82 - Initial Bounding Box processing
    84 - Binary Classification block
    86 - Binary Vertebra Classification
    88 - Cropped regions of interest for each individual vertebra
    100, 102, 108, 110, 120, 122, 124, 126 - 3x3x3 Convolutions + Instance Normalization + ReLU
    104, 122 - 1x1x1 Convolutions + Instance Normalization + ReLU
    106 - 2x2x2 Convolution Transpose + Instance Normalization + ReLU
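  • The Cobb-angle computation of Spinal Alignment Element 22 reduces, in each plane, to the angle between two endplate direction vectors; a minimal sketch with made-up vectors:

```python
import numpy as np

def cobb_angle(endplate_a, endplate_b):
    """Angle in degrees between two endplate direction vectors in one plane."""
    a, b = np.asarray(endplate_a, float), np.asarray(endplate_b, float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical endplate directions in the coronal plane:
angle = cobb_angle([1.0, 0.0], [1.0, 1.0])   # a 45-degree deformity
```

In practice the endplate directions would be derived from the vertebral segmentations; the clipping guards against floating-point values just outside [-1, 1].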
  • VertDetect takes as input in Figure 2 a 3D image 50 containing a depiction of vertebrae. It adjusts the first layers of the ResNet backbone 52 in Figure 2 to yield an improved high-resolution feature map.
  • the architecture uses Gaussian heatmaps 74, shown in Figure 2, in the segmentation branch to help identify which vertebrae to segment. It applies Graph Convolutional Network (GCN) layers for vertebrae classification, allowing vertebrae identification to be influenced by neighboring predictions. It uses linear scheduling to aid gradient descent in the localization stages by combining a variant of the focal loss from CornerNet with a mean-square-error (MSE) loss.
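  • A hedged sketch of such a combined heatmap loss: a CornerNet-style focal term blended with MSE, where the fixed blend weights stand in for the linear schedule (the exact weighting in the disclosure is not reproduced here):

```python
import numpy as np

def heatmap_loss(pred, gt, alpha=2.0, beta=4.0, w_mse=0.5, w_focal=0.5):
    """CornerNet-style focal loss on Gaussian heatmaps blended with MSE;
    w_mse / w_focal stand in for a linear schedule over training."""
    eps = 1e-6
    p = np.clip(pred, eps, 1.0 - eps)
    pos = gt >= 1.0                                     # Gaussian peaks
    focal_pos = -((1.0 - p) ** alpha) * np.log(p)
    focal_neg = -((1.0 - gt) ** beta) * (p ** alpha) * np.log(1.0 - p)
    focal = np.where(pos, focal_pos, focal_neg).mean()
    mse = np.mean((p - gt) ** 2)
    return w_mse * mse + w_focal * focal

loss = heatmap_loss(np.array([0.9, 0.2, 0.1]), np.array([1.0, 0.0, 0.0]))
```

The (1 − gt)^β factor down-weights penalties near Gaussian peaks, which is the CornerNet modification to the standard focal loss.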
  • GCN Graph Convolutional Network
  • the detection branch utilizes an anchorless approach.
  • the largest-resolution feature map from the convolution backbone (P1) is processed by three convolutions, the first two (54 and 56 in Figure 2) having 3x3x3 kernels, and the last (58, shown in Figure 2) having a 1x1x1 kernel.
  • the first two of these three convolutions, 54 and 56, also compress the P1 feature maps to 128 features to reduce the memory impact.
  • the resulting feature map (after 54, 56 and 58) is then sent to three separate blocks of convolutions: 1) for the heatmap, 62, 65 and 69 shown in Figure 2, 2) for bounding box sizes, 61, 64 and 68, and 3) for offset predictions, 60, 63 and 67.
  • Heatmap 74 in Figure 2 The heatmaps have C channels, where each channel corresponds to a vertebra (i.e., channel 0 is C1, channel 1 is C2, etc.). Therefore, each centroid and bounding box is implicitly defined in the heatmap predictions.
  • the maxima of each channel’s heatmap is used to provide the centroid predictions for each vertebra.
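  • Extracting centroid predictions as per-channel maxima can be sketched as follows (the detection threshold is an illustrative assumption):

```python
import numpy as np

def centroids_from_heatmaps(heatmaps, threshold=0.5):
    """Per-channel argmax: channel c yields the predicted centroid of
    vertebra c, kept only when its peak clears a detection threshold."""
    centroids = {}
    for c in range(heatmaps.shape[0]):
        channel = heatmaps[c]
        if channel.max() >= threshold:
            idx = np.unravel_index(channel.argmax(), channel.shape)
            centroids[c] = tuple(int(v) for v in idx)
    return centroids

hm = np.zeros((2, 4, 4, 4))    # 2 vertebra channels, tiny 4x4x4 volume
hm[0, 1, 2, 3] = 0.9           # channel 0 (C1) peaks at voxel (1, 2, 3)
cents = centroids_from_heatmaps(hm)
```

Channels whose peaks stay below the threshold (channel 1 here) produce no centroid, i.e. that vertebra is treated as absent from the scan.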
  • Ground truth 3D Gaussian heatmaps are generated using ground truth centroid points in a down-sampled space.
  • Offset Sizes 70 in Figure 2 The heatmaps, and therefore the predicted centroids, are in a downsampled state. Following the work by Yi et al. [7] a predicted offset coordinate is used to shift the predicted centroids to compensate for potential differences during upsampling.
  • Bounding Box Sizes 72 in Figure 2 Both the heatmap outputs and the offsets are used to determine the centroid of an object in the full resolution image. The bounding boxes sizes are used to determine the bounding box surrounding the centroid point.
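  • Combining a heatmap peak, its offset, and the six bounding box sizes into a full-resolution box can be sketched as below; the mapping of the left/right, posterior/anterior, inferior/superior sizes onto array axes is an assumption for illustration:

```python
import numpy as np

def full_res_box(peak, offset, sizes, n=2):
    """Shift the downsampled heatmap peak by the predicted offset, scale back
    to full resolution, and grow a box using the six directional sizes."""
    centroid = (np.asarray(peak, float) + np.asarray(offset, float)) * n
    s_l, s_r, s_p, s_a, s_i, s_s = sizes
    lo = centroid - np.array([s_l, s_p, s_i])   # assumed axis ordering
    hi = centroid + np.array([s_r, s_a, s_s])
    return centroid, lo, hi

centroid, lo, hi = full_res_box(peak=(10, 20, 30),
                                offset=(0.5, 0.0, 0.5),
                                sizes=(4, 4, 6, 6, 5, 5), n=2)
```

With a downsampling factor n = 2, the peak (10, 20, 30) plus offsets becomes the full-resolution centroid (21, 40, 61), and the box extends by each directional size from it.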
  • Classification Branch Each heatmap channel corresponds to an individual vertebra; this information is combined with the offset and bounding box size to create bounding box candidates 80 (Figure 2) for each vertebra. However, these heatmaps do not consider neighboring vertebrae. The purpose of the classification branch is to leverage the information between neighboring vertebrae to improve overall classification and detection.
  • Initial bounding box processing in step 82 ( Figure 2) uses centroid locations predicted from the heat maps and then combines these predictions with features maps from the convolutional backbone within the binary classification step 84 in Figure 2.
  • the features from the convolutional backbone are cropped to the regions centered about the heatmap predicted centroids.
  • the features from the convolutional backbone are then sent through two (2) convolution layers and then through a graph convolutional network layer that uses shared information between vertebrae to make the best binary classification of vertebrae 88 (see Figure 2).
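  • One common form of graph convolution over a chain of neighboring vertebrae is sketched below (a generic GCN propagation rule, not necessarily the exact layer used in the disclosure):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: mix each vertebra's features with its
    neighbours' via a row-normalised adjacency, then project by W."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, norm @ H @ W)            # ReLU activation

A = np.zeros((4, 4))                # chain of 4 consecutive vertebrae
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0
H = np.random.default_rng(1).normal(size=(4, 8))   # per-vertebra features
W = np.random.default_rng(2).normal(size=(8, 2))   # e.g. present/absent logits
out = gcn_layer(H, A, W)
```

The chain adjacency is what lets each vertebra's prediction be influenced by its immediate neighbours, as the classification branch requires.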
  • Segmentation Branch The segmentation branch semantically segments positively detected vertebra.
  • unnormalized Gaussian heatmaps 131 in Figure 2 are concatenated in step 76 in Figure 2 with the full resolution input image. This Gaussian is centered about the predicted full-resolution centroid locations with a standard deviation of four (4).
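  • Generating the unnormalized Gaussian (peak value 1, standard deviation 4) about a predicted centroid, and concatenating it with the image as an extra channel, can be sketched as:

```python
import numpy as np

def gaussian_heatmap(shape, centroid, sigma=4.0):
    """Unnormalised 3D Gaussian (peak value 1) centred on the predicted
    full-resolution centroid; steers segmentation toward one vertebra."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = ((zz - centroid[0]) ** 2 + (yy - centroid[1]) ** 2
          + (xx - centroid[2]) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

hm = gaussian_heatmap((16, 16, 16), centroid=(8, 8, 8))
image = np.zeros((16, 16, 16))      # stand-in for the full-resolution input
stacked = np.stack([image, hm])     # concatenation as an extra channel
```

Because the Gaussian is unnormalised it equals exactly 1 at the centroid, giving the segmentation branch a clear spatial prior for the target vertebra.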
  • the bounding boxes contain neighboring vertebrae due to the necessity of including posterior elements.
  • the bounding boxes are defined and used to crop regions of interest with step 86 in Figure 2 from the input image and feature maps from the convolutional backbone.
  • the Gaussian heatmap ensures that the model focuses on the correct vertebra during semantic segmentation.
  • the obtained feature map is passed through the concluding convolutional layers in steps 120, 122, 124, 126 and 128, shown in Figure 2, to generate the segmentation predictions 130.
  • the tool that directly estimates vertebral stability/fracture risk based on 3D imaging provides a method for assessing mechanical stability and fracture risk in the metastatically involved spine.
  • the method comprises: a) obtaining 3D imaging data of a patient’s spine 12, shown in Figure 1; b) inputting the imaging data into a machine learning algorithm in step 14 of Figure 1 to determine mechanical stability and fracture risk, the machine learning algorithm configured to calculate mechanical stability and fracture risk by the steps of: i) calculating deep features from a feature extractor backbone network 28, shown in Figure 1; ii) combining these deep features with convolutional layers, graph networks, and vertebrae specific features derived from convolutional arms, and/or extracting the latent deep features of each vertebra; and iii) combining the deep features from step ii) with non-imaging patient specific features 30 in Figure 1 using dense layers to yield predictions on mechanical stability and fracture risk, wherein training is performed by an optimizer based on the 3D imaging data of the patient’s spine.
  • the 3D imaging data 12 may be acquired by imaging systems such as computed tomography (CT) or magnetic resonance imaging (MRI).
  • CT computed tomography
  • MRI magnetic resonance imaging
  • Figure 1 also shows the tool that provides an interpretable automated SINS (spinal instability neoplastic scoring) tool 36, which calculates improved individual image-based parameters used in SINS and calculates stability through a similar scoring system, and provides a method for assessing mechanical stability and fracture risk in the metastatically involved spine, comprising: a) obtaining 3D imaging data of a patient’s spine 12; b) inputting the imaging data into a machine learning algorithm to determine mechanical stability and fracture risk in step 38 in Figure 1, the machine learning algorithm configured to calculate spinal instability neoplastic score elements, said elements including step 20 in Figure 1 that quantifies bone lesion involvement, spinal alignment in step 22 in Figure 1, vertebral body collapse in step 24 in Figure 1, and posterolateral involvement in step 26 in Figure 1; c) wherein calculating each of the elements of bone lesions, spinal alignment, vertebral body collapse and posterolateral involvement includes segmenting the vertebrae and localizing the vertebrae.
  • The dataset consists of C1 to L5 vertebrae and includes rare T13 and L6 transitional vertebrae. The dataset does not include images with implants or diseases.
  • Step 1 Vertebral Detection, Vertebral Segmentation and Feature Extraction Input: Computed tomography imaging
  • Output Deep Learning Features, Vertebral location, Vertebral identity, Vertebral segmentation
  • the VertDetect model 14 of Figure 1 used herein can be broken down into three main branches: the detection, classification, and segmentation branches.
  • the detection branch detects each vertebra in a 3D image by both determining its vertebral body centroid location and placing a bounding box around the whole vertebra.
  • the model predicts a location heatmap, an offset (a sub-voxel adjustment to the heatmap location), and a bounding box for each vertebra in the CT volume.
  • the classification branch utilizes shared information between neighbouring vertebrae to determine which vertebrae are present in the input CT scan (vertebral identity).
  • the segmentation branch semantically segments the vertebrae that are positively detected by the classification and detection branches.
  • the overall architecture of VertDetect 14 of Figure 1 is shown in Figure 2.
  • a 3D ResNet-50 [9] and Feature Pyramid Network (FPN) [10] shown at 52 in Figure 2 act as a backbone architecture. Feature maps from this backbone are then used in further downstream tasks.
  • the ResNet-50 + FPN backbone is used to extract features from the CT image that serve the vertebral detection, classification, and segmentation tasks in this step, as well as tasks in the later steps.
  • the network was trained using data from both the 2019 and 2020 VerSe datasets [12].
  • the model was trained with a composite loss.
  • Heatmaps: the heatmaps have C channels, where each channel corresponds to a vertebra (i.e., channel 0 is C1, channel 1 is C2, etc.); each centroid and bounding box is therefore implicitly defined in the heatmap predictions.
  • the maximum of each channel’s heatmap is used to provide the centroid prediction for that vertebra.
  • Ground truth 3D Gaussian heatmaps are generated using ground truth centroid points in a down-sampled space.
  • a downsampled space was used to be consistent with the P1 output of the FPN (2x downsampling from input image size).
  • ⁇ ⁇ .
  • Offset sizes: the heatmaps, and therefore the predicted centroids, are in a downsampled space; the sub-voxel offset for vertebra i is o_i = c_i/n − ⌊c_i/n⌋, where the brackets ⌊ ⌋ are the floor operation, c_i is the full-scale centroid, and n is the downsampling size.
  • Bounding box sizes: the bounding box for vertebra i is defined by its full-scale centroid c_i and the sizes s, where the subscripts for the sizes s correspond to the distances from the centroid to the left (l), right (r), posterior (p), anterior (a), inferior (i), and superior (s) faces of the box.
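Putting the heatmap maximum, the sub-voxel offset, and the six box sizes together, the decoding step might look like the following sketch; the function name, the axis ordering, and the downsampling factor n = 2 are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def decode_detection(heatmap, offsets, sizes, n=2):
    # Locate the channel's peak in the downsampled space, apply the
    # sub-voxel offset, and scale back to full resolution: c = n * (peak + o).
    peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    centroid = n * (np.asarray(peak, dtype=float) + np.asarray(offsets, dtype=float))
    l, r, p, a, i, s = sizes
    # Assumed face ordering: the box spans centroid minus/plus each size.
    box = np.array([centroid[0] - l, centroid[0] + r,
                    centroid[1] - p, centroid[1] + a,
                    centroid[2] - i, centroid[2] + s])
    return centroid, box
```

A usage example: a peak at downsampled voxel (4, 5, 6) with offsets (0.5, 0.25, 0.0) decodes to the full-scale centroid (9.0, 10.5, 12.0).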
  • Each heatmap channel corresponds to an individual vertebra; however, these heatmaps do not consider neighbouring vertebrae. If the channel corresponding to a T3 vertebra shows a high probability of existing in the scan, then the probabilities for the neighbouring T2 and T4 should reflect it.
  • the purpose of the classification branch is to leverage the information between neighbouring vertebrae to improve overall classification and detection.
  • a RoIAlign [3] generates C feature maps of size 7x7x7 from P1 by cropping and resampling regions centred on the centroid locations. The resampled feature maps are then passed through a 7x7x7 convolution followed by a 1x1x1 convolution, both with ReLU activation.
  • GCN: Graph Convolutional Network
  • the classification branch is trained with a binary cross-entropy loss, L_cls = −(1/|N|) Σ_{i∈N} [y_i log(p_i) + (1 − y_i) log(1 − p_i)], where y_i is the binary value specifying if the i-th vertebra is active, p_i is the predicted probability of vertebra i being active, and N is the set of all possible vertebrae.
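As a minimal sketch of that loss, assuming the standard binary cross-entropy form implied by the variable definitions:

```python
import math

def bce_loss(y, p, eps=1e-7):
    # Binary cross-entropy averaged over all candidate vertebrae.
    # y: binary presence labels; p: predicted probabilities.
    assert len(y) == len(p)
    total = 0.0
    for yi, pi in zip(y, p):
        pi = min(max(pi, eps), 1.0 - eps)  # clamp for numerical stability
        total += yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
    return -total / len(y)
```

For a single vertebra with a 0.5 predicted probability the loss is log 2, the usual maximum-uncertainty value for this loss.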
  • Classification of the vertebrae is further refined by a graph post-processing step.
  • the predicted heatmaps can have multiple local clusters, and these local clusters can incorrectly correspond to neighbouring vertebrae.
  • a post-processing method is used to determine which maxima from the local clusters are correct.
  • non-maximum suppression is first applied through a max-pooling layer to select the top k candidates from each channel of the heatmap predictions. These k candidates are then filtered by Euclidean distance to ensure no neighbours from the same local cluster remain, leaving k′ candidates for each heatmap channel. The logits for each of the k′ candidates from the heatmap predictions are then averaged with the logits from the classification branch for the corresponding vertebra, scaling each by the probability of that particular vertebra existing in the scan. The resulting k′ candidates are assembled into a graph, as seen in Figure 5, based on two rules. The first rule is that the axial position of the node above must be greater than that of the node below, ensuring the correct vertebral ordering is enforced.
  • the second is that the Euclidean distance between any two connected nodes must be greater than 3 voxels to ensure that no two nodes from the same physical locations are used.
  • the weights for each node are taken as the averaged logits.
  • the centroid location of each vertebra is then determined by solving the graph from T (top) to B (bottom), finding the longest path and therefore the path with the highest sum of averaged logits; the graph is also solved from B to T, and the path with the larger sum is taken as the correct path.
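The longest-path solve over the candidate graph can be illustrated with a simple dynamic program; the candidate tuples, the single-direction solve, and the 3-voxel spacing check below are simplifications of the two-direction procedure described above, not the patented code.

```python
def best_path(levels):
    # Pick one candidate per vertebral level maximising the summed logits.
    # levels: list (top to bottom) of candidate lists; each candidate is
    # (axial_position, averaged_logit).
    # paths[i][j] = (best score ending at candidate j of level i, backpointer)
    paths = [[(logit, None) for (_, logit) in levels[0]]]
    for i in range(1, len(levels)):
        row = []
        for (pos, logit) in levels[i]:
            best = (float("-inf"), None)
            for j, (prev_pos, _) in enumerate(levels[i - 1]):
                score = paths[i - 1][j][0] + logit
                # Ordering rule: the node above must sit higher axially, and
                # nodes closer than 3 voxels are treated as duplicates.
                if prev_pos > pos and prev_pos - pos > 3 and score > best[0]:
                    best = (score, j)
            row.append(best)
        paths.append(row)
    # Trace back the highest-scoring complete path (top to bottom).
    j = max(range(len(paths[-1])), key=lambda k: paths[-1][k][0])
    chosen = []
    for i in range(len(levels) - 1, -1, -1):
        chosen.append(levels[i][j])
        j = paths[i][j][1]
    return list(reversed(chosen))
```

Note how the ordering rule can override a high-logit candidate: a spurious maximum below the next level's candidate is excluded even if its logit is larger.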
  • the model was first trained with only the heatmap output for 500 epochs and will be referred to as the self-initialization. After the self-initialization, all outputs were predicted and all loss functions were used.
  • Step 2: SINS Element Bone Lesion. Input: vertebral segmentation from Step 1, computed tomography imaging.
  • Output: osteoblastic lesion volume, osteolytic lesion volume, tumour classification (osteolytic, osteoblastic, mixed).
  • the bone lesion element is quantified and automated by calculating the amount of osteolytic and osteoblastic tissue in the vertebrae using the vertebral body segmentation.
  • a vertebra is cropped based on the vertebral body centroid location predictions from Step 1 and a secondary convolutional network is used to segment the vertebral body.
  • the vertebral detection model described in Step 1 was combined with another deep learning network (a U-Net, convolutional neural network (CNN)) yielding a vertebral body segmentation model of the trabecular centrum.
  • a histogram-based analysis is used to define osteolytic and osteoblastic tumour involvement.
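A minimal sketch of such a histogram-based quantification, assuming illustrative HU thresholds (the patent does not state its calibrated values):

```python
import numpy as np

def lesion_volumes(ct_hu, body_mask, voxel_vol_mm3=1.0,
                   lytic_max_hu=0.0, blastic_min_hu=600.0):
    # Count vertebral-body voxels below/above the assumed lytic/blastic
    # HU thresholds and convert counts to volumes.
    hu = ct_hu[body_mask > 0]
    lytic = float(np.sum(hu < lytic_max_hu)) * voxel_vol_mm3
    blastic = float(np.sum(hu > blastic_min_hu)) * voxel_vol_mm3
    if lytic and blastic:
        label = "mixed"
    elif lytic:
        label = "osteolytic"
    elif blastic:
        label = "osteoblastic"
    else:
        label = "none"
    return lytic, blastic, label
```

The design choice here mirrors the text: lesion type falls out of the same histogram split that produces the two volumes, so no separate classifier is needed for this element.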
  • Step 3: SINS Element Spinal Alignment. The vertebral localization and segmentation from Step 1 were used.
  • the Cobb angle was calculated in both the coronal and sagittal planes from the gradient of a spline curve generated through the vertebral body centroids. The calculated angles were validated against manual measurements of Cobb angles made in the sagittal and coronal planes. Manual Cobb angle measurements (local to the vertebrae of interest and over the whole spine region (cervical, thoracic, lumbar)) were taken on a subset of the Sunnybrook spine SBRT-treated patients with a clinical indication of malalignment. Measurements were made by an orthopedic spine surgeon in fellowship training, with consultation from a fellowship-trained staff spine surgeon, based on the CT imaging available for spine SBRT planning.
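The Cobb-angle calculation from centroid gradients can be sketched as below; finite-difference gradients stand in for the spline of the original text, and the projection into a single plane is assumed to be done beforehand.

```python
import numpy as np

def cobb_angle(centroids):
    # centroids: (N, 2) array of (cranio-caudal, in-plane) positions for
    # one plane (coronal or sagittal).  The Cobb-style angle is the
    # maximum difference between local tangent directions along the curve.
    c = np.asarray(centroids, dtype=float)
    d = np.gradient(c, axis=0)                       # tangent at each centroid
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0]))   # tangent direction in degrees
    return float(ang.max() - ang.min())
```

A straight column of centroids gives 0 degrees, and a terminal 45-degree deviation gives 45 degrees, matching the intuition that the angle measures the spread of endplate/tangent directions.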
  • Step 4: SINS Element Vertebral Body Collapse. Vertebral body collapse was calculated using a combination of deep learning-based methods.
  • the vertebral detection model described in Step 1 was combined with another deep learning network (a U-Net convolutional neural network (CNN)) for segmentation of the vertebral body centrums.
  • the CNN was trained to segment the vertebral body with unfractured vertebrae from metastatically involved spines from the patient cohort study dataset.
  • Vertebral body collapse was quantified utilizing the CNN for vertebral body centrum segmentation developed in Step 1. Using this algorithm, segmentations were generated for each collapsed vertebra of interest and for the 4 adjacent non-collapsed vertebrae (2 proximal and 2 distal).
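One way to turn those segmentations into a collapse measure is to compare the collapsed body's height against the height expected from its four non-collapsed neighbours; using the neighbour mean as the expected height is an assumption, since the text only states that the adjacent segmentations (2 proximal, 2 distal) are used.

```python
import numpy as np

def collapse_percent(target_height, neighbour_heights):
    # Per-cent loss of vertebral body height relative to the height
    # expected from the 2 proximal and 2 distal non-collapsed neighbours.
    expected = float(np.mean(neighbour_heights))
    return 100.0 * (1.0 - target_height / expected)
```

For example, a 15 mm body surrounded by 20 mm neighbours corresponds to 25% collapse.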
  • Step 5: SINS Element Posterolateral Involvement. The neural network from Step 1 was retrained to detect posterolateral tumour involvement.
  • Step 7: Automated quantitative calculation of musculoskeletal health biomarkers. The vertebral localization and segmentation from Step 1 were used.
  • Lumbar vertebrae (L1-L5) vertebral bodies were segmented using the vertebral detection model described in Step 1 combined with another deep learning network (a U-Net convolutional neural network (CNN)) for segmentation of the vertebral body centrums.
  • a psoas muscle region between the midpoints of the L2/L3 and L4/L5 intervertebral discs was isolated based on the vertebrae locations and the curve of the spine found in Step 1.
  • the psoas muscle region was segmented using another deep learning network (a U-Net convolutional neural network (CNN)), with the region of interest derived from the vertebrae locations from Step 1.
  • Muscles segmented for use in biomarker calculations will be defined by the region of the spine imaged and may include: Iliocostalis lumborum, Iliocostalis thoracis, Iliocostalis cervicis, Longissimus thoracis, Longissimus cervicis, Longissimus capitis, Spinalis thoracis, Spinalis cervicis, Spinalis capitis, Semispinalis thoracis, Semispinalis cervicis, Semispinalis capitis, Multifidus, Rotatores brevis, Rotatores longus, Interspinales, Intertransversarii, Levatores costarum, Serratus posterior superior, Serratus posterior inferior, Quadratus lumborum, Psoas major, Psoas minor,
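Given a muscle segmentation and the CT volume, the basic biomarker calculations (volume, density, density distribution) reduce to masked statistics; this generic sketch is not the patented formula set.

```python
import numpy as np

def muscle_biomarkers(ct_hu, muscle_mask, voxel_vol_mm3=1.0):
    # Volume, mean density (HU) and density spread for one segmented muscle.
    hu = ct_hu[muscle_mask > 0]
    return {
        "volume_mm3": float(hu.size) * voxel_vol_mm3,
        "mean_hu": float(hu.mean()),
        "hu_sd": float(hu.std()),   # density distribution summarised as SD
    }
```

The same masked-statistics pattern applies to the vertebral body trabecular centrum for the bone quality/density biomarkers.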
  • Step 8: Automated Quantitative SINS
  • SINS: Spine Instability Neoplastic Score
  • SINS is a qualitative score based on six criteria: mechanical pain of the patient, vertebral level, lesion type (osteolytic, osteoblastic, or mixed), existing vertebral body collapse, malalignment, and presence of posterior element involvement. Total scores are binned into three categories: 0-6 (stable), 7-12 (possible instability), and 13-18 (instability) [6]. This work automates and enhances SINS by using the quantitative aspects of the image-based biomarkers allowing finer distinctions.
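The binning of total scores into the three categories cited above can be written directly:

```python
def sins_category(total_score):
    # Map a total SINS (0-18) to the stability bins: 0-6 stable,
    # 7-12 possible instability, 13-18 instability.
    if not 0 <= total_score <= 18:
        raise ValueError("SINS totals range from 0 to 18")
    if total_score <= 6:
        return "stable"
    if total_score <= 12:
        return "possible instability"
    return "instability"
```

The automated tool replaces the qualitative component scores feeding this total with the quantitative image-based parameters from Steps 2-5.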
  • Step 9: Vertebral Fracture Risk. Predictive models (random forest classifiers, support vector machines) combining patient factors (sex, age, presence of pain), treatment factors (dose and fractions), and the image-based quantitative biomarkers from the previous steps were used to predict vertebral fractures secondary to SBRT.
  • the new prediction model’s performance was evaluated against SINS, with generalizability assessed using 5-fold cross-validation; the quantitative imaging-based biomarkers improved the accuracy of fracture prediction.
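The modelling pattern (a random forest evaluated with 5-fold cross-validation) can be sketched with scikit-learn on synthetic stand-in data; the features and labels below are entirely fabricated for illustration and only the pattern mirrors the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for the cohort features described above: patient
# factors (sex, age, pain), treatment factors (dose, fractions) and
# imaging biomarkers (e.g. lytic volume, collapse %, Cobb angle).
X = rng.normal(size=(100, 8))
y = (X[:, 5] + 0.5 * X[:, 6] + rng.normal(scale=0.5, size=100) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
```

With real cohort data, the per-fold scores would be compared against a baseline that uses the SINS total alone.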
  • the present inventors have also used the deep learning model developed in Step 1 to directly predict vertebral fracture risk from images without calculating the intermediate image-based biomarkers.
  • the detection model described above shares imaging features between its different sub-tasks and this framework was leveraged similarly to extract features regarding the spine’s metastatic involvement.
  • the detection model’s implementation utilizes information from the vertebrae treated and the entire spine at once. These shared features were used to make predictions about fracture risk while using both vertebrae and spine-specific features.
  • the present disclosure provides a method for assessing mechanical stability and/or fracture risk in the metastatically involved spine, comprising: a) obtaining imaging data of a patient’s spine; b) inputting the imaging data into A) a computational algorithm, or B) a machine learning algorithm trained on a dataset of spinal imaging, to determine mechanical stability and/or fracture risk, the algorithm configured to i) calculate mechanical stability and/or fracture risk by calculating features from a feature extractor algorithm and/or image processing algorithm, in particular based on user input; ii) combine said features within a computational decision tool; and iii) combine said features in step ii) with non-imaging patient-specific features to yield predictions on mechanical stability and/or fracture risk, using an optimization scheme based on said imaging data of the patient’s spine.
  • a method for assessing mechanical stability and/or fracture risk in the metastatically involved spine comprising: a) obtaining imaging data of a patient’s spine; b) inputting the imaging data into A) a computational algorithm, or B) a machine learning algorithm trained on a dataset of spinal imaging, to determine mechanical stability and/or fracture risk, the algorithm configured to i) calculate mechanical stability and/or fracture risk by the steps of calculating features from a feature extractor algorithm such as a feature extractor backbone network; ii) combining said features within a computational decision tool comprising convolutional layers and vertebrae specific features derived from convolutional arms; and/or extracting the latent features of each vertebrae; and iii) combining said features combined with said vertebrae specific features in step ii) and with non-imaging patient specific features with dense layers to yield predictions on mechanical stability and/or fracture risk, wherein training is performed by an optimizer based on said imaging data of the patient’s spine.
  • the imaging data of the patient’s spine is CT and/or MR imaging data.
  • the dataset of spinal imaging is comprised of CT and/or MR imaging that includes a clinical cohort of patients with spinal metastases.
  • the feature extractor algorithm is a feature extractor backbone network, which is any one of a ResNet (Residual Network) + Feature Pyramid Network (FPN), Transformer, Inception, Convolutional Neural Network (CNN), Fully Connected Neural Network (FCNN), Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), GIST, U-Net, AlexNet, Visual Geometry Group Network (VGGNet), GoogLeNet, Radiomics, Wavelet Transforms, Fourier Transforms, Edge Detection Algorithms, Region-based Methods, Filter Banks, Texture Descriptors, Color Space Transformations, Geometric Descriptors, Binary Pattern Descriptors, or Keypoint Descriptors.
  • the Edge Detection Algorithms are any one of Sobel, Prewitt, and Canny algorithms; the Region-based Methods are any one of Watershed algorithms and Superpixel Segmentation; the Filter Banks are Gabor filters; the Texture Descriptors are any one of Haralick and Tamura texture descriptors; the Color Space Transformations are any one of Red, Green, Blue (RGB) to Hue, Saturation, Lightness (HSL) and Hue, Saturation, Value (HSV) transformations; the Geometric Descriptors are any one of Zernike moments and Fourier descriptors; the Binary Pattern Descriptors are any one of Completed Local Binary Patterns (CLBP) and Dominant Local Binary Patterns (DLBP); the Keypoint Descriptors are any one of Binary Robust Invariant Scalable Keypoints (BRISK) and Fast Retina Keypoint (FREAK).
  • the feature extractor algorithm is a feature extractor backbone network, which is a ResNet 50 +FPN network.
  • the features calculated through a feature extractor algorithm and the extracted latent features of each vertebra are deep features.
  • a method for determining tumorous involvement of the posterolateral elements of the spine from imaging, comprising: a) obtaining imaging data of a patient’s spine; b) inputting the imaging data into a machine learning algorithm trained on a dataset of spinal imaging to determine involvement of the posterolateral elements of the spine; c) combining the segmented and localized vertebrae and calculating vertebra-specific features from a feature extractor backbone network to determine posterolateral involvement, or retraining a feature extractor backbone network specific to classification of the posterolateral involvement, then using the extracted features to classify, using any one of a classification branch, a machine learning classifier, or a statistical classifier, whether posterolateral involvement is present or not, its extent (bilateral, unilateral), and its location.
  • the vertebra specific features calculated from a feature extractor backbone network are deep features.
  • a method for assessing mechanical stability and/or fracture risk in the metastatically involved spine comprising: a) obtaining imaging data of a patient’s spine; b) inputting the imaging data into a machine learning algorithm trained on a dataset of spinal imaging to determine mechanical stability and/or fracture risk, the machine learning algorithm configured to calculate spinal instability neoplastic score elements, said elements including bone lesions, spinal alignment, vertebral body collapse, and posterolateral involvement; c) wherein calculating each of the elements of bone lesions, spinal alignment, vertebral body collapse, and posterolateral involvement includes segmenting and/or localizing the vertebrae by the steps of calculating features from a feature extractor backbone network and combining said features with convolutional layers, graph networks, and vertebrae-specific features derived from convolutional arms and/or extracting the latent features, combining said features with said vertebrae-specific features and non-imaging patient-specific features with dense layers to yield predictions on mechanical stability and/or fracture risk.
  • the dataset of spinal imaging is comprised of CT and/or MR imaging that includes a clinical cohort of patients with spinal metastases.
  • the elements of bone lesions, spinal alignment, vertebral body collapse, posterolateral involvement are combined with clinical pain and vertebral level to yield an automated spinal instability neoplastic score.
  • the elements of bone lesions, spinal alignment, vertebral body collapse, posterolateral involvement are combined with clinical pain and other patient specific features and vertebral level to yield an Automated SINS score which predicts mechanical stability and/or fracture risk.
  • the elements bone lesions, spinal alignment, vertebral body collapse, posterolateral involvement are combined with quantification of any one or all of musculoskeletal health, clinical pain and other patient specific features and vertebral level to yield an Automated Enhanced SINS type assessment which predicts mechanical stability and/or fracture risk better than the existing SINS.
  • the musculoskeletal health is calculated by combining the segmented and localized vertebrae with another convolutional network that segments and calculates the volumes of the vertebral body trabecular centrum in the 3D image, calculating the density of the vertebral body trabecular bone and the bone density distribution, and further segmenting muscle volume with another convolutional network and calculating the volume, density, and density distribution of the muscle.
  • the musculoskeletal health includes bone quality, bone mass, bone density, muscle quality, and muscle size.
  • the features calculated through a feature extractor algorithm and the extracted latent features of each vertebra are deep features.
  • Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: Fully convolutional one-stage object detection,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019, pp. 9626–9635, doi: 10.1109/ICCV.2019.00972.
  • J. Yi et al., “Object-Guided Instance Segmentation with Auxiliary Feature Refinement for Biological Images,” IEEE Trans. Med. Imaging.


Abstract

Computational tools for automated quantitative assessment of metastatic spinal stability. Computational image-analysis methods, which may include deep learning, are used to automatically quantify spinal geometry/quality and disease impact from medical images (CT and MR), which can be used to predict patient outcomes in terms of spinal stability/fracture risk. Two tools are provided: 1) an interpretable enhanced automated SINS (spinal instability neoplastic score) tool that calculates improved individual quantitative image-based parameters used in SINS and calculates stability through a similar scoring system; 2) a tool that directly estimates spinal stability/fracture risk based on 3D imaging (with or without non-imaging patient-specific clinical data). The automated tools can be used to improve clinical workflows (through automation) and the accuracy, sensitivity, and specificity of spinal stability prediction, which can help clinicians better guide treatment to optimize patient outcomes. The tools are particularly useful in patients planned for stereotactic body radiotherapy (SBRT), given the accessibility of clinical imaging and the high likelihood (14%) of vertebral compression fracture (VCF) and associated mechanical instability following this procedure.
PCT/CA2023/051491 2022-11-08 2023-11-08 System for automated quantitative assessment of metastatic spinal stability Ceased WO2024098147A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2025526244A JP2025536474A (ja) 2022-11-08 2023-11-08 System for automated and quantitative assessment of metastatic spine stability
EP23887238.6A EP4615307A1 (fr) 2022-11-08 2023-11-08 System for automated quantitative assessment of metastatic spinal stability

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263423674P 2022-11-08 2022-11-08
US63/423,674 2022-11-08

Publications (1)

Publication Number Publication Date
WO2024098147A1 true WO2024098147A1 (fr) 2024-05-16

Family

ID=91031537

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2023/051491 Ceased WO2024098147A1 (fr) 2022-11-08 2023-11-08 Système d'évaluation quantitative automatisée de la stabilité rachidienne métastatique

Country Status (3)

Country Link
EP (1) EP4615307A1 (fr)
JP (1) JP2025536474A (fr)
WO (1) WO2024098147A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119205741A (zh) * 2024-11-25 2024-12-27 核工业总医院 Automatic vertebral body fracture identification and analysis system based on artificial intelligence
CN119559226A (zh) * 2024-10-28 2025-03-04 太原理工大学 Multimodal remote sensing image registration method based on co-occurrence filtering and histograms of oriented gradients

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180247020A1 (en) * 2017-02-24 2018-08-30 Siemens Healthcare Gmbh Personalized Assessment of Bone Health
US20190336097A1 (en) * 2014-07-21 2019-11-07 Zebra Medical Vision Ltd. Systems and methods for prediction of osteoporotic fracture risk
US20200069973A1 (en) * 2018-05-30 2020-03-05 Siemens Healthcare Gmbh Decision Support System for Individualizing Radiotherapy Dose



Also Published As

Publication number Publication date
JP2025536474A (ja) 2025-11-06
EP4615307A1 (fr) 2025-09-17


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 23887238; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2025526244; Country of ref document: JP; Kind code of ref document: A)
WWE WIPO information: entry into national phase (Ref document number: 2025526244; Country of ref document: JP)
WWE WIPO information: entry into national phase (Ref document number: 2023887238; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2023887238; Country of ref document: EP; Effective date: 20250610)
WWP WIPO information: published in national office (Ref document number: 2023887238; Country of ref document: EP)