WO2025083687A1 - System and method for predicting pathology assessments from medical images
- Publication number
- WO2025083687A1 (PCT/IL2024/051009)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- radiological
- images
- embeddings
- common
- pathological
- Prior art date
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/40—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for data related to laboratory analysis, e.g. patient specimen analysis
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H70/00—ICT specially adapted for the handling or processing of medical references
- G16H70/60—ICT specially adapted for the handling or processing of medical references relating to pathologies
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present invention relates generally to integrated medical image analysis.
- Imaging has become an indispensable tool in modern healthcare, providing clinicians with valuable insights into the human body's internal structures and functions.
- Various imaging modalities, such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, offer different perspectives and levels of detail, each serving unique diagnostic purposes. These technologies have revolutionized the way medical professionals detect, diagnose, and monitor a wide range of conditions.
- MICCAI Discovery: Integrating Radiology, Pathology, Genomic, and Clinical Data
- a system for predicting non-invasive pathology assessments from medical images includes a processor configured to receive an input radiological image, process the input radiological image to generate a representation in a common semantic space, compare the generated representation to stored pathological representations in the common semantic space, and predict pathology results based on the comparison, where the common semantic space establishes semantic correlations between radiological and pathological domains without spatial alignment of radiological and pathological datasets.
- the processor is further configured to generate radiological embeddings from the input radiological image using a trained feature extractor.
- the processor is further configured to generate the representation in the common semantic space from the generated radiological embeddings using a trained radiological common space (RCS) projecting neural network trained together with a pathological common space projecting neural network.
- the stored pathological representations include histopathological common space (HCS) embeddings generated during a training phase.
- the HCS embeddings are stored in a histopathological database along with associated whole slide images (WSIs) and pathology reports.
- the processor is further configured to use a distance metric to identify a most similar stored pathological representation to the generated representation.
- the distance metric is cosine similarity.
- the processor is further configured to retrieve pathology information associated with the most similar stored pathological representation.
- the pathology information includes a whole slide image and an associated pathology diagnosis and report.
- the common semantic space is established during a training phase using supervised contrastive learning on paired radiological and pathological data.
- the paired radiological and pathological data includes positive pairs of semantically related radiological and pathological images and negative pairs of unrelated radiological and pathological images.
- the radiological images are at least one of magnetic resonance imaging (MRI) images, ultrasound, computed tomography (CT) images and positron emission tomography (PET) images.
- a system for predicting non-invasive pathology assessments from medical images, the system being implemented on a computing device having at least one processor and at least one memory unit.
- the system includes a storage unit and a common space pathoradiomics predictor (CSPP).
- the storage unit stores pathological representations in a common semantic space.
- the CSPP processes an input radiological image to generate a representation in the common semantic space, compares the generated representation to the stored pathological representations, and predicts pathology results based on the comparison, where the common semantic space establishes semantic correlations between radiological and pathological domains without spatial alignment of radiological and pathological datasets.
- the system also includes a CSPP trainer to generate the common semantic space and the stored pathological representations.
- the CSPP trainer includes a database, a radiological feature extractor, a histopathological feature extractor, a radiological common space projecting neural network and a histopathological common space projecting neural network.
- the database stores at least radiological images, histopathological slide images, and diagnoses associated with the histopathological slide images, the database also storing annotations indicating which of the radiological images and histopathological slide images are positive pairs and which of them are negative pairs.
- the radiological feature extractor generates radiological embeddings of the radiological images.
- the histopathological feature extractor generates histopathological embeddings of the histopathological slide images.
- the radiological common space projecting neural network and the histopathological common space projecting neural network transform the radiological and histological embeddings into common space embeddings, the neural networks being trained together using a supervised contrastive loss based on the positive pairs and the negative pairs.
- the CSPP trainer generates as output a trained radiological common space projecting neural network and trained histological common space embeddings.
- the CSPP includes the radiological feature extractor, the trained radiological common space projecting neural network, a common space matcher, and a pathology retriever.
- the radiological feature extractor generates radiological embeddings of the input radiological image.
- the trained radiological common space projecting neural network processes the radiological embeddings and generates therefrom radiological common space embeddings.
- the common space matcher finds a match to the radiological common space embeddings in the trained histological common space embeddings.
- the pathology retriever retrieves, from the database, a histopathological slide image associated with the matched trained histological common space embeddings, and a diagnosis associated with the histopathological slide image.
- the common space matcher uses a distance metric to identify the match.
- the radiological images are at least one of magnetic resonance imaging (MRI) images, ultrasound, computed tomography (CT) images and positron emission tomography (PET) images.
- a method for predicting non-invasive pathology assessments from medical images, the method being implemented on a computing device having at least one processor and at least one memory unit.
- the method includes storing pathological representations in a common semantic space, processing an input radiological image to generate a representation in the common semantic space, comparing the generated representation to the stored pathological representations, and predicting pathology results based on the comparison.
- the common semantic space establishes semantic correlations between radiological and pathological domains without spatial alignment of radiological and pathological datasets.
- the method also includes generating the common semantic space and the stored pathological representations.
- the generating includes storing at least radiological images, histopathological slide images, and diagnoses associated with the histopathological slide images, the database also storing annotations indicating which of the radiological images and histopathological slide images are positive pairs and which of them are negative pairs, first extracting radiological embeddings from the radiological images, second extracting histopathological embeddings of the histopathological slide images, training a radiological common space projecting neural network and a histopathological common space projecting neural network using a supervised contrastive loss based on the positive pairs and the negative pairs to transform the radiological and histological embeddings into common space embeddings, and generating as output a trained radiological common space projecting neural network and trained histological common space embeddings.
- the processing includes implementing the first extracting to generate radiological embeddings of the input radiological image, processing the radiological embeddings and generating therefrom radiological common space embeddings using the trained radiological common space projecting neural network, finding a match to the radiological common space embeddings in the trained histological common space embeddings, and retrieving, from the database, a histopathological slide image associated with the matched trained histological common space embeddings and a diagnosis associated with the histopathological slide image.
- the finding uses a distance metric to identify the match.
- the radiological images are at least one of magnetic resonance imaging (MRI) images, ultrasound, computed tomography (CT) images and positron emission tomography (PET) images.
- Fig. 1 is a block diagram illustration of a common space pathoradiomics predictor system, constructed and operative according to an embodiment of the present invention;
- Fig. 2A is a schematic illustration of an MRI image with a lesion;
- Fig. 2B is a schematic illustration of a histopathological slide for the lesion of Fig. 2A;
- Fig. 3 is a block diagram illustration of the elements of a common space pathoradiomics predictor trainer for the predictor system of Fig. 1, constructed and operative according to an embodiment of the present invention;
- Fig. 4 is a schematic illustration of multiple MRI images and histopathological slides with corresponding lesions, useful for the trainer of Fig. 3;
- Fig. 5 is a schematic illustration of a histopathological database, useful in the system of Fig. 1; and
- Fig. 6 is a block diagram illustration of the elements of a common space pathoradiomics predictor for the predictor system of Fig. 1, constructed and operative according to an embodiment of the present invention.
- Applicant has realized that existing approaches to medical image analysis often fail to fully leverage the complementary information available across different imaging modalities and data types. Applicant has further realized that, while the articles listed above have attempted to correlate radiological and pathological data, these efforts have largely been constrained to cases where spatial alignment of the data is possible, such as in breast or prostate imaging of tumors.
- a system which determines semantic correlations between these domains may be able to predict pathology results from an input medical image, without the need for spatial alignment of the datasets.
- Such a system may provide a more holistic view of tumor heterogeneity, may shorten the diagnostic timeframe, may provide more precise diagnoses, and may ultimately improve the quality and efficacy of personalized treatment strategies. In addition, it may aid in the identification of novel biomarkers for tumor characterization and subtype classification.
- Fig. 1 illustrates a pathoradiomics system 100.
- System 100 comprises a common space pathoradiomics predictor (CSPP) 102 and a CSPP trainer 120.
- CSPP 102 may predict pathology results from an input medical image. This approach may enable the use of radiological data to predict or infer pathological information, potentially allowing for non-invasive pathological assessments based on radiological imaging.
- CSPP 102 may utilize a common space concept to establish semantic correlations between radiological and pathological domains.
- This common space may represent a shared semantic framework where features from both radiological and pathological data may be mapped and then compared. By projecting data from these different modalities onto a common semantic space, CSPP 102 may leverage relationships between radiological characteristics and pathological findings.
- CSPP 102 may employ machine learning techniques to learn and refine the mapping between radiological features and pathological outcomes within the common semantic space. This learning process may involve analyzing paired radiological and pathological data to identify patterns and relationships that can be used for prediction.
- This approach may have potential applications in various areas of medical diagnosis and research, including tumor characterization, disease progression monitoring, and treatment response assessment. Furthermore, it may process many different types of image and diagnostic data.
- pathoradiomics system 100 is described using two particular types of input, radiological images and histopathological images, examples of which are shown in Figs. 2A and 2B.
- Fig. 2A illustrates an exemplary radiology image 10 obtained through magnetic resonance imaging (MRI), of a pair of breasts 11.
- Image 10 displays varying shades of gray, indicating different densities or compositions of the tissues being imaged.
- Exemplary radiology image 10 shows a lesion 13 in one of breasts 11.
- Due to the limited specificity of MRI (the resolution is typically 1 - 3 mm), the lesions are sent to biopsy, whose results are shown in Fig. 2B.
- Fig. 2B illustrates an exemplary, digitized histopathological slide 12 of a tissue sample or cellular material for the lesion shown in Fig. 2A.
- histopathological slides contain tissue sections to be examined under a microscope and, for this purpose, the material is typically stained with specific dyes to highlight different cellular structures or components.
- hematoxylin and eosin (H&E) staining was used to differentiate between nuclei and cytoplasm in the tissue sample, where cell nuclei are stained blue, and the cytoplasm and extracellular matrix elements are stained pink.
- the resolution of the digital slide is 0.25 micron/pixel.
- Other staining techniques such as immunohistochemistry or special stains, may be used to highlight specific proteins, carbohydrates, or other molecules of interest within the tissue sample.
- the exemplary histopathological slide 12 has been digitized to create a whole slide image (WSI), which is normally analyzed using digital pathology techniques.
- the WSI may be stored in a database along with other relevant data, such as the patient's medical history, diagnosis, and treatment plan.
- the WSI may be analyzed using various image analysis algorithms to extract features and patterns that may be indicative of specific diseases or conditions.
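- As a hedged illustration of how such WSI analysis might begin (the patent does not specify an implementation), the sketch below tiles a digitized slide into patches for downstream feature extraction; the OpenSlide library, the 256-pixel tile size, and the brightness-based background filter are assumptions rather than details from the disclosure.

```python
# Illustrative sketch only (assumed tooling: OpenSlide; assumed tile size and
# background threshold). Tiles a digitized WSI into patches for feature extraction.
import numpy as np
import openslide

def tile_wsi(wsi_path: str, tile_size: int = 256):
    """Yield (x, y, RGB tile) triples covering the slide at full resolution (level 0)."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions  # level-0 size in pixels
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            region = slide.read_region((x, y), 0, (tile_size, tile_size))
            tile = np.asarray(region.convert("RGB"))
            if tile.mean() < 235:  # skip mostly-white background tiles
                yield x, y, tile
    slide.close()
```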
- CSPP trainer 120 comprises a trained radiological image neural network 122, a radiological common space (RCS) projecting neural network in training 124, a trained histopathological image neural network 126, a histopathological common space (HCS) projecting neural network in training 128, and a contrastive loss function 130.
- RCS radiological common space
- HCS histopathological common space
- CSPP trainer 120 may utilize paired input data, including both positive and negative pairs, to build a common semantic space.
- Positive pairs may comprise radiological and histopathological images that are semantically related (i.e. indicate the same or a similar diagnosis, as determined by experts), while negative pairs may include unrelated image pairs.
- Fig. 4 shows two MRI images, labelled 10A and 10B, with lesions 13A and 13B, respectively.
- Fig. 4 also shows their two associated WSI slides, labelled 12A and 12B.
- MRI image 10A and WSI slide 12A are also shown in Figs. 2A and 2B, respectively.
- patient A was diagnosed with invasive ductal carcinoma (IDC), a malignant breast cancer, and patient B was diagnosed with fibroadenoma, a benign breast lesion.
- MRI image 10A is positively paired with WSI slide 12A, as is MRI image 10B with WSI slide 12B. This is noted in Fig. 4 with solid arrows.
- MRI image 10A can be negatively paired with WSI slide 12B, and MRI image 10B can also be negatively paired with WSI slide 12A. This is noted in Fig. 4 with dashed arrows.
- domain experts may create annotations for both positive and negative pairings between radiological scans and their known, corresponding histopathological slides. It will be appreciated that these expert-curated associations, informed by semantic content, are established without the constraint of spatial alignments.
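- One minimal sketch of how such expert-curated pairings might be recorded is shown below; the identifiers and field names are hypothetical, not taken from the disclosure.

```python
# Illustrative sketch (assumed data layout): expert-curated pairing annotations
# between radiological scans and histopathological slides.
from dataclasses import dataclass

@dataclass
class PairAnnotation:
    radiology_id: str   # identifier of the MRI/CT/US/PET study
    slide_id: str       # identifier of the whole slide image (WSI)
    is_positive: bool   # True: semantically related (same/similar diagnosis); False: unrelated
    diagnosis: str      # expert diagnosis associated with the slide

# For the Fig. 4 example: solid arrows are positive pairs, dashed arrows negative pairs.
pairs = [
    PairAnnotation("MRI_10A", "WSI_12A", True,  "invasive ductal carcinoma"),
    PairAnnotation("MRI_10B", "WSI_12B", True,  "fibroadenoma"),
    PairAnnotation("MRI_10A", "WSI_12B", False, "fibroadenoma"),
    PairAnnotation("MRI_10B", "WSI_12A", False, "invasive ductal carcinoma"),
]
```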
- radiological and histopathological (WSI) data each undergoes distinct processes to extract their respective embeddings from their respective image neural networks 122 and 126.
- the embeddings of both modalities are subsequently projected onto a shared semantic space during shared training of their common space projecting neural networks 124 and 128.
- Trained radiological image neural network 122 may be any suitable feature extractor for radiological images, which may generate radiological embeddings that capture the features of radiological images related to the lesion of interest. These embeddings may then be fed into RCS projecting neural network in training 124, which may attempt to project them onto the common semantic space.
- trained histopathological image neural network 126 may be any suitable feature extractor for histopathological images, which may produce histopathological embeddings that represent characteristics of the tissue samples related to the lesion of interest. HCS projecting neural network in training 128 may then attempt to project these embeddings into the same common semantic space.
- It will be appreciated that the generation of feature embedding vectors from raw imaging data is tailored for each combination of modality and pathology.
- CSPP trainer 120 may employ scenario-specific feature extractors, such as image neural networks 122 and 126, which are specifically tailored to represent the key aspects of each modality and pathology while being aligned in a way that they reflect the same underlying lesion.
- the scenario-specific feature extractors transform the images into compact embedding vectors, capturing the deep semantic relationships of the scenario and reducing dimensionality in the process. These vectors retain vital information and capture the unique patterns, textures, and features from each image that are relevant to the scenario, making them suitable for further analytical stages.
- One scenario might be the IDH (isocitrate dehydrogenase) mutation status, which is a critical factor in the diagnosis of gliomas (i.e. brain tumors) and in their treatment decision-making.
- For this scenario, the pathology feature extractor (e.g. trained histopathological image neural network 126) might be designed to extract features that can be used to identify IDH mutation status in gliomas.
- One such feature extractor may utilize the classifier described in US provisional patent application 63/594,111, filed October 30, 2023, and entitled “IDH Mutation Status Prediction in Gliomas Using H&E Slides and Deep Learning”, commonly owned by Applicant and incorporated herein by reference.
- trained histopathological image neural network 126 segments H&E slides into tiles and extracts features through a two-step process.
- the extractor uses a self-supervised Vision Transformer (EsViT) neural network to extract features from histopathology slides of glioma.
- This self-supervised approach allows the neural network to learn meaningful representations from unlabeled data, making it more robust and capable of generalizing across diverse datasets.
- In the second step, a Deep Multiple Instance Learning (DeepMIL) classifier ensures that neural network 126 focuses on the critical regions of the slides, improving the accuracy of subtype identification and enhancing the overall pathology representation.
- trained histopathological image neural network 126 outputs histopathological embeddings from the last hidden layer of the neural network (rather than the final pathology classifications from the output layer as taught by 63/594,111). These embeddings focus on histopathological patterns associated with IDH mutant and wild-type gliomas, thus ensuring that neural network 126 represents the mutation status accurately and efficiently.
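- A minimal sketch of attention-based multiple instance learning pooling in the spirit of DeepMIL is shown below; it assumes PyTorch and illustrative embedding dimensions, and is not the classifier described in the provisional application.

```python
# Illustrative sketch (assumed architecture and dimensions): attention-based multiple
# instance learning (MIL) pooling that aggregates tile-level embeddings from a slide
# into a single slide-level embedding.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, tile_dim: int = 384, attn_dim: int = 128):
        super().__init__()
        # Attention scores weigh how much each tile contributes to the slide embedding.
        self.attention = nn.Sequential(
            nn.Linear(tile_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, tile_embeddings: torch.Tensor) -> torch.Tensor:
        # tile_embeddings: (num_tiles, tile_dim), e.g. self-supervised features of H&E tiles
        scores = self.attention(tile_embeddings)                   # (num_tiles, 1)
        weights = torch.softmax(scores, dim=0)                     # attention over tiles
        slide_embedding = (weights * tile_embeddings).sum(dim=0)   # (tile_dim,)
        return slide_embedding
```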
- For the radiology side of this scenario, the feature extractor (such as radiological image neural network 122) may be formed of a UNet-based feature extractor to identify and segment the tumor core, the enhanced tumor around the core, and any edema around the enhanced tumor.
- This may allow radiological image neural network 122 to generate a comprehensive representation of the lesion's morphology, including both the tumor and its surrounding impact on the brain tissue. Such a representation is important for precise glioma classification and for monitoring of tumor dynamics, ensuring that it reflects the lesion's structure and behavior with sufficient precision to correlate with the pathology features (e.g. the patterns associated with IDH mutant and wild-type gliomas) extracted by histopathological image neural network 126.
- the extracted features are thus features from the two modalities (MRI and H&E slides) that are related to the particular scenario (IDH gliomas).
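- As a hedged illustration of how region-aware radiological features might be pooled into a lesion-level embedding (the pooling scheme and tensor shapes below are assumptions, not the disclosed method), consider:

```python
# Illustrative sketch: masked average pooling of feature maps over the regions
# segmented by a UNet (e.g. tumor core, enhancing tumor, edema).
import torch

def region_pooled_embedding(feature_map: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """
    feature_map: (C, D, H, W) features for one MRI study.
    masks:       (R, D, H, W) binary masks, one per segmented region.
    Returns a (R * C,) embedding: per-region average features, concatenated.
    """
    parts = []
    for r in range(masks.shape[0]):
        mask = masks[r].unsqueeze(0)                                # (1, D, H, W)
        voxels = mask.sum().clamp(min=1.0)                          # avoid division by zero
        pooled = (feature_map * mask).sum(dim=(1, 2, 3)) / voxels   # (C,)
        parts.append(pooled)
    return torch.cat(parts, dim=0)
```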
- the common semantic space may serve as a unified representation incorporating features from both radiological and histopathological domains, enabling data from one to be compared and analyzed with the other.
- the common space may be built from pairing the radiological and WSI data, ensuring that embeddings from semantically related radiological and WSI images are close in the shared space while embeddings that are not semantically related are far from each other in the common space.
- RCS projecting neural network in training 124 and HCS projecting neural network in training 128 may be any suitable neural network.
- both of them may be feed-forward neural networks, composed of an input layer, multiple hidden layers, and an output layer.
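- A minimal sketch of such a feed-forward projection network is given below; the layer sizes, the ReLU activations, and the L2 normalization are assumptions for illustration, not dimensions taken from the disclosure.

```python
# Illustrative sketch: a feed-forward projection network mapping a modality-specific
# embedding (radiological or histopathological) into the common semantic space;
# one such head per modality, trained jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSpaceProjector(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 512, common_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, common_dim),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # L2-normalize so cosine similarity in the common space is well behaved.
        return F.normalize(self.net(embedding), dim=-1)

# One projector per modality (input dimensions are assumptions):
rcs_projector = CommonSpaceProjector(in_dim=1024)  # radiological common space head
hcs_projector = CommonSpaceProjector(in_dim=384)   # histopathological common space head
```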
- Contrastive loss function 130 may provide a supervised loss function updating both RCS projecting neural network in training 124 and HCS projecting neural network in training 128, and, as a result, may shape the common semantic space.
- An exemplary supervised contrastive learning system is described in Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., & Krishnan, D. (2020). Supervised Contrastive Learning. Advances in Neural Information Processing Systems, 33, 18661-18673.
- An exemplary supervised contrastive loss function for CSPP trainer 120 may be:
- Equation 1, in which:
- N is the total number of samples currently being processed in a batch.
- D represents the distance (e.g., cosine similarity distance) between a radiology embedding and its paired pathology embedding. This distance measures how far apart the embeddings of the radiology and pathology data points are in the common space.
- Indicator variables are set to mark whether a given pair is a positive pair or a negative pair (and whether a positive pair comes from the same patient or from different patients).
- m_same is a margin parameter for positive pairs from different patients, which defines the allowable distance between embeddings of different patients who belong to the same class.
- m_diff is a margin parameter for negative pairs, which defines the minimum distance required between pairs from different classes to ensure separation in the common space.
- contrastive loss function 130 may operate to pull positive pairs closer within the common embedding space, while pushing apart negative pairs. As can be seen from Equation 1, positive pairs may be those belonging to the same tumor subtype or diagnosis while negative pairs may be those from a different subtype or diagnosis, as defined by an expert.
- As mentioned above, contrastive loss function 130 may implement supervised contrastive learning (i.e. it uses the pair labels).
- Contrastive loss function 130 may pull positive pairs closer by minimizing the distance between embeddings of positive pairs, ensuring that the radiological image and its corresponding histopathological slide are close in the semantic space and may push apart the negative pairs by maximizing the distance between embeddings of negative pairs.
- non-matching radiological images and histopathological slides may be separated in the semantic space, within a predefined margin.
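- As one hedged illustration (not necessarily the patent's Equation 1, which is not reproduced here), a margin-based supervised contrastive loss consistent with the terms defined above might take the form:

```latex
% Illustrative reconstruction only; p_i and s_i are assumed indicator variables.
\mathcal{L} \;=\; \frac{1}{N}\sum_{i=1}^{N}\Big[\,
      p_i\Big( s_i\, D_i^{2} \;+\; (1-s_i)\,\max\!\big(0,\; D_i - m_{\mathrm{same}}\big)^{2}\Big)
 \;+\; (1-p_i)\,\max\!\big(0,\; m_{\mathrm{diff}} - D_i\big)^{2}\Big]
```

- Here p_i would be 1 for a positive pair and 0 for a negative pair, s_i would be 1 when a positive pair comes from the same patient, and D_i is the common-space distance for pair i; positive pairs are pulled together (to zero, or to within m_same across patients) while negative pairs are pushed at least m_diff apart.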
- CSPP trainer 120 may uniquely interlink radiology and pathology images as two intertwined yet distinct modalities.
- CSPP trainer 120 may output the trained RCS projecting neural network and the final HCS embeddings.
- Fig. 5 illustrates a histopathological database 140 which may store both the input WSI slides, each having a slide ID, and their associated final HCS embeddings output from CSPP trainer 120.
- histopathological database 140 may provide a repository of pre-processed histopathological data in a format that is directly comparable to radiological data within the common semantic space.
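- A minimal sketch of one possible record layout for histopathological database 140 is shown below; the field names are hypothetical and chosen only to mirror the stored items described above.

```python
# Illustrative sketch (assumed schema): a record in histopathological database 140,
# pairing each whole slide image (by slide ID) with its final HCS embedding,
# diagnosis, and pathology report.
from dataclasses import dataclass
import numpy as np

@dataclass
class HistopathologyRecord:
    slide_id: str              # unique identifier of the WSI
    wsi_path: str              # location of the digitized whole slide image
    hcs_embedding: np.ndarray  # final common-space embedding from CSPP trainer 120
    diagnosis: str             # associated pathology diagnosis
    report: str                # associated pathology report

# A minimal in-memory stand-in for database 140, keyed by slide ID:
database_140: dict[str, HistopathologyRecord] = {}
```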
- Trained CSPP 102 comprises trained radiological image neural network 122, trained RCS projecting neural network, here labeled 134, histopathological database 140, a common space matcher 136, and a pathology retriever 138.
- trained CSPP 102 may receive an input radiology image, such as image 10, as its sole input.
- Trained radiological image neural network 122 may generate radiological embeddings from radiology image 10.
- Trained CSPP 102 may then utilize now trained RCS projecting neural network 134 to project these radiological embeddings onto the common semantic space, resulting in the RCS embeddings for the input radiology image.
- Common space matcher 136 may compare the generated RCS embeddings with the final HCS embeddings generated by CSPP trainer 120 and stored in histopathological database 140.
- Common space matcher 136 may be any suitable similarity searcher. For instance, it may use a distance metric, such as cosine similarity.
- Common space matcher 136 may identify the most similar HCS embedding to the RCS embedding generated from input radiology image 10.
- Pathology retriever 138 may then retrieve the corresponding pathology information associated with the matched HCS embedding from histopathological database 140, using the slide ID as the unique identifier of each slide in database 140. The information retrieved is typically both a whole slide image and its associated pathology diagnosis and report.
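- Putting the matching and retrieval steps together, a hedged sketch (function and variable names are hypothetical; it reuses the record layout sketched earlier) might look like:

```python
# Illustrative sketch: cosine-similarity matching of an input RCS embedding against
# the stored HCS embeddings, followed by retrieval of the associated slide,
# diagnosis, and report by slide ID.
import numpy as np

def predict_pathology(rcs_embedding: np.ndarray,
                      hcs_embeddings: np.ndarray,   # (num_slides, common_dim)
                      slide_ids: list,
                      database_140: dict) -> dict:
    # Cosine similarity between the query and every stored HCS embedding.
    query = rcs_embedding / np.linalg.norm(rcs_embedding)
    stored = hcs_embeddings / np.linalg.norm(hcs_embeddings, axis=1, keepdims=True)
    similarities = stored @ query
    best = slide_ids[int(np.argmax(similarities))]   # most similar stored representation
    record = database_140[best]                      # e.g. a HistopathologyRecord
    return {"slide_id": best,
            "wsi_path": record.wsi_path,
            "diagnosis": record.diagnosis,
            "report": record.report}
```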
- trained CSPP 102 may leverage the common semantic space established during training to infer pathological characteristics from input radiological data alone.
- the histological information is effectively encapsulated within the HCS embeddings stored in histopathological database 140, eliminating the need for real-time processing of histopathological images during inference.
- This approach may enable non-invasive pathological assessments based solely on radiological imaging.
- trained CSPP 102 may predict pathological features or outcomes without direct access to histopathological slides.
- an MRI image may be used to identify a biomarker, such as the IDH biomarker.
- CSPP 100 may, in other scenarios, be able to learn radiological features that correlate with pathology and genomic characteristics.
- a notable advancement is the ability of CSPP 100 to predict pathology (any one or both of textual or visual pathological characteristics) based on radiology.
- CSPP 100 may aid in subtype classification and identification of novel biomarkers for tumor characterization.
- CSPP 100 may significantly enhance the accuracy of oncological diagnoses by integrating radiology and pathology insights. This interdisciplinary approach may not only accelerate the diagnostic process, ensuring timely interventions, but also may facilitate more personalized and cost-effective treatment decisions. Some of the potential applications include:
- CSPP 100 may be used to enhance diagnostic accuracy for breast lesions categorized as BI-RADS 4 (Breast Imaging Reporting and Data System category 4), which are classified as uncertain.
- Lung cancer biomarker prediction using CT imaging: Molecular biomarkers are critical in determining lung cancer treatment strategies. CSPP 100 may leverage CT imaging to non-invasively predict these biomarkers.
- CSPP 100 may define the staging of colon adenocarcinoma by analyzing intricate patterns present in MRI data.
- CSPP 100 may predict pivotal molecular characteristics of brain tumors, including the status of IDH mutations and of other molecular markers, such as O6-methylguanine-methyltransferase (MGMT) promoter methylation, by analyzing multi-parametric MRI data.
- CSPP 100 may employ MRI data for the non-invasive characterization and prediction of the stage of the disease.
- Common space pathoradiomics predictor 102 and CSPP trainer 120 may be implemented on any suitable apparatus.
- This apparatus may be specially constructed for the desired purposes, or it may comprise a computing device or system typically having at least one processor and at least one memory, selectively activated or reconfigured by a computer program stored in the computer.
- the resultant apparatus, when instructed by software, may turn the general-purpose computer into inventive elements as discussed herein.
- the instructions may define the inventive device in operation with the computer platform for which it is desired.
- Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk, including optical disks, magnetic-optical disks, read-only memories (ROMs), volatile and non-volatile memories, random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, Flash memory, disk-on-key or any other type of media suitable for storing electronic instructions and capable of being coupled to a computer system bus.
- the computer readable storage medium may also be implemented in cloud storage.
- Some general-purpose computers may comprise at least one communication element to enable communication with a data network and/or a mobile communications network.
Abstract
A system for predicting non-invasive pathology assessments from medical images includes a storage unit and a common space pathoradiomics predictor (CSPP). The storage unit stores pathological representations in a common semantic space. The CSPP processes an input radiological image to generate a representation in the common semantic space, compares the generated representation to the stored pathological representations, and predicts pathology results based on the comparison. The common semantic space establishes semantic correlations between radiological and pathological domains without spatial alignment of radiological and pathological datasets.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363590446P | 2023-10-15 | 2023-10-15 | |
| US63/590,446 | 2023-10-15 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025083687A1 (fr) | 2025-04-24 |
Family
ID=95448547
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IL2024/051009 (WO2025083687A1, pending) | System and method for predicting pathology assessments from medical images | 2023-10-15 | 2024-10-15 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025083687A1 (fr) |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220039681A1 (en) * | 2014-09-11 | 2022-02-10 | The Medical College Of Wisconsin, Inc. | Systems and Methods for Estimating Histological Features From Medical Images Using a Trained Model |
| US20220012877A1 (en) * | 2015-08-14 | 2022-01-13 | Elucid Bioimaging Inc. | Quantitative imaging for detecting histopathologically defined plaque fissure non-invasively |
| US20230106440A1 (en) * | 2017-11-22 | 2023-04-06 | Arterys Inc. | Content based image retrieval for lesion analysis |
| US20220028064A1 (en) * | 2018-11-19 | 2022-01-27 | Koninklijke Philips N.V. | Characterizing lesions in radiology images |
| US20210110541A1 (en) * | 2019-10-11 | 2021-04-15 | Case Western Reserve University | Combination of radiomic and pathomic features in the prediction of prognoses for tumors |
| WO2023011936A1 (fr) * | 2021-08-02 | 2023-02-09 | Koninklijke Philips N.V. | Procédé et système de prédiction d'histopathologie de lésions |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24879304; Country of ref document: EP; Kind code of ref document: A1 |