

Machine-learning-based systems and methods for classifying masses in medical images

Info

Publication number
WO2025175223A1
Authority
WO
WIPO (PCT)
Prior art keywords
mass
image
radiomic
class
mass component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/016112
Other languages
French (fr)
Inventor
Maryellen Giger
Heather Whitney
Ernst Lengyel
Roni Yoeli-Bik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Chicago
Original Assignee
University of Chicago
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Chicago filed Critical University of Chicago
Publication of WO2025175223A1
Legal status: Pending


Classifications

    • G PHYSICS › G06 COMPUTING OR CALCULATING; COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis › G06T 7/10 Segmentation; Edge detection › G06T 7/11 Region-based segmentation
    • G06T 7/00 Image analysis › G06T 7/0002 Inspection of images, e.g. flaw detection › G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality › G06T 2207/10072 Tomographic images › G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10 Image acquisition modality › G06T 2207/10132 Ultrasound image
    • G06T 2207/20 Special algorithmic details › G06T 2207/20081 Training; Learning
    • G06T 2207/20 Special algorithmic details › G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing › G06T 2207/30004 Biomedical image processing › G06T 2207/30096 Tumor; Lesion

Definitions

  • Ovarian cancer is the most lethal gynecological malignancy and the fifth-leading cause of cancer deaths among women [1], resulting in substantial interest in improving noninvasive diagnosis.
  • Ultrasound imaging is the initial modality for diagnosing adnexal masses because it is safe, widely available, quick to complete, and relatively inexpensive [2].
  • Accurate diagnosis of adnexal masses nevertheless presents significant challenges due to the low incidence of ovarian cancer and the frequent occurrence of ovarian masses.
  • Sonographically indeterminate masses may be further evaluated by magnetic resonance imaging (MRI) examinations, which provide enhanced soft-tissue characterization and can potentially reduce false positives and unnecessary surgeries for asymptomatic patients with benign masses [2, 10, 11].
  • MRI can have a high negative predictive value and relatively high specificity [3, 12-14].
  • this imaging modality is, however, expensive, time-consuming, and not widely available. Consequently, improving the performance of sonographic assessments is of clinical interest, and with recent developments in computational power and data collection, more quantitative imaging assessments that may lead to improved noninvasive diagnostic accuracy are now possible [15, 16].

SUMMARY
  • AI/CADx: artificial-intelligence/computer-aided-diagnosis.
  • image-based phenotypes, i.e., radiomic features.
  • these models may also decrease interpretative variability and human error, thereby improving treatment planning [15].
  • AI/CADx-based models are still not part of standard clinical practice for adnexal-mass diagnosis.
  • a method for classifying a mass in a medical image includes segmenting the medical image into a mass image and a background image, separating the mass image into a first mass component and a second mass component, extracting a set of radiomic-feature values from the first mass component and the second mass component, and processing the set of radiomic-feature values to classify the mass as malignant or benign.
  • the mass image may be separated by clustering the pixels of the mass image (e.g., using fuzzy c-means clustering).
  • the method also includes outputting an indication that the mass is malignant or benign. The indication may be used to screen for, diagnose, treat, or monitor cancer in a human patient from which the medical image was obtained.
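The four-step method above can be sketched end-to-end in a toy pipeline. Everything below is an illustrative stand-in (intensity thresholding instead of the trained segmentor, a median split instead of fuzzy c-means, two made-up features, and a fixed decision threshold), not the patented models:

```python
import numpy as np

def segment(image, bg_level=0.1):
    """Step 1: crude intensity-threshold mass/background split (stand-in for a trained model)."""
    mask = image > bg_level
    return np.where(mask, image, 0.0), mask

def separate(mass, mask):
    """Step 2: split mass pixels at the median intensity (stand-in for fuzzy c-means)."""
    t = np.median(mass[mask])
    return mask & (mass < t), mask & (mass >= t)   # hypo-like, hyper-like components

def extract_features(mass, hypo, hyper):
    """Step 3: two illustrative radiomic-feature values (not the study's nine features)."""
    frac_hypo = hypo.sum() / (hypo.sum() + hyper.sum())
    contrast = mass[hyper].mean() - mass[hypo].mean()
    return np.array([frac_hypo, contrast])

def classify(features, threshold=0.5):
    """Step 4: toy decision rule (the study trained a linear discriminant analysis classifier)."""
    return "malignant" if features[1] > threshold else "benign"

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 0.05, (64, 64))                 # dark background
image[16:48, 16:48] = rng.uniform(0.2, 1.0, (32, 32))    # brighter mass region
mass, mask = segment(image)
hypo, hyper = separate(mass, mask)
features = extract_features(mass, hypo, hyper)
label = classify(features)
```

Each stub can be swapped for the corresponding embodiment (U-net segmentor, fuzzy c-means separator, radiomic-feature processor, trained classifier) without changing the surrounding flow.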
  • FIG. 1 shows a machine-learning (ML) pipeline for classifying an adnexal mass in a medical image, in some embodiments.
  • FIG. 2 is a flowchart showing exclusion criteria and resulting eligible cases and masses.
  • FIG. 3 shows an artificial-intelligence/computer-aided-diagnosis (AI/CADx) pipeline for adnexal-mass diagnosis, in some embodiments.
  • FIG. 4 shows a receiver-operating-characteristic (ROC) analysis for the task of classifying adnexal masses as malignant or benign.
  • the image segmentor 104 may implement any type of supervised or unsupervised image-segmentation technique known in the art, examples of which include, but are not limited to, thresholding, clustering (e.g., k-means clustering), compression-based segmentation, histogram-analysis-based segmentation, region-growing methods, graph partitioning, and watershed transformations. Note that some of these techniques (e.g., thresholding) do not use an ML model. Accordingly, the MLM 134 is not always necessary and may be excluded from some of the present embodiments.
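As a concrete example of the histogram-analysis-based thresholding techniques listed above (not the trained segmentation model used in the study), Otsu's method selects the intensity threshold that maximizes the between-class variance of the pixel histogram; a minimal numpy sketch:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()                          # histogram probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                              # weight of the "background" class
    w1 = 1.0 - w0                                  # weight of the "mass" class
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)       # class means (guarded divisions)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 1e-12, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance per threshold
    return centers[np.argmax(between)]

rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (64, 64))              # dark background
img[20:44, 20:44] = rng.normal(0.8, 0.05, (24, 24))  # bright mass
t = otsu_threshold(img)
mass_mask = img > t                                # crude mass/background segmentation
```

The resulting `mass_mask` plays the role of the mass image 106; the complement is the background image 108.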
  • the mass-image separator 110 separates the mass image 106 into a first mass component 112 and a second mass component 114.
  • the first mass component 112 is a “hyperechoic” component that represents regions of the adnexal mass 130 that are relatively denser and therefore produce strong acoustic reflections. These hyperechoic regions appear relatively brighter in the images 102 and 106.
  • the second mass component 114 is a “hypoechoic” component that represents regions of the adnexal mass 130 that are relatively less dense and therefore produce weaker acoustic reflections. These hypoechoic regions appear relatively darker in the images 102 and 106.
  • the mass image 106 may be separated into the mass components 112 and 114 based on the pixel values of the mass image 106.
  • the mass-image separator 110 uses an MLM 144 that implements clustering of the pixels forming the mass image 106.
  • the MLM 144 may implement “hard” clustering by assigning each pixel of the mass image 106 to either the first mass component 112 or the second mass component 114 (but not both of the mass components 112 and 114). In this case, the union of the mass components 112 and 114 equals the unseparated mass image 106.
  • the MLM 144 may implement “soft” or “fuzzy” clustering by assigning each pixel of the mass image 106 to one or both of the mass components 112 and 114. In this case, some of the pixels may belong to both of the mass components 112 and 114.
  • the MLM 144 implements fuzzy c-means clustering. However, the MLM 144 may implement another type of clustering technique without departing from the scope hereof.
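A minimal fuzzy c-means implementation on one-dimensional pixel intensities (two clusters, fuzzifier m = 2) illustrates the soft assignments described above; this sketch is not the trained MLM 144:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, seed=0):
    """Soft-cluster 1-D samples into c clusters; returns sorted centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)        # random memberships, rows sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)           # membership-weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    order = np.argsort(centers)
    return centers[order], u[:, order]

rng = np.random.default_rng(2)
# synthetic mass pixels: darker (hypoechoic-like) and brighter (hyperechoic-like) intensities
pixels = np.concatenate([rng.normal(0.2, 0.05, 600), rng.normal(0.8, 0.05, 400)])
centers, u = fuzzy_cmeans_1d(pixels)
in_hypo = u[:, 0] >= 0.5   # soft memberships thresholded, e.g. for display
```

Because the memberships are soft, a pixel near the intensity boundary can carry appreciable membership in both components, matching the "fuzzy" behavior described above.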
  • the radiomic-feature processor 116 processes the mass components 112 and 114 to extract a set of radiomic-feature values 136 that quantify a corresponding set of radiomic features.
  • Radiomic features are metrics that quantitatively characterize certain aspects, properties, or signatures within medical images. Of particular importance for classifying masses are metrics that are correlated with malignant and benign phenotypes.
  • the set of radiomic-feature values 136 may be used to help quantify how closely the mass components 112 and 114 match the malignant or benign phenotypes, and therefore how likely it is that the adnexal mass 130 is malignant or benign.
  • any combination of radiomic features known in the art may be used, including radiomic features that are described as “pre-defined,” “hand-crafted,” or “human-engineered.”
  • Pre-defined radiomic features are usually divided into classes that typically include morphology (e.g., spatial variation of pixel values near an edge of the mass), size (e.g., area, maximum diameter, major axis, minor axis, etc.), shape (elongation, sphericity, etc.), first-order texture (e.g., entropy, energy, 90th percentile, skewness, etc.), second-order texture (e.g., gray-level co-occurrence matrix, gray-level run-length matrix, neighborhood gray-tone difference matrix, etc.), and higher-order texture (e.g., Haar wavelet transforms, autoregressive model, etc.).
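Several of the first-order texture features named above can be computed directly from a component's intensity histogram. A numpy sketch with illustrative definitions (not necessarily the study's exact implementations):

```python
import numpy as np

def first_order_features(pixels, bins=64):
    """First-order texture features from a component's intensity histogram (range [0, 1])."""
    counts, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()                          # gray-level probabilities
    nz = p[p > 0]
    mu, sigma = pixels.mean(), pixels.std()
    return {
        "entropy": -(nz * np.log2(nz)).sum(),          # high for heterogeneous texture
        "energy": (p ** 2).sum(),                      # high for homogeneous texture
        "p90": np.percentile(pixels, 90),              # 90th percentile intensity
        "skewness": ((pixels - mu) ** 3).mean() / sigma ** 3,
    }

rng = np.random.default_rng(3)
heterogeneous = rng.uniform(0.0, 1.0, 10_000)   # intensities spread over all gray levels
homogeneous = rng.normal(0.5, 0.01, 10_000)     # intensities concentrated near one level
f_het = first_order_features(heterogeneous)
f_hom = first_order_features(homogeneous)
```

As expected, the heterogeneous component scores higher on entropy and lower on energy than the homogeneous one.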
  • one or more of the radiomic features are “deep features” that are automatically identified and selected by deep learning algorithms.
  • the set of radiomic-feature values includes a radiomic-feature value that quantifies a morphology of one of the mass components 112 and 114, a radiomic-feature value that quantifies a geometry (i.e., size and/or shape) of one of the mass components 112 and 114, a radiomic-feature value that quantifies a texture of one of the mass components 112 and 114, or any combination thereof.
  • the set of radiomic-feature values 136 includes at least one radiomic-feature value that is extracted only from the first mass component 112. In some embodiments, the set of radiomic-feature values includes at least one radiomic-feature value that is extracted only from the second mass component 114.
  • the set of radiomic-feature values includes at least one radiomic-feature value that is extracted from both the first mass component 112 and the second mass component 114 (e.g., see the radiomic features described by Eqns. 1 and 2 in the section below titled “Mass Segmentation and Feature Extraction”).
  • the diagnostic classifier 118 processes the set of radiomic-feature values 136 to generate and output the indication 120. Specifically, the diagnostic classifier 118 identifies a class to which the adnexal mass 130 belongs, the class being one of “malignant,” “benign,” and “borderline.”
  • the indication 120 may be, for example, an integer whose singular value indicates the identified class (e.g., the integer “1” indicates “benign”). However, the indication 120 may additionally or alternatively represent the identified class in another manner (e.g., symbolically, graphically, as text, etc.) without departing from the scope hereof.
  • the indication 120 may be visually displayed.
  • the indication 120 may be displayed on an electronic display (see the display 708 in FIG. 7), such as a computer monitor, tablet, smartphone, or television to indicate to a radiologist or clinician that the adnexal mass 130 is benign, malignant, or borderline.
  • the indication 120 may be displayed as text (e.g., the word “malignant”).
  • the indication 120 may be displayed, based on its value, as a symbol (e.g., a circle to indicate that the indication 120 has the value corresponding to “borderline”).
  • the indication 120 may be displayed, based on its value, as a single color or a color scheme (e.g., green to indicate “benign”), a texture, a size, or any combination thereof.
  • the indication 120 may be displayed in any additional or alternative manner or manners without departing from the scope hereof.
  • the indication 120 is displayed on the electronic display with other information about the patient.
  • the indication 120 may be displayed next to the medical image 102, the mass image 106, the first mass component 112, the second mass component 114, or any combination thereof.
  • the indication 120 may also be displayed on the electronic display adjacent to personally identifiable information (e.g., name, date-of-birth, etc.) or personal health information so that a person viewing the electronic display associates the indication 120 with the identified patient.
  • the indication 120 is saved to a medical record of the patient.
  • the method also includes the step of processing the set of radiomic-feature values 136 to classify the adnexal mass 130 as being malignant, borderline, or benign. This step of processing may be performed by the diagnostic classifier 118, as described above.
  • the method also includes the step of outputting the indication 120 that the adnexal mass 130 is malignant, borderline, or benign. As shown in FIG. 1, this step of outputting may also be performed by the diagnostic classifier 118.
  • the method may further include one or more steps related to screening for, diagnosing, treating, or monitoring ovarian cancer in a human patient.
  • the method further includes performing sonography on the patient to generate the medical image 102.
  • the sonography may be performed, for example, with an ultrasound machine, as described above.
  • the method further includes diagnosing the patient based on the indication 120.
  • the method further includes ordering or performing additional tests based on the indication 120.
  • additional tests include, but are not limited to, a CA-125 blood test and a rectovaginal pelvic examination. In this case, the patient may then be diagnosed based on the results of these one or more additional tests.
  • the patient may be provided with one or more therapeutic interventions for treating ovarian cancer.
  • therapeutic interventions include, but are not limited to, surgical procedures, non-surgical medical procedures, and prescriptions for one or more pharmaceutical drugs.
  • the present embodiments may also be used for classifying other types of masses, lesions, cysts, or tumors that appear in ultrasound images.
  • the present embodiments are not limited to adnexal masses. Examples of other mass types include, but are not limited to, abdominal masses, breast lumps, renal masses, pancreatic cysts and neoplasms, and neuroblastomas.
  • the present embodiments may be used to detect not just ovarian cancer, but many other types of cancer as well.
  • While FIG. 1 shows the medical image 102 as an ultrasound image, the present embodiments may be used to classify masses in other types of medical images.
  • Examples of such medical images include, but are not limited to, magnetic resonance imaging (MRI) scans, x-ray scans, CT scans, and PET scans.
  • For example, in an MRI image, brighter areas indicate regions of relatively high signal intensity while darker areas indicate regions of relatively low signal intensity. The bright areas are called “hyperintense” while the darker areas are called “hypointense.”
  • the mass-image separator 110 of FIG. 1 may be used to separate the MRI image into a hyperintense component that is the first mass component 112 of FIG. 1 and a hypointense component that is the second mass component 114 of FIG. 1.
  • the radiomic-feature processor 116 then extracts the radiomic-feature values 136 from these hyperintense and hypointense components.
  • the resulting classification of the mass, based on radiomics of the separated hyperintense and hypointense components, may have higher diagnostic accuracy than one based on radiomics obtained from the unseparated MRI image.
  • the brightness of an x-ray or CT-scan image is based on the density of the imaged tissue.
  • the bright areas are called “hyperdense” while the darker areas are called “hypodense.”
  • the mass-image separator 110 of FIG. 1 may be used to separate an x-ray or CT-scan image into a “hyperdense” component that is the first mass component 112 of FIG. 1 and a “hypodense” component that is the second mass component 114 of FIG. 1.
  • the radiomic-feature processor 116 then extracts the radiomic-feature values 136 from these hyperdense and hypodense components.
  • the present embodiments may also be used to classify masses in medical images from non-human subjects, such as animals.
  • the present embodiments may be used to assist with cancer diagnosis and treatment in veterinary medicine.
  • Mass Segmentation and Feature Extraction: The previously developed physics-driven segmentation model [19] applies a user-provided bounding box around the region of interest (adnexal mass) as input to (1) segment the masses from the image background using a supervised deep learning (DL) U-net model and (2) separate them into intra-mass relative hypoechogenic and hyperechogenic components using an unsupervised machine learning fuzzy c-means algorithm (see FIG. 3).
  • Eight component-based human-engineered radiomic features were extracted from each mass (see Table 2). The features describe component morphology (spatial variations in pixel values near the edge of each component), geometry (shape and size), and texture (the spatial relationships between image pixels in terms of the change in intensity patterns and gray levels) (see FIG.
  • Two features were based upon the geometry of the components: one feature was measured as the fraction of pixels within the mass that were within the hypoechogenic component, while one feature (which we term a “proportion” feature) was calculated as D_eff,hypo / D_eff,hyper (1), where D_eff is the effective diameter of each component (i.e., the diameter of a circle with the same area as the component, D_eff = 2√(A/π)), A refers to the area (in pixels) of each component, and the subscripts hypo and hyper refer to the relative hypoechogenic and hyperechogenic components, respectively.
  • Two features, based upon the relative textures of difference entropy and correlation, were calculated as f_hypo / f_hyper (2), where f is the radiomic feature from each component.
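The difference-entropy and correlation textures are standard Haralick features computed from a gray-level co-occurrence matrix (GLCM). The sketch below uses textbook definitions and synthetic "components" (a smooth gradient versus noise) to form a component-to-component ratio in the spirit of Eqn. 2; it is illustrative, not the study's implementation:

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one pixel offset."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize to gray levels
    dr, dc = offset
    a = q[:q.shape[0] - dr, :q.shape[1] - dc].ravel()
    b = q[dr:, dc:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)        # count co-occurring gray-level pairs
    P = P + P.T                    # make symmetric
    return P / P.sum()

def correlation(P):
    """Haralick correlation of the co-occurring gray levels."""
    i = np.arange(P.shape[0])
    pi = P.sum(axis=1)
    mu = (i * pi).sum()
    var = ((i - mu) ** 2 * pi).sum()
    return ((i[:, None] - mu) * (i[None, :] - mu) * P).sum() / var

def difference_entropy(P, eps=1e-12):
    """Entropy of p_{x-y}(k) = sum of P(i, j) over |i - j| = k."""
    n = P.shape[0]
    diffs = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    pd = np.array([P[diffs == k].sum() for k in range(n)])
    return -(pd * np.log2(pd + eps)).sum()

rng = np.random.default_rng(4)
smooth = np.tile(np.linspace(0, 0.99, 32), (32, 1))   # smooth-gradient "component"
noisy = rng.uniform(0, 0.99, (32, 32))                # noisy "component"
de_ratio = difference_entropy(glcm(smooth)) / difference_entropy(glcm(noisy))
```

The ratio here plays the role of f_hypo / f_hyper: it is well below 1 because neighboring pixels in the smooth component rarely differ, so its difference entropy is much lower.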
  • a binary feature (i.e., present or absent) for the presence of solid components or lack thereof was determined from an expert manual review of each case. Thus, a total of nine features were used in the study.
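A sketch of the two geometry features on toy component masks, assuming the "proportion" feature is the ratio of the components' effective diameters (the exact published form may differ):

```python
import numpy as np

def effective_diameter(area_px):
    """Diameter of a circle whose area equals the component's pixel area."""
    return 2.0 * np.sqrt(area_px / np.pi)

def geometry_features(hypo_mask, hyper_mask):
    a_hypo, a_hyper = hypo_mask.sum(), hyper_mask.sum()
    frac_hypo = a_hypo / (a_hypo + a_hyper)   # fraction of mass pixels in hypo component
    # assumed form of the "proportion" feature: ratio of effective diameters
    proportion = effective_diameter(a_hypo) / effective_diameter(a_hyper)
    return frac_hypo, proportion

# toy component masks: 300 hypoechogenic pixels, 100 hyperechogenic pixels
hypo = np.zeros(400, dtype=bool); hypo[:300] = True
hyper = np.zeros(400, dtype=bool); hyper[300:] = True
frac, prop = geometry_features(hypo, hyper)
```

With these masks the hypoechogenic fraction is 0.75 and the diameter ratio reduces to √(A_hypo / A_hyper) = √3, since the 2/√π factors cancel.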
  • Mass Classification: The dataset was manually split by patient into a classification training/validation set (95 masses; 70%) and an independent held-out test set (41 masses; 30%) to match stratified adnexal pathologies and clinical parameters (menopausal status and race) between the two sets.
  • a linear discriminant analysis classifier was trained using the nine extracted features to yield a likelihood of malignancy. This classifier was applied to the classification training/validation set (a “self-test”) and then separately to the independent held-out test set.
  • Pathology records served as the ground truth reference standard for classification.
  • the figure of merit was the area under the receiver operating characteristic (ROC) curve (AUC) [24], calculated using the proper binormal model [25]. We also calculated the empirical ROC curve. Diagnostic performance was also evaluated at target 95% sensitivity, yielding corresponding specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy.
  • the AI/CADx model achieved the target 95% sensitivity, along with a very high NPV of 0.997 [0.935, 1.000] and relatively high specificity of 0.71 [0.53, 0.84] and PPV of 0.58 [0.47, 0.71] in the independent test set (see Table 4).
  • radiomic features used in this study reflect geometric characteristics of hyperechogenic components, including papillary projections such as in these benign serous cystadenofibromas.
  • Other radiomic features, particularly those characterizing the edges of the hyperechogenic components as measured by morphology features, correspond to characteristics of the solid papillary projections for this type of benign mass.
  • These features characterize three different qualities of the hyperechogenic components (margin sharpness variance, margin sharpness mean, and variance of the radial gradient histogram), broadening and quantifying the characteristics used for distinguishing them from solid papillary projections in malignant masses.
  • the ratio of texture features between components captures distinctive aspects of these masses that are not readily viewed through visual inspection of ultrasound images.
  • High-grade serous ovarian cancers (HGSOCs), an ovarian cancer subtype, often present with bilateral irregular solid or multilocular-solid masses (see panel C of FIG. 5), ascites, and upper abdominal disease [28].
  • High vascular flow and areas of necrosis in the solid elements on Doppler imaging are common on ultrasound imaging [28]
  • As with the cystadenofibromas, these findings often correlate with the gross pathology findings: HGSOCs are often bilateral, show exophytic growth, and contain solid, papillary, and cystic areas, as well as extensive necrosis.
  • Malignant Sertoli-Leydig cell tumors, another rare type of ovarian cancer, are of sex-cord stromal origin.
  • The use of radiomic features demonstrates that quantitative imaging characteristics, which are not readily apparent to the eye (i.e., edges of components, the relative nature of features between components, and the merging of the features by a classifier), provide measures of adnexal masses that are unique and supplement the existing qualitative sonographic review frameworks. Additional examples of sonographic and AI-based component analyses from the training/validation set are presented in FIG. 6.
  • Adnexal masses are common in both pre- and postmenopausal patients. Ultrasound is the preferred initial imaging modality for characterizing these masses, but image interpretation can be difficult, and masses are frequently classified as indeterminate.
  • the specificity was 0.71 [0.53, 0.84] (see Table 4), which may suggest that additional pipeline development is warranted to minimize false positive results further.
  • the NPV was 0.997 [0.935, 1.000], which is clinically reassuring that a negative test is accurate and that no cancer is misclassified as a benign mass.
  • MRI is often used as a secondary imaging modality when the ultrasound assessment is suboptimal or indeterminate (O-RADS ultrasound scores of 3 or 4).
  • diagnostic pelvic MRI had an NPV of 98% (at 18% malignancy prevalence) [10]
  • Our study presents a low-complexity model with a high and reassuring NPV based on ultrasound imaging, which is widely available, much cheaper than MRI, and does not require additional interpretative skills.
  • a hybrid AI/CADx pipeline incorporating automatic external mass border segmentation, automatic physics-driven internal echogenic component segmentation, and radiomic feature analysis specific to the components and their relative nature can distinguish between malignant and benign masses with very high sensitivity and relatively high specificity.
  • This hybrid AI/CADx pipeline could potentially serve as a second reader to ensure that no malignant tumor will be missed, especially important as expectations for clinical productivity increase. It may also reduce user variability and reflect the mass’s heterogeneous architecture.
  • FIG. 2 Flowchart showing exclusion criteria and resulting eligible cases and masses.
  • FIG. 3 AI/CADx pipeline for adnexal mass diagnosis.
  • AI/CADx Artificial intelligence/computer-aided diagnosis.
  • FIG. 4 ROC analysis in the task of classifying adnexal masses as malignant or benign. Both the proper binormal model and empirical curves are shown. The AUC for the proper binormal model (median [95% CI]) was 0.90 [0.84, 0.95] in the training/validation set and 0.93 [0.83, 0.98] in the independent test set. ROC: receiver operating characteristic. AUC: area under the ROC curve.
  • FIG. 6 Sonographic and AI/CADx-based automatic segmentation and component-based clustering of individual masses in the training/validation set. Images of three benign (A, B, C) and two malignant/borderline (D, E) ovarian masses from the training/validation set and their corresponding likelihood of malignancy (LM) from prediction as malignant or benign by the AI/CADx model are shown.
  • FIG. 7 is a diagram of a system 700 that classifies a mass in a medical image, in accordance with some of the present embodiments.
  • the system 700 implements any of the methods disclosed herein.
  • the system 700 has a processor 702, a memory 720, and a secondary storage device 712 that communicate with each other over a system bus 710.
  • the memory 720 may be volatile RAM located proximate to the processor 702 while the secondary storage device 712 may be a hard disk drive, a solid-state drive, an optical storage device, or another type of persistent data storage.
  • the secondary storage device 712 may alternatively be accessed via an external network. Additional and/or other types of the memory 720 and the secondary storage device 712 may be used without departing from the scope hereof.
  • Each of the I/O blocks 704(1) and 704(2) may implement a wired network interface (e.g., Ethernet, Infiniband, etc.), wireless network interface (e.g., WiFi, Bluetooth, BLE, etc.), cellular network interface (e.g., 4G, 5G, LTE), optical network interface (e.g., SONET, SDH, IrDA, etc.), multi-media card interface (e.g., SD card, CompactFlash, etc.), or other type of communication port through which the system 700 can communicate with other devices.
  • the machine-readable instructions 722 include an image segmentor 724 that implements the image segmentor 104 of FIG. 1, a mass-image separator 726 that implements the mass-image separator 110 of FIG. 1, a radiomic-feature processor 728 that implements the radiomic-feature processor 116 of FIG. 1, a diagnostic classifier 730 that implements the diagnostic classifier 118 of FIG. 1, and an outputter 732.
  • the image segmentor 724, when executed by the processor 702, controls the system 700 to segment the medical image 102 into the mass image 106 and the background image 108.
  • the mass-image separator 726, when executed by the processor 702, controls the system 700 to separate the mass image 106 into the first mass component 112 and the second mass component 114.
  • the radiomic-feature processor 728, when executed by the processor 702, controls the system 700 to extract the set of radiomic-feature values 136 from the first mass component 112 and the second mass component 114.
  • the diagnostic classifier 730, when executed by the processor 702, controls the system 700 to process the set of radiomic-feature values 136 to classify the mass as belonging to a first class (e.g., malignant) or a second class (e.g., benign).
  • the diagnostic classifier 730, when executed by the processor 702, may alternatively control the system 700 to process the set of radiomic-feature values 136 to classify the mass as belonging to a first class (e.g., malignant), a second class (e.g., benign), or a third class (e.g., borderline).
  • the outputter 732, when executed by the processor 702, controls the system 700 to output the indication 120.
  • the memory 720 may store more machine-readable instructions 722 than are shown in FIG. 7 without departing from the scope hereof.
  • While FIG. 7 shows the system 700 as a computing system that directly executes the machine-readable instructions 722 with the processor 702, the system 700 may alternatively be configured, either entirely or in part, using circuitry that is hard-wired to implement the functionality of the present embodiments (as opposed to directly executing code).
  • Examples of such circuitry include, but are not limited to, field-programmable gate arrays (FPGAs), systems-on-chips (SoCs), and programmable logic devices (PLDs).
  • a method for classifying a mass in a medical image includes segmenting the medical image into a mass image and a background image, separating the mass image into a first mass component and a second mass component, extracting a set of radiomic-feature values from the first mass component and the second mass component, processing the set of radiomic-feature values to classify the mass as belonging to a first class or a second class, and outputting an indication that the mass belongs to the first class or the second class.
  • processing includes processing the set of radiomic-feature values to classify the mass as belonging to the first class, the second class, or a third class. Furthermore, said outputting includes outputting an indication that the mass belongs to the first class, the second class, or the third class.
  • said segmenting the medical image includes segmenting an ultrasound image, an x-ray image, a two-dimensional slice of a CT-scan image, or a two-dimensional MRI image.
  • said segmenting the ultrasound image includes segmenting a transvaginal ultrasound image having a fully defined border of an adnexal mass.
  • the method further includes performing medical imaging on a patient to generate the medical image.
  • said segmenting the medical image includes feeding the medical image into a trained convolutional neural network.
  • said feeding the medical image into the trained convolutional neural network includes feeding the medical image into a U-Net.
  • said separating the mass image includes clustering each of a plurality of pixels of the mass image into one or both of a first cluster and a second cluster.
  • the first cluster forms the first mass component while the second cluster forms the second mass component.
  • said extracting the set of radiomic-feature values includes extracting at least one of (i) a radiomic-feature value quantifying a morphology of the first mass component or the second mass component, (ii) a radiomic-feature value quantifying a geometry of the first mass component or the second mass component, and (iii) a radiomic-feature value quantifying a texture of the first mass component or the second mass component.
  • the geometry of the first mass component includes one or both of an area of the first mass component and an effective diameter of the first mass component.
  • the geometry of the second mass component includes one or both of an area of the second mass component and an effective diameter of the second mass component.
  • processing the set of radiomic-feature values includes feeding the set of radiomic-feature values into a trained discriminant analysis classifier.
  • said feeding the set of radiomic-feature values into the trained discriminant analysis classifier comprises feeding the set of radiomic-feature values into a trained linear discriminant analysis classifier.
  • said outputting the indication includes displaying the indication on a screen.
  • the method further includes diagnosing, based on the indication, a patient with a disease.
  • the therapeutic intervention includes a surgical procedure, a non-surgical medical procedure, a prescription for one or more pharmaceutical drugs, or a combination thereof.
  • a system for classifying a mass in a medical image includes a processor and a memory in electronic communication with the processor.
  • the memory stores machine-readable instructions that, when executed by the processor, control the system to segment the medical image into a mass image and a background image, separate the mass image into a first mass component and a second mass component, extract a set of radiomic-feature values from the first mass component and the second mass component, process the set of radiomic-feature values to classify the mass as belonging to a first class or a second class, and output an indication that the mass belongs to the first class or the second class.
  • the machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values include machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values to classify the mass as belonging to the first class, the second class, or a third class.
  • the machine-readable instructions that, when executed by the processor, control the system to output the indication include machine-readable instructions that, when executed by the processor, control the system to output an indication that the mass belongs to the first class, the second class, or the third class.
  • the medical image is an ultrasound image, an x-ray image, a two-dimensional slice of a CT-scan image, or a two-dimensional MRI image.
  • the machine-readable instructions that, when executed by the processor, control the system to segment the medical image include machine-readable instructions that, when executed by the processor, control the system to feed the medical image into a trained convolutional neural network.
  • the machine-readable instructions that, when executed by the processor, control the system to separate the mass image include machine-readable instructions that, when executed by the processor, control the system to cluster each of a plurality of pixels of the mass image into one or both of a first cluster and a second cluster.
  • the first cluster forms the first mass component while the second cluster forms the second mass component.
  • the machine-readable instructions that, when executed by the processor, control the system to implement fuzzy clustering include machine-readable instructions that, when executed by the processor, control the system to cluster using fuzzy c-means clustering.
  • the machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values include machine-readable instructions that, when executed by the processor, control the system to feed the set of radiomic-feature values into a trained discriminant analysis classifier.


Abstract

A method for classifying a mass in a medical image includes segmenting the medical image into a mass image and a background image, separating the mass image into a first mass component and a second mass component, extracting a set of radiomic-feature values from the first mass component and the second mass component, and processing the set of radiomic-feature values to classify the mass as malignant or benign. The mass image may be separated by clustering the pixels forming the mass image (e.g., using fuzzy c-means clustering). The method also includes outputting an indication that the mass is malignant or benign. The indication may be used to screen for, diagnose, treat, or monitor cancer in a human patient from which the medical image was obtained. The method may be implemented for use with ultrasound images, x-ray images, CT-scan images, PET-scan images, and MRI images.

Description

MACHINE-LEARNING-BASED SYSTEMS AND METHODS FOR
CLASSIFYING MASSES IN MEDICAL IMAGES
RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No. 63/687,625, filed on August 27, 2024, and U.S. Provisional Patent Application No. 63/554,334, filed on February 16, 2024. Each of these aforementioned applications is incorporated herein by reference in its entirety.
BACKGROUND
[0002] Ovarian cancer is the most lethal gynecological malignancy and the fifth-leading cause of cancer deaths among women [1], resulting in substantial interest in improving noninvasive diagnosis. Ultrasound imaging is the initial modality for diagnosing adnexal masses because it is safe, widely available, quick to complete, and relatively inexpensive [2]. However, accurate diagnosis of adnexal masses presents significant challenges due to the low incidence of ovarian cancer and the frequent occurrence of ovarian masses. Indeed, the lifetime risk of being diagnosed with an adnexal mass is up to 35% in premenopausal and 17% in postmenopausal patients [3]. Adnexal masses are heterogeneous, and benign, borderline, and malignant masses often share similar morphologic characteristics [4, 5]. Some qualitative ultrasound-based risk models have been developed that have high sensitivity, but they have only moderate specificity [6-8]. Therefore, surgery, which frequently results in benign findings [9] and carries risks of patient morbidity, may be required for a definitive diagnosis. Currently, sonographically indeterminate masses may be further evaluated by magnetic resonance imaging (MRI) examinations, which provide enhanced soft tissue characterization and can potentially reduce false positives and unnecessary surgeries for asymptomatic patients with benign masses [2, 10, 11]. However, while MRI can have a high negative predictive value and relatively high specificity [3, 12-14], its sensitivity is still limited, and this imaging modality is expensive, time-consuming, and not widely available. Consequently, improving the performance of sonographic assessments is of clinical interest, and with recent developments in computational power and data collection, more quantitative imaging assessments that may lead to improved noninvasive diagnostic accuracy are now possible [15, 16].
SUMMARY
[0003] Artificial-intelligence/computer-aided-diagnosis (AI/CADx) models are based on quantitative image-based phenotypes (i.e., radiomic features) that are correlated with malignant and benign phenotypes and therefore may improve imaging diagnostic accuracy [17]. Due to their quantitative nature and use of algorithms, these models may also decrease interpretative variability and human error, thereby improving treatment planning [15]. However, AI/CADx-based models are still not part of standard clinical practice for adnexal-mass diagnosis. Given the heterogeneous characteristics of adnexal masses, even in the same histopathologic subtype, extracting quantitative imaging features that reflect and correlate with intra-mass heterogeneity and the underlying biology and tissue architecture may provide adnexal-mass assessments that are both robust and automatic.
[0004] The present embodiments include systems and methods for classifying a mass that appears in a medical image. The present embodiments segment the medical image into a mass image (i.e., a portion of the medical image within which the mass fully appears) and a background image (i.e., the remaining portion of the medical image within which no portion of the mass appears). The mass image is then separated into two mass components. Radiomics is then performed with the two mass components. The resulting radiomic-feature values that are extracted from the mass components are then used to classify the mass as malignant or benign. Advantageously, the present embodiments have higher diagnostic accuracy, as compared to performing radiomics on the original mass image (i.e., without separating into components).
[0005] As an example of the present embodiments, a study was conducted to classify adnexal masses appearing in transvaginal ultrasound images. The observation and interpretation of adnexal masses in such images is frequently used to diagnose ovarian cancer. Diagnosis of adnexal masses from ultrasound imaging is challenging due to relatively high rates of false positives and false negatives. Prior-art techniques that use only CADx or only AI to classify adnexal lesions as malignant or benign treat the mass as a single entity for radiomic analysis. By contrast, the present embodiments divide the mass (e.g., using unsupervised machine learning) into mass components, which are then analyzed via radiomics as separate entities. This separation of the original mass image into mass components advantageously increases the amount of information available for classification.
[0006] For ultrasonography, the present embodiments may separate the mass image into mass components based on echogenicity, which is indicated by the pixel values of the image. Thus, the mass image may be separated, based on pixel values, into a hyperechogenic component, which contains regions of the mass image that produce strong acoustic reflections, and a hypoechogenic component, which contains regions of the mass image that produce weak acoustic reflections. For radiography and x-ray-based computed tomography (CT), the mass image may be similarly separated into hyperdense and hypodense components. For magnetic resonance imaging (MRI), the mass image may be similarly separated into hyperintense and hypointense components. The present embodiments may also be implemented with other types of medical imaging (e.g., positron emission tomography, photoacoustic tomography, single-photon emission computed tomography, etc.).
[0007] In certain embodiments, a method for classifying a mass in a medical image includes segmenting the medical image into a mass image and a background image, separating the mass image into a first mass component and a second mass component, extracting a set of radiomic-feature values from the first mass component and the second mass component, and processing the set of radiomic-feature values to classify the mass as malignant or benign. The mass image may be separated by clustering the pixels of the mass image (e.g., using fuzzy c-means clustering). The method also includes outputting an indication that the mass is malignant or benign. The indication may be used to screen for, diagnose, treat, or monitor cancer in a human patient from which the medical image was obtained.
[0008] In other embodiments, a system for classifying a mass in a medical image includes a processor and a memory in electronic communication with the processor. The memory stores machine-readable instructions that, when executed by the processor, control the system to segment the medical image into a mass image and a background image, separate the mass image into a first mass component and a second mass component, extract a set of radiomic-feature values from the first mass component and the second mass component, process the set of radiomic-feature values to classify the mass as malignant or benign, and output an indication that the mass is malignant or benign.
BRIEF DESCRIPTION OF THE FIGURES
[0009] FIG. 1 shows a machine-learning (ML) pipeline for classifying an adnexal mass in a medical image, in some embodiments.
[0010] FIG. 2 is a flowchart showing exclusion criteria and resulting eligible cases and masses.
[0011] FIG. 3 shows an artificial-intelligence/computer-aided-diagnosis (AI/CADx) pipeline for adnexal-mass diagnosis, in some embodiments.
[0012] FIG. 4 shows a receiver-operating-characteristic (ROC) analysis for the task of classifying adnexal masses as malignant or benign.
[0013] FIG. 5 illustrates sonographic and AI/CADx-based automatic segmentation, component-based clustering, and histopathology examples of individual masses in a test set.
[0014] FIG. 6 illustrates sonographic and AI/CADx-based automatic segmentation and component-based clustering of individual masses in a training/validation set.
[0015] FIG. 7 is a diagram of a system that implements the present method embodiments.
DETAILED DESCRIPTION
[0016] FIG. 1 shows a machine-learning (ML) pipeline 100 for classifying an adnexal mass 130 in a medical image 102, in accordance with some of the present embodiments. The ML pipeline 100 includes an image segmentor 104, a mass-image separator 110, a radiomic-feature processor 116, and a diagnostic classifier 118 that cooperatively process the medical image 102 to generate and output an indication 120 identifying the adnexal mass 130 as malignant, benign, or borderline. As described in more detail below, the indication 120 may be used to screen for, diagnose, treat, or monitor ovarian cancer in a human patient from which the medical image 102 was obtained.
[0017] In the example of FIG. 1, the medical image 102 is a digital ultrasound image formed from a two-dimensional (2D) array of pixels that are arranged into a fixed number of rows and a fixed number of columns. The medical image 102 is also a grayscale image in which each pixel has a single value that indicates a signal size that was recorded for the pixel. Where the medical image 102 is an ultrasound image, the medical image 102 may be obtained, for example, from an ultrasound machine or sonograph. For ovarian cancer, it is common to obtain the medical image 102 via transvaginal sonography, in which case the medical image 102 may also be referred to as a “transvaginal image.” However, the present embodiments may also be used with ultrasound images obtained via transabdominal, or pelvic, sonography.
[0018] The image segmentor 104 segments the medical image 102 into a mass image 106 and a background image 108. To do this, the image segmentor 104 identifies within the medical image 102 a boundary 132 that continuously and fully encloses the adnexal mass 130 (i.e., nowhere in the medical image 102 does any edge of the medical image 102 break or interrupt the boundary 132). All pixels of the medical image 102 that lie within the area enclosed by the boundary 132 are assigned to the mass image 106 while all other pixels of the medical image 102 are assigned to the background image 108. The portion of the medical image 102 enclosed by the boundary 132 may be padded with zeros such that the resulting mass image 106 is rectangular (i.e., having a fixed number of rows and a fixed number of columns). The background image 108 may be discarded.
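The masking and zero-padding described in this paragraph can be sketched in a few lines of NumPy. This is an illustrative example only, not part of the disclosed embodiments; the function name and the boolean-mask representation of the boundary 132 are assumptions for illustration:

```python
import numpy as np

def split_mass_and_background(image, mass_mask):
    """Split a grayscale image into a mass image and a background image.

    `image` is a 2-D pixel array; `mass_mask` is a boolean array of the same
    shape that is True for pixels enclosed by the segmentation boundary.
    """
    # Pixels outside the boundary are padded with zeros, as described above.
    mass_image = np.where(mass_mask, image, 0.0)
    # The background image is the complementary region of the medical image.
    background_image = np.where(mass_mask, 0.0, image)
    # Crop the mass image to the bounding rectangle of the mask so that it
    # remains rectangular (a fixed number of rows and columns).
    rows, cols = np.nonzero(mass_mask)
    mass_image = mass_image[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    return mass_image, background_image
```

In this sketch the background image keeps the full frame size, while the mass image is cropped to the smallest rectangle containing the boundary.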
[0019] In some embodiments, and as shown in FIG. 1, the image segmentor 104 utilizes a machine-learning model (MLM) 134 to segment the medical image 102. The MLM 134 may include one or more trained convolutional neural networks (CNNs). In some embodiments, the MLM 134 is a U-Net or a similar type of fully convolutional neural network. In other embodiments, the MLM 134 is another type of ML or statistical model used for image segmentation (e.g., support vector machine, random forest, etc.).
[0020] In general, the image segmentor 104 may implement any type of supervised or unsupervised image-segmentation technique known in the art, examples of which include, but are not limited to, thresholding, clustering (e.g., k-means clustering), compression-based segmentation, histogram-analysis-based segmentation, region-growing methods, graph partitioning, and watershed transformations. Note that some of these techniques (e.g., thresholding) do not use an ML model. Accordingly, the MLM 134 is not always necessary and may be excluded from some of the present embodiments.
[0021] In some embodiments, the image segmentor 104 uses a bounding box 138 to assist with image segmentation. The bounding box 138 may be provided by a user. For example, the medical image 102 may be displayed on a screen (e.g., a computer monitor). A user (e.g., a radiologist or clinician) may then draw the bounding box 138 on the screen with a mouse, stylus, or finger. The bounding box 138 indicates a region of interest (ROI) where the adnexal mass 130 is located within the medical image 102. The bounding box 138 may be a rectangle defined by two points that identify opposing corners of the rectangle. Alternatively, the user may draw the bounding box 138 “free-hand” on the screen. In some embodiments, the bounding box 138 is used as a mask to crop the medical image 102, thereby deleting regions of the medical image 102 that the user has identified as fully excluding the adnexal mass 130. In some embodiments, the bounding box 138 is fed into the MLM 134 as a prior. One example of an ML architecture that uses a bounding-box prior is Bounding Box U-Net (BB-UNet). However, the MLM 134 may be another type of ML model or statistical model that uses a bounding-box prior without departing from the scope hereof.
[0022] The mass-image separator 110 separates the mass image 106 into a first mass component 112 and a second mass component 114. In the example of FIG. 1, where the medical image 102 is an ultrasound image, the first mass component 112 is a “hyperechoic” component that represents regions of the adnexal mass 130 that are relatively denser and therefore produce strong acoustic reflections. These hyperechoic regions appear relatively brighter in the images 102 and 106. By contrast, the second mass component 114 is a “hypoechoic” component that represents regions of the adnexal mass 130 that are relatively less dense and therefore produce weaker acoustic reflections. These hypoechoic regions appear relatively darker in the images 102 and 106. Thus, the mass image 106 may be separated into the mass components 112 and 114 based on the pixel values of the mass image 106.
[0023] In some embodiments, the mass-image separator 110 uses an MLM 144 that implements clustering of the pixels forming the mass image 106. The MLM 144 may implement “hard” clustering by assigning each pixel of the mass image 106 to either the first mass component 112 or the second mass component 114 (but not both of the mass components 112 and 114). In this case, the union of the mass components 112 and 114 equals the unseparated mass image 106. Alternatively, the MLM 144 may implement “soft” or “fuzzy” clustering by assigning each pixel of the mass image 106 to one or both of the mass components 112 and 114. In this case, some of the pixels may belong to both of the mass components 112 and 114. In some embodiments, the MLM 144 implements fuzzy c-means clustering. However, the MLM 144 may implement another type of clustering technique without departing from the scope hereof.
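As an illustrative sketch of the clustering performed by the mass-image separator 110, a minimal one-dimensional fuzzy c-means over pixel intensities might look as follows. The function name, the deterministic initialization at the intensity extremes, and the fixed fuzzifier m = 2 are assumptions for illustration, not details of the disclosed MLM 144:

```python
import numpy as np

def fuzzy_cmeans_2cluster(pixels, m=2.0, n_iter=100, tol=1e-6):
    """Minimal two-cluster fuzzy c-means over a 1-D array of pixel intensities.

    Returns the cluster centers (index 0 = darker, 'hypoechoic-like' cluster)
    and the membership matrix u of shape (2, n_pixels).
    """
    x = np.asarray(pixels, dtype=float)
    centers = np.array([x.min(), x.max()])   # deterministic init at the extremes
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # pixel-to-center distances
        u = (d ** -p) / (d ** -p).sum(axis=0)               # memberships sum to 1 per pixel
        um = u ** m
        new_centers = (um @ x) / um.sum(axis=1)             # membership-weighted means
        if np.abs(new_centers - centers).max() < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, u

# Example: a synthetic mass whose pixels fall into two intensity populations.
pix = np.concatenate([np.full(50, 20.0), np.full(50, 200.0)])
centers, u = fuzzy_cmeans_2cluster(pix)
hard_labels = u.argmax(axis=0)   # "hard" assignment: each pixel in exactly one component
```

A "soft" assignment would instead keep, in each component, every pixel whose membership exceeds some threshold, so that a pixel may belong to both components.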
[0024] The radiomic-feature processor 116 processes the mass components 112 and 114 to extract a set of radiomic-feature values 136 that quantify a corresponding set of radiomic features. Radiomic features are metrics that quantitatively characterize certain aspects, properties, or signatures within medical images. Of particular importance for classifying masses are metrics that are correlated with malignant and benign phenotypes. Thus, the set of radiomic- feature values 136, as a whole, may be used to help quantify how closely the mass components 112 and 114 match the malignant or benign phenotypes, and therefore how likely it is that the adnexal mass 130 is malignant or benign.
[0025] The present embodiments may use any combination of radiomic features known in the art, including radiomic features that are described as “pre-defined,” “hand-crafted,” or “human-engineered.” Pre-defined radiomic features are usually divided into classes that typically include morphology (e.g., spatial variation of pixel values near an edge of the mass), size (e.g., area, maximum diameter, major axis, minor axis, etc.), shape (elongation, sphericity, etc.), first-order texture (e.g., entropy, energy, 90th percentile, skewness, etc.), second-order texture (e.g., gray-level co-occurrence matrix, gray-level run-length matrix, neighborhood gray-tone difference matrix, etc.), and higher-order texture (e.g., Haar wavelet transforms, autoregressive model, etc.). In some embodiments, one or more of the radiomic features are “deep features” that are automatically identified and selected by deep learning algorithms.
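To make the first-order texture class concrete, the following sketch computes a few of the metrics listed above (entropy, energy, 90th percentile, skewness) from the pixels of one mass component. The function name and histogram bin count are illustrative assumptions:

```python
import numpy as np

def first_order_features(component_pixels, n_bins=64):
    """Compute a few first-order texture features for one mass component.

    `component_pixels` is a 1-D array of the grayscale values belonging to the
    component; `n_bins` controls the histogram used for entropy and energy.
    """
    x = np.asarray(component_pixels, dtype=float)
    hist, _ = np.histogram(x, bins=n_bins)
    prob = hist / hist.sum()
    prob = prob[prob > 0]                     # drop empty bins before taking logs
    mean, std = x.mean(), x.std()
    return {
        "entropy": float(-(prob * np.log2(prob)).sum()),  # Shannon entropy of the histogram
        "energy": float((prob ** 2).sum()),               # uniformity of gray levels
        "p90": float(np.percentile(x, 90)),               # 90th percentile of intensities
        "skewness": float(((x - mean) ** 3).mean() / std ** 3) if std > 0 else 0.0,
    }
```

For a perfectly uniform component (all pixels equal), this sketch returns zero entropy, unit energy, and zero skewness, matching the intuition that such a region has no texture.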
[0026] More examples of predefined radiomic features are described below in the section titled “Mass Segmentation and Feature Extraction” and Table 3. While the demonstration described below used eight specific predefined radiomic features, it should be understood that the present embodiments may be adapted to work with any number of two or more radiomic features (and therefore two or more radiomic-feature values 136). Furthermore, the present embodiments are not limited to only those eight radiomic features described below. Additional or alternative radiomic features may be used.
[0027] In some embodiments, the set of radiomic-feature values includes a radiomic-feature value that quantifies a morphology of one of the mass components 112 and 114, a radiomic-feature value that quantifies a geometry (i.e., size and/or shape) of one of the mass components 112 and 114, a radiomic-feature value that quantifies a texture of one of the mass components 112 and 114, or any combination thereof. In some embodiments, the set of radiomic-feature values 136 includes at least one radiomic-feature value that is extracted only from the first mass component 112. In some embodiments, the set of radiomic-feature values includes at least one radiomic-feature value that is extracted only from the second mass component 114. In some embodiments, the set of radiomic-feature values includes at least one radiomic-feature value that is extracted from both the first mass component 112 and the second mass component 114 (e.g., see the radiomic features described by Eqns. 1 and 2 in the section below titled “Mass Segmentation and Feature Extraction”).
[0028] The diagnostic classifier 118 processes the set of radiomic-feature values 136 to generate and output the indication 120. Specifically, the diagnostic classifier 118 identifies a class to which the adnexal mass 130 belongs, the class being one of “malignant,” “benign,” and “borderline.” The indication 120 may be, for example, an integer whose singular value indicates the identified class (e.g., the integer “1” indicates “benign”). However, the indication 120 may additionally or alternatively represent the identified class in another manner (e.g., symbolically, graphically, as text, etc.) without departing from the scope hereof.
[0029] To generate the indication 120, the diagnostic classifier 118 may output a numerical value indicating a probability that the adnexal mass 130 belongs to a specified class, taken to be “malignant” in this example. This probability may then be compared to a malignant threshold to determine the indication 120. For example, if the probability exceeds the malignant threshold, then the indication 120 is set to the value for “malignant.” The probability may also be compared to a benign threshold that is less than the malignant threshold. In this case, if the probability is greater than the benign threshold and less than the malignant threshold, then the indication 120 is set to the value for “borderline.” If the probability is less than the benign threshold, then the indication 120 is set to the value for “benign.”
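The two-threshold logic described in this paragraph might be sketched as follows. The threshold values 0.3 and 0.7 are placeholders, since the disclosure does not fix their values; in practice they would be chosen from training data (e.g., via ROC analysis):

```python
def classify_from_probability(p_malignant, benign_threshold=0.3, malignant_threshold=0.7):
    """Map a malignancy probability to a three-way indication, as described above.

    Probabilities at or above the malignant threshold yield "malignant"; those
    at or below the benign threshold yield "benign"; values in between yield
    "borderline".
    """
    if p_malignant >= malignant_threshold:
        return "malignant"
    if p_malignant > benign_threshold:
        return "borderline"
    return "benign"
```

The returned string would then be mapped to whatever representation (integer, symbol, color) the indication 120 uses.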
[0030] The diagnostic classifier 118 uses an MLM 154 to process the set of radiomic-feature values 136 and thereby determine the indication 120. The MLM 154 may have been previously trained to transform sets of radiomic-feature values into indications, in which case the MLM 154 implements a supervised ML technique. In some embodiments, the MLM 154 is a discriminant analysis classifier, such as a linear discriminant analysis classifier. However, the MLM 154 may be another type of trained ML model (e.g., a convolutional or deep neural network) or statistical model without departing from the scope hereof. Some of the present embodiments include training of the MLM 154 using a training (or supervisory) data set.
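As a sketch of the kind of discriminant analysis the MLM 154 may implement, a minimal two-class Fisher/linear discriminant can be written directly in NumPy. This is not the trained model of the embodiments; the class name, the regularization constant, and the decision threshold at zero are assumptions for illustration:

```python
import numpy as np

class TwoClassLDA:
    """Minimal linear discriminant analysis classifier for two classes.

    fit() estimates a Fisher discriminant direction from labeled feature
    vectors; predict() thresholds the resulting discriminant score at zero.
    """

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class covariance, lightly regularized for stability.
        Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
              + np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X) - 2)
        Sw += 1e-6 * np.eye(X.shape[1])
        self.w = np.linalg.solve(Sw, m1 - m0)   # discriminant direction
        self.b = -0.5 * self.w @ (m0 + m1)      # boundary midway between class means
        return self

    def predict(self, X):
        scores = np.asarray(X, dtype=float) @ self.w + self.b
        return (scores > 0).astype(int)
```

In a radiomics pipeline, each row of X would be one mass's set of radiomic-feature values and y would mark the histopathologic class used for supervision.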
[0031] To assist with screening or diagnosing a patient, the indication 120 may be visually displayed. For example, the indication 120 may be displayed on an electronic display (see the display 708 in FIG. 7), such as a computer monitor, tablet, smartphone, or television to indicate to a radiologist or clinician that the adnexal mass 130 is benign, malignant, or borderline. The indication 120 may be displayed as text (e.g., the word “malignant”). Alternatively or additionally, the indication 120 may be displayed, based on its value, as a symbol (e.g., a circle to indicate that the indication 120 has the value corresponding to “borderline”). Alternatively or additionally, the indication 120 may be displayed, based on its value, as a single color or a color scheme (e.g., green to indicate “benign”), a texture, a size, or any combination thereof. The indication 120 may be displayed in any additional or alternative manner or manners without departing from the scope hereof.
[0032] In some embodiments, the indication 120 is displayed on the electronic display with other information about the patient. For example, the indication 120 may be displayed next to the medical image 102, the mass image 106, the first mass component 112, the second mass component 114, or any combination thereof. The indication 120 may also be displayed on the electronic display adjacent to personally identifiable information (e.g., name, date-of-birth, etc.) or personal health information so that a person viewing the electronic display associates the indication 120 with the identified patient. In some embodiments, the indication 120 is saved to a medical record of the patient.
[0033] FIG. 1 may also be viewed as illustrating a method for classifying the adnexal mass 130 in the medical image 102. Specifically, the method includes the step of segmenting the medical image 102 into the mass image 106 and the background image 108. This step of segmenting may be performed by the image segmentor 104, as described above. The method also includes the step of separating the mass image 106 into the first mass component 112 and the second mass component 114. This step of separating may be performed by the mass-image separator 110, as described above. The method also includes the step of extracting the set of radiomic-feature values 136 from the first mass component 112 and the second mass component 114. This step of extracting may be performed by the radiomic-feature processor 116, as described above. The method also includes the step of processing the set of radiomic-feature values 136 to classify the adnexal mass 130 as being malignant, borderline, or benign. This step of processing may be performed by the diagnostic classifier 118, as described above. The method also includes the step of outputting the indication 120 that the adnexal mass 130 is malignant, borderline, or benign. As shown in FIG. 1, this step of outputting may also be performed by the diagnostic classifier 118.
[0034] The method may further include one or more steps related to screening for, diagnosing, treating, or monitoring ovarian cancer in a human patient. For example, in some embodiments the method further includes performing sonography on the patient to generate the medical image 102. The sonography may be performed, for example, with an ultrasound machine, as described above. In other embodiments, the method further includes diagnosing the patient based on the indication 120. In other embodiments, the method further includes ordering or performing additional tests based on the indication 120. For example, where the indication 120 indicates that the adnexal mass 130 is malignant or borderline, a doctor may order one or more additional tests. Examples of such additional tests include, but are not limited to, a CA-125 blood test and a rectovaginal pelvic examination. In this case, the patient may then be diagnosed based on the results of these one or more additional tests.
[0035] Where the patient is diagnosed with ovarian cancer, the patient may be provided with one or more therapeutic interventions for treating ovarian cancer. Examples of such therapeutic interventions include, but are not limited to, surgical procedures, non-surgical medical procedures, and prescriptions for one or more pharmaceutical drugs.
[0036] While the above description describes embodiments for classifying adnexal masses in ultrasound images, it should be noted that the present embodiments may also be used for classifying other types of masses, lesions, cysts, or tumors that appear in ultrasound images. Thus, the present embodiments are not limited to adnexal masses. Examples of other mass types include, but are not limited to, abdominal masses, breast lumps, renal masses, pancreatic cysts and neoplasms, and neuroblastomas. Thus, the present embodiments may be used to detect not just ovarian cancer, but many other types of cancer as well.
[0037] While FIG. 1 shows the medical image 102 as an ultrasound image, the present embodiments may be used to classify masses in other types of medical images. Examples of such medical images include, but are not limited to, magnetic resonance imaging (MRI) scans, x-ray scans, CT scans, and PET scans. For example, in an MRI image, brighter areas indicate regions of relatively high signal intensity while darker areas indicate regions of relatively low signal intensity. The bright areas are called “hyperintense” while the darker areas are called “hypointense.” The mass-image separator 110 of FIG. 1 may be used to separate the MRI image into a hyperintense component that is the first mass component 112 of FIG. 1 and a hypointense component that is the second mass component 114 of FIG. 1. The radiomic-feature processor 116 then extracts the radiomic-feature values 136 from these hyperintense and hypointense components. The resulting classification of the mass, based on radiomics of the separated hyperintense and hypointense components, may have higher diagnostic accuracy than that based on radiomics obtained from the unseparated MRI image.
[0038] In another example, the brightness of an x-ray or CT-scan image is based on the density of the imaged tissue. The bright areas are called “hyperdense” while the darker areas are called “hypodense.” Thus, the mass-image separator 110 of FIG. 1 may be used to separate an x-ray or CT-scan image into a “hyperdense” component that is the first mass component 112 of FIG. 1 and a “hypodense” component that is the second mass component 114 of FIG. 1. The radiomic-feature processor 116 then extracts the radiomic-feature values 136 from these hyperdense and hypodense components. The resulting classification of the mass, based on radiomics of the separated hyperdense and hypodense components, may have a higher diagnostic accuracy than that based on radiomics obtained from the unseparated x-ray or CT- scan image. Since CT-scan images are three-dimensional (3D), the present embodiments may be used with a 2D slice of the 3D CT-scan image.
[0039] While the above description is focused on classifying masses in medical images obtained from human subjects, the present embodiments may also be used to classify masses in medical images from non-human subjects, such as animals. Thus, the present embodiments may be used to assist with cancer diagnosis and treatment in veterinary medicine.
Demonstration
[0040] Materials and Methods
[0041] The purpose of this study was to develop an AI/CADx-based method to distinguish between benign and malignant adnexal masses on grayscale ultrasound imaging. The proposed pipeline includes intra-mass and radiomic feature-based machine-learning methodologies to assess adnexal masses using AI/CADx.
[0042] Study Patients and Mass Characteristics: This retrospective single-center study was conducted at the University of Chicago. The study cohort was retrieved from a previously described clinicopathologic database that included more than 500 consecutive patients with adnexal masses and available ultrasound imaging (2017-2022) [18]. The database has since been updated with eight additional consecutive months of patient collection (11/2022-06/2023) under an approved HIPAA-compliant protocol by the institutional review board. Exclusion criteria at the patient, mass, and imaging levels were followed for the AI development pipeline [19] (see FIG. 2). Only patients with surgical evaluations, histopathologic findings of adnexal mass origin, and ultrasound imaging conducted at the institution were included. We required one high-quality, representative transvaginal grayscale image per mass and excluded images with undefined mass borders and images with measurement sonographic markups. Patients with bilateral malignant masses that met the study inclusion criteria were included in the AI dataset, resulting in three additional masses. Borderline ovarian tumors were grouped with malignant masses for diagnostic performance and statistical analyses because borderline tumors also require surgery.
[0043] Image Acquisition: Sonographic images for the evaluation of adnexal masses were clinically acquired using ultrasound machines (GE Voluson E8 or E10 or Samsung Elite WS80). The images were retrospectively retrieved in DICOM format and fully de-identified for the current study.
[0044] Mass Segmentation and Feature Extraction: The previously developed physics-driven segmentation model [19] applies a user-provided bounding box around the region of interest (adnexal mass) as input to (1) segment the masses from the image background using a supervised deep learning (DL) U-net model and (2) separate them into intra-mass relative hypoechogenic and hyperechogenic components using an unsupervised machine learning fuzzy c-means algorithm (see FIG. 3). Eight component-based human-engineered radiomic features were extracted from each mass (see Table 3). The features describe component morphology (spatial variations in pixel values near the edge of each component), geometry (shape and size), and texture (the spatial relationships between image pixels in terms of the change in intensity patterns and gray levels) (see FIG. 3) [20-23]. These features had been identified separately from this study for their ability to individually distinguish between malignant and benign masses on ultrasound imaging, i.e., algorithmic feature selection was not conducted in this study. Four features based on mass morphology were calculated to describe the hypoechogenic components (one feature) or the hyperechogenic components (three features). Two features were based upon the geometry of the components: one feature was measured as the fraction of pixels within the mass that was within the hypoechogenic component, while one feature (which we term a “proportion” feature) was calculated as

Deff,hypo / Deff,hyper    (1)

where Deff = 2√(A/π) is the effective diameter of each component (i.e., the diameter of a circle with the same area as the component), A refers to the area (in pixels) of each component, and the subscripts hypo and hyper refer to the relative hypoechogenic and hyperechogenic components, respectively. Two features, based upon the relative textures of difference entropy and correlation, were calculated as

fhypo / fhyper    (2)

where f is the radiomic feature from each component.
A binary feature (i.e., present or absent) for the presence of solid components or lack thereof was determined from an expert manual review of each case. Thus, a total of nine features were used in the study.
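Assuming the geometry features are computed directly from the two component masks as described above, with the proportion feature taken as the ratio of the components' effective diameters, the calculation can be sketched as follows. The function names and dictionary keys are hypothetical, and the per-component texture values are supplied by the caller.

```python
import numpy as np

def effective_diameter(area_px):
    """Diameter of a circle whose area equals the component's area (in pixels)."""
    return 2.0 * np.sqrt(area_px / np.pi)

def component_ratio_features(hypo_mask, hyper_mask, f_hypo, f_hyper):
    """Geometry and relative-texture features from the two intra-mass components.

    hypo_mask, hyper_mask: boolean arrays marking each component's pixels.
    f_hypo, f_hyper: one texture feature (e.g., difference entropy) computed
                     separately on each component by the caller.
    """
    a_hypo = float(hypo_mask.sum())
    a_hyper = float(hyper_mask.sum())
    return {
        # Fraction of mass pixels inside the hypoechogenic component
        "hypo_fraction": a_hypo / (a_hypo + a_hyper),
        # "Proportion" feature: ratio of effective diameters of the components
        "proportion": effective_diameter(a_hypo) / effective_diameter(a_hyper),
        # Relative texture: ratio of the feature between components
        "texture_ratio": f_hypo / f_hyper,
    }
```

Because the effective diameter is a monotone function of area, the proportion feature reduces to the square root of the area ratio of the two components.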
[0045] Mass Classification: The dataset was manually split by patient into a classification training/validation set (95 masses; 70%) and an independent held-out test set (41 masses; 30%) to match stratified adnexal pathologies and clinical parameters (menopausal status and race) between the two sets. Using the training set, a linear discriminant analysis classifier was trained using the nine extracted features to yield a likelihood of malignancy. This classifier was applied to the classification training/validation set (a “self-test”) and then separately to the independent held-out test set. Pathology records served as the ground truth reference standard for classification. For the task of classification of masses as malignant or benign, the figure of merit was the area under the receiver operating characteristic (ROC) curve (AUC) [24], calculated using the proper binormal model [25]. We also calculated the empirical ROC curve. Diagnostic performance was also evaluated at target 95% sensitivity, yielding corresponding specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy.
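The training and evaluation wiring described above can be sketched with scikit-learn on synthetic stand-in data. This computes only the empirical ROC; the proper binormal model used for the reported AUCs is not reproduced here, and the random features and labels are placeholders for the study's nine-feature data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic stand-ins for the nine feature values per mass (eight radiomic
# features plus the binary solid-component flag); shapes mirror the study's
# 95-mass training set and 41-mass held-out test set.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(95, 9)), rng.integers(0, 2, 95)
X_test, y_test = rng.normal(size=(41, 9)), rng.integers(0, 2, 41)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
lm = lda.predict_proba(X_test)[:, 1]  # likelihood of malignancy per mass

auc = roc_auc_score(y_test, lm)       # empirical AUC on the held-out set
fpr, tpr, _ = roc_curve(y_test, lm)
i = np.argmax(tpr >= 0.95)            # first operating point at >= 95% sensitivity
specificity = 1.0 - fpr[i]
```

In practice, the operating threshold at target sensitivity would be fixed on the training/validation set and then applied unchanged to the test set, as in the study.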
[0046] The likelihood of malignancy of individual masses, along with internal segmentation performance, was reviewed by clinicians for qualitative correspondence with ultrasound images as well as gross and histological pathologies.
[0047] Statistical Analysis: The median and 95% confidence interval (CI) of the AUC, sensitivity, specificity, PPV, NPV, and accuracy were determined for the training/validation set and for the separate test set by a posteriori bootstrapping of the classifier output 2000 times, i.e., randomly sampling with replacement. Statistical analyses were performed in MATLAB (MATLAB 2022b, MathWorks).
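The a posteriori bootstrap can be sketched as resampling cases with replacement and recomputing the figure of merit each time, shown here in Python for the empirical AUC (the study used MATLAB; the function name and the skipping of single-class resamples are assumptions of this sketch).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Median and (1 - alpha) CI of the empirical AUC by case-level
    bootstrap of the classifier output (sampling with replacement)."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample case indices with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue  # skip degenerate resamples containing a single class
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    lo, med, hi = np.percentile(aucs, [100 * alpha / 2, 50, 100 * (1 - alpha / 2)])
    return med, (lo, hi)
```

The same resampling loop yields confidence intervals for sensitivity, specificity, PPV, NPV, and accuracy by swapping in the corresponding metric.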
[0048] Results
[0049] Patient and Mass Characteristics: The research database had 594 patients with at least one adnexal mass. The inclusion and exclusion criteria at the patient, adnexal mass, and image levels (see FIG. 2) resulted in a final dataset of 133 unique patients with 136 adnexal masses (see Table 1 and Table 2). Three patients had bilateral malignant masses. The malignant tumor prevalence was 27.9% (38/136). The mean (range) patient age was 45 (20-82) years, and 39.8% (53/133) of the patients were postmenopausal.
Table 1: Demographics and clinicopathological characteristics of patients by training/validation and test sets (N=133 patients). Unless otherwise indicated, data are numbers of patients and data in parentheses are percentages. Percentages may not add up to 100% due to rounding. The dataset was split into training/validation and test sets by patient. Three patients had bilateral malignant masses. a Data in parentheses are standard deviations. b Other race groups include Asian/Mideast Indian (n=9), Native Hawaiian or other Pacific Islander (n=1), and more than one race (n=3). c Other race groups include Asian/Mideast Indian (n=2) and more than one race (n=2) patients.
Table 2: Clinicopathological characteristics of masses training/validation and test sets (N=136 masses). Unless otherwise indicated, data are numbers of masses, and data in parentheses are percentages. Percentages may not add up to 100% due to rounding. The dataset was split into training/validation and test sets by patient. Three patients had bilateral malignant masses.
[0050] Mass Classification Performance. The radiomic features used for classifying adnexal masses are shown in Table 3, reflecting morphology, shape, size, and texture qualities. The corresponding ROC curves for the training/validation and independent test sets are shown in FIG. 4 for both empirical (raw) and fitted curves. The AUC in distinguishing between malignant and benign masses was 0.90 [0.84, 0.95] in the training/validation set and 0.93 [0.83, 0.98] in the independent test set using the proper binormal model. The AI/CADx model achieved the target 95% sensitivity, along with a very high NPV of 0.997 [0.935, 1.000] and relatively high specificity of 0.71 [0.53, 0.84] and PPV of 0.58 [0.47, 0.71] in the independent test set (see Table 4).
Table 3: Descriptions of radiomic features used in the study.
Table 4: Median diagnostic performance at target 95% sensitivity (determined in the training/validation set) of the AI/CADx model for classifying benign and malignant adnexal masses. Numbers in parentheses are raw data from the empirical ROC curves, and numbers in brackets are 95% confidence intervals. Values are reported with three significant figures when necessary to differentiate median from confidence interval.
[0051] The review of individual masses showed that the automatic intra-mass segmentations and some of the radiomic features corresponded with mass characteristics observable in qualitative image assessment, gross pathology, and histopathological examination (see FIG. 5). Furthermore, the radiomic features captured unique characteristics that were useful for diagnosis but not easily viewable in qualitative image assessment. These are apparent in the examples pictured in FIG. 5 and described below, which portray the use of corresponding qualitative and quantitative analyses.
[0052] Benign serous cystadenofibromas, which are often difficult to diagnose [4], usually appear on grayscale and Doppler ultrasound imaging as cystic masses that have avascular solid papillary projections with posterior acoustic shadowing [26]. These qualitative imaging characteristics correspond with the known gross pathology, since cystadenofibromas contain cysts larger than 1 cm and variable amounts of solid areas, often with simple and broad papillae. On histology, cysts lined by a single layer of serous epithelium are surrounded by dense fibromatous stroma [27]. These characteristics were also recognized by the automatic segmentation, as seen in panels A and B of FIG. 5. Some of the radiomic features used in this study (measuring size and shape) reflect geometric characteristics of hyperechogenic components, including papillary projections such as in these benign serous cystadenofibromas. Other radiomic features, particularly the edges of the hyperechogenic components as measured by morphology features, correspond to characteristics of the solid papillary projections for this type of benign mass. These features characterize three different qualities of the hyperechogenic components (margin sharpness variance, margin sharpness mean, and variance of the radial gradient histogram), broadening and quantifying the characteristics used for distinguishing them from solid papillary projections in malignant masses. Furthermore, the ratio of texture features between components captures distinctive aspects of these masses that are not readily viewed through visual inspection of ultrasound images.
[0053] In comparison, patients with advanced-stage high-grade serous ovarian cancers (HGSOC), the most common ovarian cancer subtype, often present with bilateral irregular solid or multilocular-solid masses (see panel C of FIG. 5), ascites, and upper abdominal disease [28]. High vascular flow and areas of necrosis in the solid elements on Doppler imaging are common on ultrasound imaging [28]. As with cystadenofibromas, these findings often correlate with the gross pathology findings: HGSOCs are often bilateral, show exophytic growth, and contain solid, papillary, and cystic areas, as well as extensive necrosis. Malignant Sertoli-Leydig cell tumors, another rare type of ovarian cancer, are of sex-cord stromal origin. On ultrasound imaging, they may have variable appearances; they are usually purely solid or multilocular-solid masses with areas of packed small cystic locules in the solid elements [29], as seen in panel D of FIG. 5. High vascular flow on Doppler imaging is often present. These findings agree with the pathology examinations since Sertoli-Leydig cell masses are usually solid or mixed solid and cystic on gross pathology. The size- and shape-based radiomic features used in this study reflect the larger fraction of hyperechogenic components often seen in malignant adnexal masses. The use of morphology features from these hyperechogenic components and the ratio of textures between components for distinguishing malignant from benign masses additionally emphasizes that the edges of the hyperechogenic components and the relative textures of the components characterize the malignant masses in aspects not easily seen in a qualitative review of the images.
[0054] Overall, the use of radiomic features demonstrates that quantitative imaging characteristics, which are not readily apparent to the eye (i.e., edges of components, the relative nature of features between components, and the merging of the features by a classifier), provide measures of adnexal masses that are unique and supplement the existing qualitative sonographic review frameworks. Additional examples of sonographic and AI-based component analyses from the training/validation set are presented in FIG. 6.
[0055] Discussion
[0056] Adnexal masses are common in both pre- and postmenopausal patients. Ultrasound is the preferred initial imaging modality for characterizing these masses, but image interpretation can be difficult, and masses are frequently classified as indeterminate. In this study, an AI/CADx-based pipeline to differentiate between malignant and benign adnexal masses on ultrasound images, based upon edge, geometry, and texture characteristics of the hypoechogenic and hyperechogenic components, demonstrated overall strong classification performance with an overall AUC of 0.93 in the independent test set (see FIG. 4). At a target 95% sensitivity to maximize cancer diagnosis, the specificity was 0.71 [0.53, 0.84] (see Table 4), which may suggest that additional pipeline development is warranted to minimize false positive results further. However, at the same target, the NPV was 0.997 [0.935, 1.000], clinically reassuring that a negative test is accurate and that no cancer is misclassified as a benign mass. In routine clinical practice, MRI is often used as a secondary imaging modality when the ultrasound assessment is suboptimal or indeterminate (O-RADS ultrasound scores of 3 or 4). In a large prospective multicenter cohort of patients with sonographically indeterminate adnexal masses, diagnostic pelvic MRI had an NPV of 98% (at 18% malignancy prevalence) [10]. Our study presents a low-complexity model with a high and reassuring NPV based on ultrasound imaging, which is widely available, much cheaper than MRI, and does not require additional interpretative skills.
[0057] Our findings are consistent with previously published work by others on AI-based classification of adnexal masses using different imaging modalities [30, 31]. A recent meta-analysis of AI-based systems found a pooled sensitivity of 0.91, a pooled specificity of 0.87, and an AUC of 0.95 with ultrasound [30]. However, a more comprehensive pipeline, including mass segmentation and classification [32, 33] and assessments by different mass components [33-35] on ultrasound imaging, was seldom explored. Moreover, none of these studies analyzed the heterogeneous nature of adnexal masses by the relative echogenicity of mass components under the same unsupervised pipeline. The performance of our innovative AI/CADx-based model suggests that a comprehensive pipeline using human-engineered component-based feature extraction and analysis, while leveraging supervised DL and unsupervised ML tools for automatic mass segmentation, can potentially provide a comprehensive methodology for sonographic adnexal mass assessment with less human variability and more accurate mass diagnosis.
[0058] The features used in this study inform our understanding of radiomic properties useful in the classification of adnexal masses. Previously, these features have been used for the diagnosis of whole breast lesions on ultrasound images [20-23]. In the current study, it was notable that the edges of the intra-mass components, characterized through morphology features, were important for classification. This may indicate that the boundaries of intra-mass components portray characteristics that can distinguish malignant tissue from benign masses. Furthermore, the relative nature of radiomic features between hypoechogenic and hyperechogenic components was also informative, as indicated by the use of the ratios of effective diameters, areas, difference entropy, and correlation between the components. Overall, these features showed that the edges and the relative nature of intra-mass components enable adnexal mass evaluations that reflect the internal mass architecture.
[0059] There were some limitations to our study. First, to reduce confounding factors, we incorporated strict clinical and technical inclusion criteria for the masses and images used in the segmentation and classification pipeline. Future studies will include masses with undefined borders on imaging and pelvic pathologies that do not arise from the adnexal region (e.g., pedunculated fibroids and appendiceal tumors). Second, the size of the dataset was influenced by the retrospective single-center design, our overall inclusion criteria, and the accrual of cases at our institution. We will continue to evaluate our pipeline as additional cases accrue at our institution and ultimately initiate a prospective multi-center study using diverse image databases. Third, the AI/CADx model used a binary feature for the presence of solid components of the masses, derived from expert review of the case. In the future, the discovery and evaluation of solid components could be automated.
[0060] In conclusion, a hybrid AI/CADx pipeline incorporating automatic external mass border segmentation, automatic physics-driven internal echogenic component segmentation, and radiomic feature analysis specific to the components and their relative nature can distinguish between malignant and benign masses with very high sensitivity and relatively high specificity. This hybrid AI/CADx pipeline could potentially serve as a second reader to ensure that no malignant tumor will be missed, which is especially important as expectations for clinical productivity increase. It may also reduce user variability and reflect the mass’s heterogeneous architecture. These results highlight the importance of component-based analysis of adnexal masses on ultrasound imaging for automatic assessments. Our study results support further evaluation of the hybrid pipeline on expanded cohorts (including patients managed conservatively), variations in image acquisition, and validation using independent datasets.
[0061] Figure Captions
[0062] FIG. 2: Flowchart showing exclusion criteria and resulting eligible cases and masses.
[0063] FIG. 3: AI/CADx pipeline for adnexal mass diagnosis. AI/CADx: Artificial intelligence/computer-aided diagnosis.
[0064] FIG. 4: ROC analysis in the task of classifying adnexal masses as malignant or benign. Both the proper binormal model and empirical curves are shown. The AUC for the proper binormal model was (median, [95% CI]) 0.90 [0.84, 0.95] in the training/validation set and 0.93 [0.83, 0.98] in the independent test set. ROC: receiver operating characteristic. AUC: area under the receiver operating characteristic curve.
[0065] FIG. 5: Sonographic and AI/CADx-based automatic segmentation, component-based clustering, and histopathology examples of individual masses in the test set. Images of two benign (A, B) and two malignant (C, D) ovarian masses and their corresponding likelihood of malignancy (LM) from prediction as malignant or benign by the AI/CADx model are shown. On histopathology, the architecture of high-grade serous ovarian cancers varies widely between solid, papillary, cribriform, and glandular tumor growth. For the histopathology of Sertoli-Leydig cell masses, hypocellular and cellular areas alternate and contain variable quantities of Sertoli-cell and Leydig-cell components and primitive gonadal stroma. AI/CADx: artificial intelligence/computer-aided diagnosis. Pathology case numbers are obscured from the images according to HIPAA regulations.
[0066] FIG. 6: Sonographic and AI/CADx-based automatic segmentation and component-based clustering of individual masses in the training/validation set. Images of three benign (A, B, C) and two malignant/borderline (D, E) ovarian masses from the training/validation set and their corresponding likelihood of malignancy (LM) from prediction as malignant or benign by the AI/CADx model are shown.
System Embodiments
[0067] FIG. 7 is a diagram of a system 700 that classifies a mass in a medical image, in accordance with some of the present embodiments. The system 700 implements any of the methods disclosed herein. The system 700 has a processor 702, a memory 720, and a secondary storage device 712 that communicate with each other over a system bus 710. For example, the memory 720 may be volatile RAM located proximate to the processor 702 while the secondary storage device 712 may be a hard disk drive, a solid-state drive, an optical storage device, or another type of persistent data storage. The secondary storage device 712 may alternatively be accessed via an external network. Additional and/or other types of the memory 720 and the secondary storage device 712 may be used without departing from the scope hereof.
[0068] The system 700 may include one or more input/output (I/O) blocks 704 for communicating with one or more peripheral devices. In the example of FIG. 7, the system 700 includes a first I/O block 704(1) that receives the medical image 102 from an external device (e.g., an ultrasound machine) and a second I/O block 704(2) that transmits the indication 120 to an external device. The I/O blocks 704(1) and 704(2) are connected to the system bus 710 and therefore can communicate with the processor 702 and the memory 720. Each of the I/O blocks 704(1) and 704(2) may implement a wired network interface (e.g., Ethernet, Infiniband, etc.), wireless network interface (e.g., WiFi, Bluetooth, BLE, etc.), cellular network interface (e.g., 4G, 5G, LTE), optical network interface (e.g., SONET, SDH, IrDA, etc.), multi-media card interface (e.g., SD card, CompactFlash, etc.), or other type of communication port through which the system 700 can communicate with other devices.
[0069] In some embodiments, and as shown in the example of FIG. 7, the system 700 also includes a display adapter 706 connected to the system bus 710. The system 700 may further include a display 708 connected to the display adapter 706. In these embodiments, the system 700 may transmit the indication 120 to the display adapter 706 for displaying on the display 708. As described above, the system 700 may display, on the display 708, additional information (e.g., the medical image 102, the mass image 106, one or more of the radiomic- feature values 136, personally identifiable information, etc.).
[0070] The processor 702 may be any type of circuit or integrated circuit capable of performing logic, control, and input/output operations. For example, the processor 702 may include a microprocessor with one or more central processing unit (CPU) cores, a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a combination thereof. The processor 702 may also include a memory controller, bus controller, and other components that manage data flow between the processor 702, the memory 720, and other devices connected to the system bus 710. Although not shown in FIG. 7, the system 700 may include a co-processor (e.g., a GPU, field-programmable gate array (FPGA), or machine-learning accelerator) that communicates with the processor 702 over the system bus 710. The co-processor may assist with execution of machine-learning and statistical models (e.g., the MLM 134 of FIG. 1, the MLM 144 of FIG. 1, and the MLM 154 of FIG. 1).
[0071] The memory 720 stores machine-readable instructions 722 that, when executed by the processor 702 (and co-processor, when present), control the system 700 to implement the functionality and methods described herein. The memory 720 also stores data 740 used by the processor 702 (and co-processor, when present) when executing the machine-readable instructions 722. In the example of FIG. 7, the data 740 includes the medical image 102, the mass image 106, the background image 108, the first mass component 112, the second mass component 114, the set of radiomic-feature values 136, and the indication 120. The memory 720 also stores parameters (e.g., weights) of machine-learning and statistical models, including image-segmentor parameters 742 that define the MLM 134 of FIG. 1, image-separator parameters 744 that define the MLM 144 of FIG. 1, and diagnostic-classifier parameters 746 that define the MLM 154 of FIG. 1. The memory 720 may store additional data 740 than shown in FIG. 7. In addition, some or all of the data 740 may be stored in the secondary storage device 712 and fetched from the secondary storage device 712 when needed.
[0072] In the example of FIG. 7, the machine-readable instructions 722 include an image segmentor 724 that implements the image segmentor 104 of FIG. 1, a mass-image separator 726 that implements the mass-image separator 110 of FIG. 1, a radiomic-feature processor 728 that implements the radiomic-feature processor 116 of FIG. 1, a diagnostic classifier 730 that implements the diagnostic classifier 118 of FIG. 1, and an outputter 732. The image segmentor 724, when executed by the processor 702, controls the system 700 to segment the medical image 102 into the mass image 106 and the background image 108. The mass-image separator 726, when executed by the processor 702, controls the system 700 to separate the mass image 106 into the first mass component 112 and the second mass component 114. The radiomic-feature processor 728, when executed by the processor 702, controls the system 700 to extract the set of radiomic-feature values 136 from the first mass component 112 and the second mass component 114. The diagnostic classifier 730, when executed by the processor 702, controls the system 700 to process the set of radiomic-feature values 136 to classify the mass as belonging to a first class (e.g., malignant) or a second class (e.g., benign). The diagnostic classifier 730, when executed by the processor 702, may alternatively control the system 700 to process the set of radiomic-feature values 136 to classify the mass as belonging to a first class (e.g., malignant), a second class (e.g., benign), or a third class (e.g., borderline). The outputter 732, when executed by the processor 702, controls the system 700 to output the indication 120. The memory 720 may store additional machine-readable instructions 722 than shown in FIG. 7 without departing from the scope hereof.
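Functionally, the stored instructions implement a four-stage pipeline. A minimal, modality-agnostic sketch of that control flow is below; the callables and their signatures are stand-ins for the trained models of FIG. 1, not the actual implementation.

```python
# Hypothetical orchestration of the four stages held as machine-readable
# instructions in FIG. 7; each callable is a stand-in for a trained model.
def classify_mass(medical_image, segmentor, separator, feature_extractor, classifier):
    mass_image, background = segmentor(medical_image)   # image segmentor 724
    comp1, comp2 = separator(mass_image)                # mass-image separator 726
    features = feature_extractor(comp1, comp2)          # radiomic-feature processor 728
    label = classifier(features)                        # diagnostic classifier 730
    return label                                        # indication output by outputter 732
```

Keeping the stages behind plain callables mirrors the modular layout of FIG. 7: any single stage (e.g., the separator) can be swapped for a modality-specific variant without touching the rest of the pipeline.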
[0073] In some embodiments, the system 700 is incorporated into a medical-imaging system, such as a CT scanner, sonograph, or MRI machine. In these embodiments, the system 700 may cooperate with the medical-imaging system to receive the medical image 102 and output the indication 120. In other embodiments, the system 700 is separate from the medical-imaging system. In these embodiments, the system 700 may communicate with the medical-imaging system (e.g., via an Ethernet connection) to receive the medical image 102. In other embodiments, the system 700 operates independently of any medical-imaging system. For example, the system 700 may download the medical image 102 from a server, memory stick, or flash drive on which the medical image 102 is stored.
[0074] While FIG. 7 shows the system 700 as a computing system that directly executes the machine-readable instructions 722 with the processor 702, the system 700 may alternatively be configured, either entirely or in part, using circuitry that is hard-wired to implement the functionality of the present embodiments (as opposed to directly executing code). Examples of such circuitry include, but are not limited to, field-programmable gate arrays (FPGAs), systems-on-chip (SoCs), and programmable logic devices (PLDs).
Combinations of Features
[0075] Features described above as well as those claimed below may be combined in various ways without departing from the scope hereof. The following examples illustrate possible, non-limiting combinations of features and embodiments described above. It should be clear that other changes and modifications may be made to the present embodiments without departing from the spirit and scope of this invention:
[0076] (A1) A method for classifying a mass in a medical image includes segmenting the medical image into a mass image and a background image, separating the mass image into a first mass component and a second mass component, extracting a set of radiomic-feature values from the first mass component and the second mass component, processing the set of radiomic-feature values to classify the mass as belonging to a first class or a second class, and outputting an indication that the mass belongs to the first class or the second class.
[0077] (A2) In the method denoted (A1), the first class is malignant and the second class is benign.
[0078] (A3) In either of the methods denoted (A1) and (A2), said processing includes processing the set of radiomic-feature values to classify the mass as belonging to the first class, the second class, or a third class. Furthermore, said outputting includes outputting an indication that the mass belongs to the first class, the second class, or the third class.
[0079] (A4) In any of the methods denoted (A1) to (A3), said segmenting the medical image includes segmenting an ultrasound image, an x-ray image, a two-dimensional slice of a CT-scan image, or a two-dimensional MRI image.
[0080] (A5) In the method denoted (A4), said segmenting the ultrasound image includes segmenting a transvaginal ultrasound image having a fully defined border of an adnexal mass.
[0081] (A6) In any of the methods denoted (A1) to (A5), the method further includes performing medical imaging on a patient to generate the medical image.
[0082] (A7) In any of the methods denoted (A1) to (A6), said segmenting the medical image includes feeding the medical image into a trained convolutional neural network.
[0083] (A8) In the method denoted (A7), said feeding the medical image into the trained convolutional neural network includes feeding the medical image into a U-Net.
[0084] (A9) In any of the methods denoted (A1) to (A8), said segmenting is based on a bounding box.
[0085] (A10) In any of the methods denoted (A1) to (A9), said separating the mass image includes clustering each of a plurality of pixels of the mass image into one or both of a first cluster and a second cluster. The first cluster forms the first mass component while the second cluster forms the second mass component.
[0086] (A11) In the method denoted (A10), said clustering uses fuzzy clustering.
[0087] (A12) In the method denoted (A11), said clustering uses fuzzy c-means clustering.
[0088] (A13) In any of the methods denoted (A1) to (A12), said extracting the set of radiomic-feature values includes extracting at least one of (i) a radiomic-feature value quantifying a morphology of the first mass component or the second mass component, (ii) a radiomic-feature value quantifying a geometry of the first mass component or the second mass component, and (iii) a radiomic-feature value quantifying a texture of the first mass component or the second mass component.
[0089] (A14) In the method denoted (A13), the geometry of the first mass component includes one or both of an area of the first mass component and an effective diameter of the first mass component. Furthermore, the geometry of the second mass component includes one or both of an area of the second mass component and an effective diameter of the second mass component.
[0090] (A15) In any of the methods denoted (A1) to (A14), said processing the set of radiomic-feature values includes feeding the set of radiomic-feature values into a trained discriminant analysis classifier.
[0091] (A16) In the method denoted (A15), said feeding the set of radiomic-feature values into the trained discriminant analysis classifier comprises feeding the set of radiomic-feature values into a trained linear discriminant analysis classifier.
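A two-class linear discriminant analysis classifier as in (A15) and (A16) can be fit in closed form from the class means and a pooled within-class covariance. The sketch below uses synthetic stand-in radiomic-feature vectors; the feature values, the equal-priors assumption, and the small ridge term are illustrative assumptions, not trained model parameters:

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class LDA: w = Sigma^-1 (mu1 - mu0), with a midpoint bias.

    Returns (w, b) so that sign(x @ w + b) classifies a feature vector,
    assuming equal class priors and a shared covariance.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    # Pooled within-class covariance
    cov = ((X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)) / (n0 + n1 - 2)
    cov += 1e-6 * np.eye(cov.shape[0])   # ridge for numerical stability
    w = np.linalg.solve(cov, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)           # decision boundary at the midpoint
    return w, b

rng = np.random.default_rng(1)
# Hypothetical 2-D radiomic-feature vectors: class 0 vs. class 1
X0 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
X1 = rng.normal([1.0, 1.0], 0.3, size=(50, 2))
w, b = fit_lda(X0, X1)
scores = np.r_[X0, X1] @ w + b
pred = (scores > 0).astype(int)
```

In a deployed system the classifier would of course be trained on labeled radiomic-feature values rather than synthetic draws.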
[0092] (A17) In any of the methods denoted (A1) to (A16), said outputting the indication includes displaying the indication on a screen.
[0093] (A18) In any of the methods denoted (A1) to (A17), the method further includes diagnosing, based on the indication, a patient with a disease.
[0094] (A19) In the method denoted (A18), the method further includes providing the patient with a therapeutic intervention for treating the disease.
[0095] (A20) In the method denoted (A19), the therapeutic intervention includes a surgical procedure, a non-surgical medical procedure, a prescription for one or more pharmaceutical drugs, or a combination thereof.
[0096] (B1) A system for classifying a mass in a medical image includes a processor and a memory in electronic communication with the processor. The memory stores machine-readable instructions that, when executed by the processor, control the system to segment the medical image into a mass image and a background image, separate the mass image into a first mass component and a second mass component, extract a set of radiomic-feature values from the first mass component and the second mass component, process the set of radiomic-feature values to classify the mass as belonging to a first class or a second class, and output an indication that the mass belongs to the first class or the second class.
[0097] (B2) In the system denoted (B1), the first class is malignant and the second class is benign.
[0098] (B3) In either of the systems denoted (B1) and (B2), the machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values include machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values to classify the mass as belonging to the first class, the second class, or a third class. Furthermore, the machine-readable instructions that, when executed by the processor, control the system to output the indication include machine-readable instructions that, when executed by the processor, control the system to output an indication that the mass belongs to the first class, the second class, or the third class.
[0099] (B4) In any of the systems denoted (B1) to (B3), the medical image is an ultrasound image, an x-ray image, a two-dimensional slice of a CT-scan image, or a two-dimensional MRI image.
[0100] (B5) In the system denoted (B4), the ultrasound image is a transvaginal ultrasound image having a fully defined border of an adnexal mass.
[0101] (B6) In any of the systems denoted (B1) to (B5), the machine-readable instructions that, when executed by the processor, control the system to segment the medical image include machine-readable instructions that, when executed by the processor, control the system to feed the medical image into a trained convolutional neural network.
[0102] (B7) In the system denoted (B6), the trained convolutional neural network is a U-Net.
[0103] (B8) In any of the systems denoted (B1) to (B7), the machine-readable instructions that, when executed by the processor, control the system to segment the medical image include machine-readable instructions that, when executed by the processor, control the system to segment the medical image based on a bounding box.
[0104] (B9) In any of the systems denoted (B1) to (B8), the machine-readable instructions that, when executed by the processor, control the system to separate the mass image include machine-readable instructions that, when executed by the processor, control the system to cluster each of a plurality of pixels of the mass image into one or both of a first cluster and a second cluster. The first cluster forms the first mass component while the second cluster forms the second mass component.
[0105] (B10) In the system denoted (B9), the machine-readable instructions that, when executed by the processor, control the system to cluster each of the plurality of pixels include machine-readable instructions that, when executed by the processor, control the system to cluster using fuzzy clustering.
[0106] (B11) In the system denoted (B10), the machine-readable instructions that, when executed by the processor, control the system to implement fuzzy clustering include machine-readable instructions that, when executed by the processor, control the system to cluster using fuzzy c-means clustering.
[0107] (B12) In any of the systems denoted (B1) to (B11), the set of radiomic-feature values includes at least one of (i) a radiomic-feature value quantifying a morphology of the first mass component or the second mass component, (ii) a radiomic-feature value quantifying a geometry of the first mass component or the second mass component, and (iii) a radiomic-feature value quantifying a texture of the first mass component or the second mass component.
[0108] (B13) In the system denoted (B12), the geometry of the first mass component includes one or both of an area of the first mass component and an effective diameter of the first mass component. Furthermore, the geometry of the second mass component includes one or both of an area of the second mass component and an effective diameter of the second mass component.
[0109] (B14) In any of the systems denoted (B1) to (B13), the machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values include machine-readable instructions that, when executed by the processor, control the system to feed the set of radiomic-feature values into a trained discriminant analysis classifier.
[0110] (B15) In the system denoted (B14), the trained discriminant analysis classifier is a trained linear discriminant analysis classifier.
[0111] (B16) In any of the systems denoted (B1) to (B15), the memory stores additional machine-readable instructions that, when executed by the processor, control the system to transmit the indication to a screen for display on the screen.
[0112] (B17) In the system denoted (B16), the system further includes the screen.
[0113] Changes may be made in the above methods and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.

Claims

What is claimed is:
1. A method for classifying a mass in a medical image, comprising: segmenting the medical image into a mass image and a background image; separating the mass image into a first mass component and a second mass component; extracting a set of radiomic-feature values from the first mass component and the second mass component; processing the set of radiomic-feature values to classify the mass as belonging to a first class or a second class; and outputting an indication that the mass belongs to the first class or the second class.
2. The method of claim 1, wherein the first class is malignant and the second class is benign.
3. The method of claim 1, wherein: said processing comprises processing the set of radiomic-feature values to classify the mass as belonging to the first class, the second class, or a third class; and said outputting comprises outputting an indication that the mass belongs to the first class, the second class, or the third class.
4. The method of claim 1, wherein said segmenting the medical image comprises segmenting an ultrasound image, an x-ray image, a two-dimensional slice of a CT- scan image, or a two-dimensional MRI image.
5. The method of claim 4, wherein said segmenting the ultrasound image comprises segmenting a transvaginal ultrasound image having a fully defined border of an adnexal mass.
6. The method of claim 1, further comprising performing medical imaging on a patient to generate the medical image.
7. The method of claim 1, wherein said segmenting the medical image comprises feeding the medical image into a trained convolutional neural network.
8. The method of claim 7, wherein said feeding the medical image into the trained convolutional neural network comprises feeding the medical image into a U-Net.
9. The method of claim 1, wherein said segmenting is based on a bounding box.
10. The method of claim 1, wherein said separating the mass image comprises clustering each of a plurality of pixels of the mass image into one or both of a first cluster and a second cluster, the first cluster forming the first mass component, the second cluster forming the second mass component.
11. The method of claim 10, wherein said clustering uses fuzzy clustering.
12. The method of claim 11, wherein said fuzzy clustering uses fuzzy c-means clustering.
13. The method of claim 1, wherein said extracting the set of radiomic-feature values comprises extracting at least one of: a radiomic-feature value quantifying a morphology of the first mass component or the second mass component; a radiomic-feature value quantifying a geometry of the first mass component or the second mass component; and a radiomic-feature value quantifying a texture of the first mass component or the second mass component.
14. The method of claim 13, wherein: the geometry of the first mass component includes one or both of an area of the first mass component and an effective diameter of the first mass component; and the geometry of the second mass component includes one or both of an area of the second mass component and an effective diameter of the second mass component.
15. The method of claim 1, wherein said processing the set of radiomic-feature values comprises feeding the set of radiomic-feature values into a trained discriminant analysis classifier.
16. The method of claim 15, wherein said feeding the set of radiomic-feature values into the trained discriminant analysis classifier comprises feeding the set of radiomic-feature values into a trained linear discriminant analysis classifier.
17. The method of claim 1, wherein said outputting the indication comprises displaying the indication on a screen.
18. The method of claim 1, further comprising diagnosing, based on the indication, a patient with a disease.
19. The method of claim 18, further comprising providing the patient with a therapeutic intervention for treating the disease.
20. The method of claim 19, the therapeutic intervention comprising a surgical procedure, a non-surgical medical procedure, a prescription for one or more pharmaceutical drugs, or a combination thereof.
21. A system for classifying a mass in a medical image, comprising: a processor; and a memory in electronic communication with the processor, the memory storing machine-readable instructions that, when executed by the processor, control the system to: segment the medical image into a mass image and a background image; separate the mass image into a first mass component and a second mass component; extract a set of radiomic-feature values from the first mass component and the second mass component; process the set of radiomic-feature values to classify the mass as belonging to a first class or a second class; and output an indication that the mass belongs to the first class or the second class.
22. The system of claim 21, wherein the first class is malignant and the second class is benign.
23. The system of claim 21, wherein: the machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values comprise machine- readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values to classify the mass as belonging to the first class, the second class, or a third class; and the machine-readable instructions that, when executed by the processor, control the system to output the indication comprise machine-readable instructions that, when executed by the processor, control the system to output an indication that the mass belongs to the first class, the second class, or the third class.
24. The system of claim 21, the medical image comprising an ultrasound image, an x-ray image, a two-dimensional slice of a CT-scan image, or a two-dimensional MRI image.
25. The system of claim 24, the ultrasound image comprising a transvaginal ultrasound image having a fully defined border of an adnexal mass.
26. The system of claim 21, wherein the machine-readable instructions that, when executed by the processor, control the system to segment the medical image comprise machine-readable instructions that, when executed by the processor, control the system to feed the medical image into a trained convolutional neural network.
27. The system of claim 26, the trained convolutional neural network comprising a U-Net.
28. The system of claim 21, wherein the machine-readable instructions that, when executed by the processor, control the system to segment the medical image comprise machine-readable instructions that, when executed by the processor, control the system to segment the medical image based on a bounding box.
29. The system of claim 21, wherein the machine-readable instructions that, when executed by the processor, control the system to separate the mass image comprise machine-readable instructions that, when executed by the processor, control the system to cluster each of a plurality of pixels of the mass image into one or both of a first cluster and a second cluster, the first cluster forming the first mass component, the second cluster forming the second mass component.
30. The system of claim 29, wherein the machine-readable instructions that, when executed by the processor, control the system to cluster each of the plurality of pixels comprise machine-readable instructions that, when executed by the processor, control the system to cluster using fuzzy clustering.
31. The system of claim 30, wherein the machine-readable instructions that, when executed by the processor, control the system to implement fuzzy clustering comprise machine-readable instructions that, when executed by the processor, control the system to cluster using fuzzy c-means clustering.
32. The system of claim 21, the set of radiomic-feature values including at least one of: a radiomic-feature value quantifying a morphology of the first mass component or the second mass component; a radiomic-feature value quantifying a geometry of the first mass component or the second mass component; and a radiomic-feature value quantifying a texture of the first mass component or the second mass component.
33. The system of claim 32, wherein: the geometry of the first mass component includes one or both of an area of the first mass component and an effective diameter of the first mass component; and the geometry of the second mass component includes one or both of an area of the second mass component and an effective diameter of the second mass component.
34. The system of claim 21, wherein the machine-readable instructions that, when executed by the processor, control the system to process the set of radiomic-feature values comprise machine-readable instructions that, when executed by the processor, control the system to feed the set of radiomic-feature values into a trained discriminant analysis classifier.
35. The system of claim 34, the trained discriminant analysis classifier comprising a trained linear discriminant analysis classifier.
36. The system of claim 21, the memory storing additional machine-readable instructions that, when executed by the processor, control the system to transmit the indication to a screen for display on the screen.
37. The system of claim 36, further comprising the screen.
PCT/US2025/016112 2024-02-16 2025-02-14 Machine-learning-based systems and methods for classifying masses in medical images Pending WO2025175223A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202463554334P 2024-02-16 2024-02-16
US63/554,334 2024-02-16
US202463687625P 2024-08-27 2024-08-27
US63/687,625 2024-08-27

Publications (1)

Publication Number Publication Date
WO2025175223A1 true WO2025175223A1 (en) 2025-08-21


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6858007B1 (en) * 1998-11-25 2005-02-22 Ramot University Authority For Applied Research And Industrial Development Ltd. Method and system for automatic classification and quantitative evaluation of adnexal masses based on a cross-sectional or projectional images of the adnex
US20060245629A1 (en) * 2005-04-28 2006-11-02 Zhimin Huo Methods and systems for automated detection and analysis of lesion on magnetic resonance images
US20160078624A1 (en) * 2010-11-26 2016-03-17 Maryellen L. Giger Method, system, software and medium for advanced intelligent image analysis and display of medical images and information
US20210173188A1 (en) * 2017-08-09 2021-06-10 Allen Institute Systems, devices, and methods for image processing to generate an image having predictive tagging



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25755765

Country of ref document: EP

Kind code of ref document: A1