
WO2020012414A1 - Framework for the reduction of false positives in medical images - Google Patents

Framework for the reduction of false positives in medical images

Info

Publication number
WO2020012414A1
WO2020012414A1 (PCT/IB2019/055934)
Authority
WO
WIPO (PCT)
Prior art keywords
rois
classifier
positives
false
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2019/055934
Other languages
English (en)
Inventor
Mausumi Acharyya
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advenio Tecnosys Pvt Ltd
Original Assignee
Advenio Tecnosys Pvt Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advenio Tecnosys Pvt Ltd filed Critical Advenio Tecnosys Pvt Ltd
Publication of WO2020012414A1
Anticipated expiration
Legal status: Ceased (current)

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0895Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • AI Artificial Intelligence
  • CAD Computer Aided Diagnosis
  • AI/CAD systems help scan digital medical images for typical appearances and highlight conspicuous sections, such as possible diseases.
  • The purpose of AI/CAD is to improve the accuracy and consistency of image interpretation using the computer output. Such systems help clinicians and medical professionals better interpret the normal and abnormal features in the image provided, and they are rapidly entering the diagnostic healthcare mainstream.
  • the computer output is used as a "second opinion" in assisting clinicians/readers' image interpretations.
  • the computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification.
  • multi-MTANN multiple massive-training artificial neural network
  • CAD computer-aided diagnostic
  • Efforts in image processing and pattern recognition have been made to help improve the detection accuracy of physicians.
  • “False Positive Reduction in Mass Detection Approach using Spatial Diversity Analysis” (Geraldo Braz Junior et al., eTELEMED 2013: The Fifth International Conference on eHealth, Telemedicine, and Social Medicine) discusses spatial diversity indexes used as a texture measure to distinguish suspicious regions previously detected by a segmentation scheme.
  • The description of the pattern is based on the fact that the important features may be distributed over the region at certain distances, angles and tonalities.
  • Tonalities represent species whose association may provide important distinctions between the patterns of mass and non-mass regions, helping to reduce false positives and assisting a physician in the task of verifying suspicious regions on a mammogram.
  • The computed measures are classified with a Support Vector Machine, achieving a 75% reduction of false positives in the mass detection methodology.
  • CAD systems for detecting lesions in mammograms have been investigated because the computer can improve radiologists' detection accuracy (An Alternative Approach to Reduce Massive False Positives in Mammograms Using Block Variance of Local Coefficients Features and Support Vector Machine, M. P. Nguyen, Procedia Computer Science 20 (2013) 399 - 405).
  • The main problem encountered in the development of CAD systems is that a high number of false positives usually arises; this is particularly true in mass detection. Different methods have been proposed for this task, but the problem has not yet been fully solved.
  • The idea proposed in [7] lies in the use of Block Variation of Local Correlation Coefficients (BVLC) texture features to characterize detected masses; a Support Vector Machine (SVM) classifier is then used to classify them.
  • BVLC Block Variation of Local Correlation Coefficients
  • SVM Support Vector Machine
  • Patent application no. US 2010/0260390 A1 discloses a computer aided detection (CAD) method for detecting polyps within an identified mucosa layer of a virtual representation of a colon, which includes the steps of identifying candidate polyp patches in the surface of the mucosa layer and extracting the volume of each of the candidate polyp patches.
  • the extracted volume of the candidate polyp patches can be partitioned to extract a plurality of features, of the candidate polyp patch, which includes at least one internal feature of the candidate polyp patch.
  • the features can include density texture features, geometrical features, and morphological features of the polyp candidate volume.
  • the extracted features of the polyp candidates are analyzed to eliminate false positives from the candidate polyp patches. Those candidates which are not eliminated are identified as polyps.
  • FPs false positives
  • The target is to keep a high sensitivity (~100%) while the specificity can be low (FPs high), and in subsequent stages of the proposed cascaded multi-modular framework (refer Figure 1) the FPs are eliminated.
  • FIG. 1 shows a Schematic workflow of cascaded framework for FP reduction
  • FIG. 2 illustrates an anatomical region based weighted feature extraction
  • FIG. 3 illustrates the mixture of experts.
  • DESCRIPTION OF THE INVENTION
  • the invention is about a novel technique/method for reducing hard mimics or false-positives (FPs) or look-alikes of clinical patterns or features in medical images.
  • Clinical patterns are pathophysiological manifestations in the anatomical location for which the medical image is captured and analysed; these can be a lesion, a haemorrhage, exudates, cotton wool spots and several such features.
  • An intelligent mix of advanced image processing, computer vision, classical machine learning (ML) and deep learning (DL) is used to develop the AI-based automated diagnostic software, such that computational complexity and processing time are reduced, very large training data sets are not required, and there is no stringent hardware requirement, while still achieving high accuracy.
  • FIG. 1 illustrates the schematic framework.
  • The invention relates to a method to reduce look-alikes or hard mimics or false positives (FPs) of clinical patterns or abnormal manifestations in medical images, wherein the methodology comprises the following (an illustrative code sketch of the overall cascade is given at the end of this section):
  • each module means a set of methods or a computer program comprising a combination of image processing, pattern recognition, machine learning, deep learning and other artificial intelligence based techniques;
  • the first module is called a candidate generator and it recognizes/segments/delineates/selects probable regions of interest (ROIs) or candidates corresponding to clinical/pathological abnormal regions in a medical image.
  • ROIs probable regions of interest
  • candidates can be actual abnormal regions called True Positives (TPs) or look-alikes of abnormal regions called False Positives (FPs).
  • TPs True Positives
  • FPs False Positives
  • in each module, candidates are represented by various descriptive features generated by a computer system using image processing and pattern recognition techniques; - the second and subsequent modules involve a training phase followed by a testing phase. In the training phase a set of positive- and negative-class candidates is considered (usually 50% positive and 50% negative class for data balancing);
  • the newly labelled candidates are then used in the training phase to regenerate the classifier, and the process is repeated to label the remaining unlabelled candidates. This continues until only a few candidates are left whose assigned probabilities allow no decision to be taken; for these candidates a small labelling effort from experts is again required (a sketch of this semi-supervised labelling loop is given at the end of this section);
  • - false positives are consistently reduced in successive modules of the cascaded framework; - the purpose of the multi-modular framework is to successively reduce the false positives and thus the number of candidates under process; this entails complex feature descriptions of the candidates using more advanced image processing techniques of higher complexity, but with the reduced number of candidates the overall computational complexity is reduced; - an ensemble of classifiers is used in the modules.
  • the set of classifiers can be based on classical machine learning or deep learning. Inputs to the deep learning classifier system are both candidates and features extracted from these candidates;
  • anatomical-location-based classifiers are used for complex decision logic based on anatomical regions of the body part whose medical image is under analysis (a use case of such a body part is the retina (or fundus), with anatomical locations such as the optic disc, macula and vessels;
  • body part - lung or parenchyma
  • anatomical locations like collar bone, ribs, apical, basal, hilum and exterior pulmonary regions
  • haemorrhages in the retina have different sizes; for a large haemorrhage it is more likely that multiple candidates correspond to one haemorrhage than for small haemorrhages, so they are treated separately by constructing two classifiers based on haemorrhage size and a gating classifier); - each candidate is passed to all three branches and receives three probabilities, which are merged by a combination rule.
  • the rule can be Bayes, Dempster-Shafer or Product, as illustrated in the flowchart of Figure 3, to provide one final output, namely the probability of a candidate being a true positive (TP); - the final module is a deep learning framework wherein each TP candidate is resampled via scaling, translation and rotation to increase the variation of the training data and avoid overfitting. These multiple views of the candidates are used to train a deep Convolutional Neural Network (CNN) classifier. The CNN assigns probabilities to the random views of each candidate, and these probabilities are averaged per candidate to compute a final classification probability;
  • CNN Convolutional Neural Network
  • The clinical patterns are pathophysiological manifestations in the anatomical location for which the medical image is captured and analysed.
  • Medical images can be a single two-dimensional image, multiple images, or images of three or more dimensions comprising a plurality of image patches or sub-volumes.
  • A given medical image can be either Abnormal (Positive Class) or Normal (Negative Class).
  • An abnormal medical image can comprise one or more abnormal regions, represented by candidate(s).
  • Sensitivity is defined as the accuracy with which abnormal conditions or positive classes are identified while Specificity is the accuracy with which normal conditions or negative classes are identified.
  • The target is to keep a high sensitivity (~100%) while the specificity can be low (FPs high), and in subsequent stages of the proposed cascaded multi-modular framework (refer Figure 1) the FPs are eliminated.
  • each candidate is represented by a set of descriptive features based on color, texture, shape, statistical, geometrical and other properties which are implicitly or explicitly embedded in these ROIs.
  • The purpose of the multi-modular cascaded framework is to accommodate complex features while maintaining high classifier accuracy in terms of sensitivity and specificity.
  • Extracting descriptive information from the candidates is often very challenging and computationally intensive, involving complex mathematical operations, so it is not feasible to extract these complex features if the number of ROIs at any level is very high. As filtering through subsequent modules reduces the number of FPs, and thus the number of candidates to be processed, more and more complex features can be extracted for each candidate.
  • The deep learning classifier is not specific or limited to a CNN; it is one example of several possible solutions.
  • the first Module is the Candidate Generator (CG). This CG module generates the probable candidates or regions of interest (ROIs) corresponding to clinical/pathological abnormal regions. These ROIs can be actual abnormal regions called True Positives (TPs) or look-alikes of abnormal regions called False Positives (FPs).
  • A given medical image can be either Abnormal (Positive Class) or Normal (Negative Class). Sensitivity is defined as the accuracy with which abnormal conditions or positive classes are identified, while Specificity is the accuracy with which normal conditions or negative classes are identified. So, for high sensitivity and specificity, the number of TPs detected should be as high as possible and the number of FPs as low as possible.
  • The target is to keep a high sensitivity (~100%) while the specificity can be low (FPs high); the specificity is increased (meaning FPs are reduced) in subsequent stages of the proposed cascaded multi-modular framework.
  • The CG Module's output is defined by the set {Ri, Ni, Sni, Spi}.
  • The i-th Module's output is defined by the set {Ri, Ni, Sni, Spi}, i.e. the output ROIs, their number, and the sensitivity and specificity at module i.
  • The whole technique involves a training phase followed by a testing phase. In the training phase a set of positive- and negative-class ROIs (data) is considered (usually 50% positive and 50% negative class for data balancing).
  • The invention is based on a semi-supervised learning technique, where some of the data are labelled and some are not.
  • The motivation for semi-supervised learning is that supervised learning requires data class labels (ground truth) in the training phase, which are often costly to generate because they require manual intervention, whereas unlabelled data generally are not. So, in the proposed semi-supervised learning scheme only some of the modules require data labelling, while other modules do not require this class labelling or supervision.
  • Under supervised learning one is furnished with inputs (x1, x2, ..., xn) and outputs (y1, y2, ..., yn), and the task is to find a function or classifier model that approximates the mapping xi -> yi in a generalizable fashion.
  • the output could be a class label (in classification) or a real number (in regression)— these are the "supervision" in supervised learning.
  • In the case of unsupervised learning, one receives inputs (x1, x2, ..., xn) but neither target outputs nor rewards from the environment are provided. Based on the problem (classification or prediction) and background knowledge of the space sampled, one may use various methods: density estimation (estimating some underlying PDF for prediction), k-means clustering (classifying unlabelled real-valued data), k-modes clustering (classifying unlabelled categorical data), etc.
  • density estimation estimating some underlying PDF for prediction
  • k-means clustering classifying unlabeled real valued data
  • k-modes clustering classifying unlabeled categorical data
  • ROIs/candidates generated at the CG stage comprise both TPs and FPs; that is, some candidates that do not represent the positive class are identified as positive because their representation is similar to that of the positive class.
  • MACHINE LEARNING BASED CLASSIFIER
  • These candidates are then used as input for the second module (refer Figure 1).
  • a supervised learning mode entails that the class label of these ROIs is also provided. At these stages each ROI is represented by a set of descriptive features based on colour, texture, shape, statistical, geometrical and other properties which are implicitly or explicitly embedded in these ROIs.
  • Some of the modules in the proposed invention are similar to the second module. Inputs to these modules are the output ROIs from the preceding stages. At each of these modules, some descriptive features corresponding to each input ROI are extracted.
  • The idea of the multi-modular cascaded framework is to accommodate complex features while maintaining high classifier accuracy in terms of sensitivity and specificity. Extracting descriptive information from the ROIs is often very challenging and computationally intensive, involving complex mathematical operations, so it is not feasible to extract these complex features if the number of ROIs at any level is very high. As filtering through subsequent modules reduces the number of FPs, and thus the number of ROIs to be processed, more and more complex features can be extracted for each ROI.
  • a use case of such body part is the retina (or fundus) and anatomical locations are optic disc, macula, vessels.
  • Another use case is body part - lung (or parenchyma) and anatomical locations like collar bone, ribs, apical, basal, hilum and exterior pulmonary regions.
  • features are associated with weights depending on the probability of occurrence of abnormalities in these anatomical locations and also the probability of occurrence of FPs or abnormal look-alikes in these regions.
  • A use case: ROIs generated at the CG stage may fall on retinal vessel crossings, or even on the vessels themselves, and be wrongly classified as abnormal manifestations such as a haemorrhage (a bleed) or an aneurysm in the retina. So, in the proposed invention, the features of an ROI appearing on a vessel or a vessel crossing are associated with a lesser weight.
  • Pulmonary tuberculosis manifestation is more probable in the apical region (top of the lung) than in the hilum (vessel) region. Many times, ROIs generated at the CG stage fall in the hilum region; features corresponding to such ROIs are associated with a lesser weight.
  • These weighted features then allow the classifier to be biased towards TPs rather than FPs (a minimal sketch of this anatomical-region weighting is given at the end of this section).
  • The anatomical-region-based weighted feature framework is illustrated in Figure 2. The anatomical regions are automatically identified based on landmark detection and both simple and complex segmentation techniques (model-based, regression, etc.).
  • the abnormal manifestations in the images can be varied.
  • the variation can be in terms of probability of occurrence, size of the manifested area (lesions), shape of the lesions etc.
  • A use case can be: haemorrhages in the retina have different sizes. For a large-sized haemorrhage it is more likely to have multiple candidates corresponding to one haemorrhage than for small haemorrhages. So, we treat them separately by constructing two classifiers based on the size of the haemorrhage and a gating classifier.
  • i. Expert 1 is a classifier reporting the probability of ROIs/candidates being TP given that these are of Type1 (for example, of size 20-100 pixels).
  • ii. Expert 2 is a classifier reporting the probability of ROIs/candidates being TP given that these are of Type2 (for example, of size >100 pixels).
  • iii. The gating classifier is used to probabilistically split the TP candidate space into two sub-spaces, Type1 candidates versus Type2 candidates. iv. Each classifier computes probabilities for each ROI/candidate. Each candidate is passed to all three branches and receives 3 probabilities. These probabilities are merged by a combination rule (see the mixture-of-experts sketch at the end of this section).
  • The rule can be Bayes, Dempster-Shafer or Product, as illustrated in the flowchart of Figure 3, to provide one final output, namely the probability of a candidate being a true positive (TP).
  • each TP candidate is resampled via scaling, translations, and rotations to increase the variation of the training data and avoid overfitting.
  • These multiple views of candidates are used to train a deep Convolutional Neural Network (CNN) classifier.
  • The CNN assigns probabilities to the random views of each ROI; these probabilities are averaged per ROI to compute a final classification probability for each candidate (a minimal sketch of this multi-view averaging is given at the end of this section).
  • This stage is not specific or limited to a CNN classifier, it is one example of several solutions that are possible.
  • The multi-modular cascaded classifier behaves as a highly discriminating process to discard challenging FPs while still achieving high sensitivity and specificity. Furthermore, not all of the features, aspects and advantages are necessarily required to practice the present disclosure. Thus, while the above detailed description has shown, described, and pointed out novel features of the invention as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the apparatus or process illustrated may be made by those of ordinary skill in the technology without departing from the spirit of the invention. The inventions may be embodied in other specific forms not explicitly described herein. The disclosure described above is to be considered in all respects as illustrative only and not restrictive in any manner.
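
The overall cascade described in the bullets above can be summarised in a short illustrative sketch. This is a minimal outline, not the claimed implementation: the Candidate structure, the 0.5 decision threshold and the printed per-module statistics are assumptions introduced for illustration, with the per-module set {Ri, Ni, Sni, Spi} tracked as in the description.

```python
# Illustrative sketch of the cascaded multi-modular FP-reduction framework.
# Names, thresholds and data structures are assumptions, not from the specification.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Candidate:
    features: dict                  # descriptive features extracted so far
    label: Optional[bool] = None    # ground truth (True = TP); known only for training data

def module_stats(kept: List[Candidate], dropped: List[Candidate]) -> Tuple[float, float]:
    """Sensitivity (Sni) and specificity (Spi) of one module, on labelled candidates only."""
    tp_kept = sum(c.label is True for c in kept)
    tp_all = tp_kept + sum(c.label is True for c in dropped)
    fp_dropped = sum(c.label is False for c in dropped)
    fp_all = fp_dropped + sum(c.label is False for c in kept)
    sn = tp_kept / tp_all if tp_all else 1.0
    sp = fp_dropped / fp_all if fp_all else 1.0
    return sn, sp

def run_cascade(image, candidate_generator: Callable, modules: List[Callable]) -> List[Candidate]:
    """Module 1 generates ROIs at ~100% sensitivity; each later module rejects FPs."""
    candidates = candidate_generator(image)          # many TPs and FPs
    for classify in modules:                         # classify(c) returns P(c is a TP)
        kept = [c for c in candidates if classify(c) >= 0.5]
        dropped = [c for c in candidates if classify(c) < 0.5]
        sn, sp = module_stats(kept, dropped)         # module output {Ri, Ni, Sni, Spi}
        print(f"Ni={len(kept)}  Sni={sn:.2f}  Spi={sp:.2f}")
        candidates = kept                            # surviving ROIs feed the next module
    return candidates                                # remaining candidates reported as TPs
```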
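
The semi-supervised labelling loop (train on labelled candidates, pseudo-label the confident unlabelled ones, retrain, and defer the remaining ambiguous candidates to an expert) might look roughly as follows. The choice of an SVM and the 0.9/0.1 confidence thresholds are assumptions for illustration only.

```python
# Hedged sketch of the semi-supervised (self-training) labelling loop.
from sklearn.svm import SVC
import numpy as np

def self_train(X_labelled, y_labelled, X_unlabelled, hi=0.9, lo=0.1, max_rounds=10):
    X_l, y_l = np.asarray(X_labelled), np.asarray(y_labelled)
    X_u = np.asarray(X_unlabelled)
    for _ in range(max_rounds):
        clf = SVC(probability=True).fit(X_l, y_l)          # (re)train on current labels
        if len(X_u) == 0:
            break
        p = clf.predict_proba(X_u)[:, 1]                   # P(candidate is a TP)
        confident = (p >= hi) | (p <= lo)                  # pseudo-label confident candidates
        if not confident.any():
            break                                          # leftovers go to a human expert
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, (p[confident] >= hi).astype(int)])
        X_u = X_u[~confident]
    return clf, X_u                                        # classifier + still-ambiguous candidates
```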
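
A minimal sketch of the anatomical-region-based feature weighting follows, using the retina example. The region names and weight values are assumptions chosen only to illustrate the idea of down-weighting regions where look-alikes are common.

```python
# Illustrative anatomical-region-based feature weighting (values are assumptions).
import numpy as np

# Lower weight where FPs/look-alikes are common (vessels, vessel crossings),
# higher weight where true abnormalities are more probable.
REGION_WEIGHTS = {
    "macula": 1.0,
    "periphery": 0.9,
    "optic_disc": 0.6,
    "vessel": 0.3,
    "vessel_crossing": 0.2,
}

def weighted_features(features: np.ndarray, region: str) -> np.ndarray:
    """Scale an ROI's feature vector by the weight of the anatomical region it falls in."""
    return features * REGION_WEIGHTS.get(region, 1.0)

# Usage: feed the weighted vectors to the classifier so it is biased towards TPs.
roi_features = np.array([0.8, 0.1, 0.5])
x = weighted_features(roi_features, region="vessel_crossing")
```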
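
The mixture of experts for the haemorrhage-size example (two experts plus a gating classifier) could be sketched as below. The use of logistic regression and a gated weighted-sum combination are assumptions; the description equally allows Bayes, Dempster-Shafer or product rules as the combination step.

```python
# Sketch of the mixture-of-experts stage; X, y_tp and y_type are NumPy arrays.
from sklearn.linear_model import LogisticRegression

class MixtureOfExperts:
    def __init__(self):
        self.expert_small = LogisticRegression(max_iter=1000)  # Expert 1: Type1 (small) candidates
        self.expert_large = LogisticRegression(max_iter=1000)  # Expert 2: Type2 (large) candidates
        self.gate = LogisticRegression(max_iter=1000)          # gating: P(candidate is Type2)

    def fit(self, X, y_tp, y_type):
        # y_tp: 1 if the candidate is a TP, 0 if FP; y_type: 0 = Type1 (small), 1 = Type2 (large)
        self.expert_small.fit(X[y_type == 0], y_tp[y_type == 0])
        self.expert_large.fit(X[y_type == 1], y_tp[y_type == 1])
        self.gate.fit(X, y_type)
        return self

    def predict_proba_tp(self, X):
        p1 = self.expert_small.predict_proba(X)[:, 1]   # P(TP) from the Type1 expert
        p2 = self.expert_large.predict_proba(X)[:, 1]   # P(TP) from the Type2 expert
        g = self.gate.predict_proba(X)[:, 1]            # P(Type2 | candidate)
        # Gated combination; a Bayes or product rule could be substituted here.
        return (1.0 - g) * p1 + g * p2
```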
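
Finally, the multi-view resampling and probability averaging of the last module is sketched below. The augmentation ranges, the number of views and the Keras-style cnn.predict interface are assumptions for illustration.

```python
# Sketch of the final module: each candidate patch is resampled into random views
# (rotation / translation / scaling) and the CNN's probabilities are averaged.
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def _fit_to_shape(arr: np.ndarray, shape: tuple) -> np.ndarray:
    """Centre-crop or zero-pad `arr` so it matches `shape`."""
    out = np.zeros(shape, dtype=arr.dtype)
    src = tuple(slice(max((a - s) // 2, 0), max((a - s) // 2, 0) + min(a, s))
                for a, s in zip(arr.shape, shape))
    dst = tuple(slice(max((s - a) // 2, 0), max((s - a) // 2, 0) + min(a, s))
                for a, s in zip(arr.shape, shape))
    out[dst] = arr[src]
    return out

def random_views(patch: np.ndarray, n_views: int = 8, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    views = []
    for _ in range(n_views):
        v = rotate(patch, angle=rng.uniform(-20, 20), reshape=False, mode="nearest")
        v = shift(v, shift=rng.uniform(-3, 3, size=patch.ndim), mode="nearest")
        v = zoom(v, rng.uniform(0.9, 1.1), mode="nearest")   # scaling changes the array size
        views.append(_fit_to_shape(v, patch.shape))          # restore the original size
    return np.stack(views)

def candidate_probability(cnn, patch: np.ndarray) -> float:
    """Average the CNN's TP probability over the random views of one candidate."""
    views = random_views(patch)
    probs = cnn.predict(views[..., None])   # assumes a Keras-style model; add channel axis
    return float(np.mean(probs))
```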

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for reducing false positives (FPs)/hard mimics of clinical patterns in one or more medical images. The method comprises a multi-modular cascaded framework, wherein the cascade of modules successively reduces false positives by filtering through multiple modules, thereby reducing the number of regions of interest (ROIs) to be processed and increasing execution accuracy.
PCT/IB2019/055934 2018-07-11 2019-07-11 Framework for the reduction of false positives in medical images Ceased WO2020012414A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201811025961 2018-07-11
IN201811025961 2018-07-11

Publications (1)

Publication Number Publication Date
WO2020012414A1 true WO2020012414A1 (fr) 2020-01-16

Family

ID=69141689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/055934 Ceased WO2020012414A1 (fr) 2018-07-11 2019-07-11 Framework for the reduction of false positives in medical images

Country Status (1)

Country Link
WO (1) WO2020012414A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004074982A2 (fr) * 2003-02-14 2004-09-02 The University Of Chicago Training method for massive training artificial neural networks (MTANN) for detecting abnormalities in medical images
US20090175514A1 (en) * 2004-11-19 2009-07-09 Koninklijke Philips Electronics, N.V. Stratification method for overcoming unbalanced case numbers in computer-aided lung nodule false positive reduction
EP1815431B1 (fr) * 2004-11-19 2011-04-20 Koninklijke Philips Electronics N.V. False positive reduction in computer-aided detection (CAD) with new 3D features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DOU ET AL.: "Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection", vol. 64, no. 7, 26 September 2016 (2016-09-26), pages 1558 - 1567, XP055606004, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/abstract/document/7576695> *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652277A (zh) * 2020-04-30 2020-09-11 平安科技(深圳)有限公司 False positive filtering method, electronic device and computer-readable storage medium

Similar Documents

Publication Publication Date Title
Naeem et al. A CNN-LSTM network with multi-level feature extraction-based approach for automated detection of coronavirus from CT scan and X-ray images
Al-Masni et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system
Wang et al. Detecting cardiovascular disease from mammograms with deep learning
Blanc et al. Artificial intelligence solution to classify pulmonary nodules on CT
Deepa et al. A survey on artificial intelligence approaches for medical image classification
Santos et al. Automatic detection of small lung nodules in 3D CT data using Gaussian mixture models, Tsallis entropy and SVM
Asadi et al. Efficient breast cancer detection via cascade deep learning network
Fathy et al. A deep learning approach for breast cancer mass detection
Mohanty et al. Retracted article: An improved data mining technique for classification and detection of breast cancer from mammograms
Choukroun et al. Mammogram Classification and Abnormality Detection from Nonlocal Labels using Deep Multiple Instance Neural Network.
Shahangian et al. Automatic brain hemorrhage segmentation and classification in CT scan images
He et al. Fetal cardiac ultrasound standard section detection model based on multitask learning and mixed attention mechanism
Bhaskar et al. Pulmonary lung nodule detection and classification through image enhancement and deep learning
Harouni et al. Precise segmentation techniques in various medical images
Sameer et al. Brain tumor segmentation and classification approach for MR images based on convolutional neural networks
Diamant et al. Chest radiograph pathology categorization via transfer learning
Ganeshkumar et al. Unsupervised deep learning-based disease diagnosis using medical images
Siddiqui et al. Computed tomography image Processing methods for lung nodule detection and classification: a review
Malik et al. Lung cancer detection at initial stage by using image processing and classification techniques
Gomathi An effective classification of benign and malignant nodules using support vector machine
Ewaidat et al. Identification of lung nodules CT scan using YOLOv5 based on convolution neural network
PJ et al. Hybrid deep learning enabled breast cancer detection using mammogram images
WO2020012414A1 (fr) Structure pour la réduction de faux positifs dans des images médicales
Hesse et al. Primary Tumor Origin Classification of Lung Nodules in Spectral CT using Transfer Learning
Nagaraj et al. The role of pattern recognition in computer-aided diagnosis and computer-aided detection in medical imaging: a clinical validation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19835202

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19835202

Country of ref document: EP

Kind code of ref document: A1