WO2016032398A2 - Method and device for analysing an image
Method and device for analysing an image
- Publication number
- WO2016032398A2 (PCT/SG2015/050278)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- lesion
- segmentation
- skin
- feature
- Prior art date
- Legal status: Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
- A61B5/444—Evaluating skin marks, e.g. mole, nevi, tumour, scar
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/1032—Determining colour of tissue for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6887—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
- A61B5/6898—Portable consumer electronic devices, e.g. music players, telephones, tablet computers
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/726—Details of waveform analysis characterised by using transforms using Wavelet transforms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- This invention relates to a novel mobile imaging system.
- a smartphone imaging system that may be suitable for the early detection of melanoma.
- MM malignant melanoma
- BCC basal cell carcinoma
- SCC squamous cell carcinomas
- smartphones are equipped with multi-core CPUs and high resolution image sensors. All this creates the opportunity to use a smartphone to analyze a captured image for disease diagnosis and self-screening.
- an automatic melanoma detection system can be divided into three main stages of segmentation, feature extraction, and classification.
- Some algorithms have been investigated for dermoscopic images taken under well-controlled conditions, but little attention has been paid to smartphone-captured images taken under loosely-controlled lighting and focal conditions.
- the segmentation stage aims to determine the lesion region from captured images.
- There are several common methods to perform lesion segmentation [25], [2]: histogram thresholding, clustering, edge-based, region-based, and active contours. Among these, the histogram thresholding and region-based methods are most often used. Histogram thresholding methods use the image histogram to determine one or more intensity values for separating pixels into groups. The most popular thresholding method for lesion segmentation is Otsu's method [16]. Region-based methods form different regions by using region merge or region split operations.
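- As an illustration of the thresholding approach, below is a minimal sketch of Otsu's method in Python (NumPy only). The function name and the convention that darker pixels form the lesion are our own assumptions, not taken from the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold maximizing the between-class variance.

    gray: 2-D uint8 array (grayscale image).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()                 # normalized histogram
    omega = np.cumsum(prob)                  # class-0 probability per threshold
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # degenerate thresholds scored 0
    return int(np.argmax(sigma_b))

# Usage (assumed convention): pixels darker than the threshold become
# lesion candidates.
# t = otsu_threshold(gray); lesion_mask = gray < t
```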
- the feature extraction stage aims to extract features that describe the lesion.
- There are many methods proposed, such as pattern analysis, the Menzies method, the ELM 7-point checklist, etc. [24]. However, again, most of these methods are usually applied to images taken with a dermatoscope. For melanoma, the most important warning sign is a new or changing skin growth: a new growth, or a change in the color, size or shape of a spot on the skin.
- The American Academy of Dermatology promotes a simple method called "ABCDE" [15], corresponding to Asymmetry of lesion, Border irregularity, Color variation, Diameter and Evolving.
- There are many methods used in the literature to capture color variation, border irregularity and asymmetry, because computer-aided diagnosis systems usually perform diagnosis based on a single image; evolving features are generally not used. Reviews can be found in [14], [...].
- a method for analysing an image of a lesion on the skin of a subject comprising: (a) identifying the lesion in the image by differentiating the lesion from the skin; (b) segmenting the image; and (c) selecting a feature of the image and comparing the selected feature to a library of pre-determined parameters of the feature, wherein the feature of the lesion belongs to any one selected from the group: colour, border, asymmetry and texture of the image.
- the proposed system has two major components.
- the first component is a fast and lightweight segmentation algorithm for skin detection and accurate lesion localization.
- the second component, used to automatically assess the malignancy of the skin lesion image, incorporates new computational features to improve detection accuracy, new feature selection tools to enable on-device processing (no access to a remote server/database being required) and a combined classification model.
- the image is processed prior to identifying the lesion in the image.
- Such processing comprises down-sampling the image.
- segmenting the image further comprises a first segmenting and a second segmenting: the first segmenting determines an uncertain region on the image and the second segmenting refines the uncertain region to obtain segment boundary details.
- the first segmenting process may be a coarse segmenting to determine any uncertain regions of the image, and the second segmenting process may be a fine segmenting carried out on the coarse segmentation to refine the uncertain regions to obtain segment boundary details.
- Uncertain regions may be an image region in the original resolution image where pixel labels are uncertain after the first coarse segmentation. In an embodiment, the uncertain region is about +/- 2 pixels around the coarse segmentation region boundary.
- the second segmenting process to refine the uncertain region is carried out using a MST-based algorithm.
- each group is further divided into sub-groups and the feature selected is based on whether that feature is far from other features belonging to other sub-groups, but near to other features within the same sub-group.
- the lesion in the image is identified by comparing the colour of the skin to a library of pre-determined colours.
- segmenting the lesion further comprises removing segments of the lesion that are connected to the skin boundary.
- segmenting the image is a result of two segmentations: (a) a minimal intra-class-variance thresholding algorithm to locate smoothly-changing borders; and (b) a minimal- spanning-tree based algorithm to locate abruptly-changing borders.
- segmenting is carried out by a region-based method, i.e. grouping together neighbouring pixels with similar values and splitting groups of pixels with dissimilar values.
- method further comprises quantifying the colour variation and border irregularity of the image of the lesion.
- Colour variation may be quantified by (a) dividing the image into N partitions, each partition further divided into M subparts; (b) calculating an average pixel value for each subpart and assigning a vector to the subpart; and (c) determining the maximum distance between the vectors, wherein the value of N is any of 4, 8, 12 or 16; and the value of M is any of 2, 4 or 8.
- Irregularity of the border is determined by (a) providing lines along the border; (b) determining the angles between two adjacent lines; and (c) determining the average and variance of the angles, wherein the number of lines chosen is any of 8, 12, 16, 20, 24 or 28.
- the lesion is present in a tissue having a dermal-epidermal junction and an epidermal layer.
- the present method may differentiate between histological subtypes of cutaneous melanoma.
- the lesion is an acral lentiginous melanoma.
- the method further comprises acquiring the image on a computing device, with the analysis carried out on the same computing device.
- the image of the object is taken using a smartphone mobile device.
- unlike previous work, the images are not ELM images (epiluminescence microscopic or dermoscopic images), XLM (cross-polarization ELM) or TLM (side-transillumination ELM) images, which are captured in clinical environments with specialized equipment and skills.
- Images taken using a smartphone mobile device are simply visual images of the object "as is", i.e. topical appearance of the object. Using such images to evaluate the risk or likelihood of a disease or condition simply on the topical appearance of the object poses its own set of challenges which this invention seeks to overcome. These will be described in detail below.
- the method of the present invention may be used to evaluate the risk or likelihood of, or to diagnose, a disease or condition.
- the disease is melanoma.
- the system comprising: (a) an image capturing device for capturing the image of an object; and (b) a processor for executing a set of instructions stored in the device for analysing the image, the set of instructions includes a library of algorithms stored in the device to carry out a method according to the first aspect of the invention.
- the object is a lesion on a patient's body and the disease is melanoma.
- the device further comprising a graphical user interface for indicating to a user the results of the analysis.
- melanoma diagnosis schemes can be classified into the following classes: manual methods, which require the visual inspection of an experienced dermatologist, and automated (computer-aided) schemes that perform the assessment without human intervention.
- a different class, called hybrid approaches, can be identified when dermatologists jointly combine the computer-based result, context knowledge (e.g., skin type, age, gender) and their experience during the final decision.
- an automatic melanoma analysis system can be constructed in four main phases. The first phase is image acquisition, which can be performed through different devices such as a dermatoscope, spectroscope, standard digital camera or camera phone.
- the images acquired by these devices exhibit peculiar features and different qualities, which can significantly change the outcome of the analysis process.
- the second phase involves the skin detection, by removing artifacts (e.g., ruler, watch, hair, scar), and mole border localization.
- the third phase computes a compact set of discriminative features, describing the mole region.
- the fourth phase aims to build a classification model for the MM lesions based on the extracted features. It is worth pointing out that most existing approaches are mainly suitable for dermatoscopic or spectroscopic images and do not provide a complete solution that integrates both the segmentation and classification steps. Dermoscopic images are acquired under controlled clinical conditions by employing a liquid medium (or a non-polarized light source) and magnifiers.
- This type of image includes features below the skin surface which cannot be captured with standard cameras. Therefore, these settings limit the generality and availability of dermatoscopic and spectroscopic systems since they do not consider the lesion localization and, in some cases, apply a complicated set-up.
- mobile connected dermatoscopic devices exist, such as DermLite (3Gen Inc, CA, USA) and HandyScope (FotoFinder Systems, Bad Birnbach, Germany). Although usability and mobility are greatly increased, such an additional device is expensive and not accessible to everyone.
- the features used to accurately classify MM from dermatoscopic images are devised in such a way that they can describe dermatologist-observed characteristics such as color variation, border irregularity, asymmetry, texture and shape.
- a model-based classification of the global dermatoscopic patterns has been proposed. The method employs a finite symmetric conditional Markov model in the color space and the resulting parameters are treated as features.
- a computer-readable medium including executable instructions to carry out a method according to the first aspect of the invention.
- an embodiment of the present invention relates to a computer storage product with a computer-readable medium having computer code thereon for performing various computer- implemented operations.
- the media and computer code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the computer software arts.
- Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and execute program code, such as application-specific integrated circuits ("ASICs"), programmable logic devices ("PLDs") and ROM and RAM devices.
- Examples of computer code include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter.
- an embodiment of the invention may be implemented using Java, C++, or other object-oriented programming language and development tools.
- Another embodiment of the invention may be implemented in hardwired circuitry in place of, or in combination with, machine-executable software instructions.
- the present invention is designed for the early detection of skin cancer using mobile imaging and on-device processing.
- smartphone-captured skin mole images may be used together with a detection computation that resides on the smartphone.
- Smartphone-captured images taken under loosely-controlled conditions introduce new challenges for skin cancer detection, while on-device processing is subject to strict computation and memory constraints. To address these challenges and to achieve high detection accuracy, we propose a system design that includes the following novel elements:
- New features to mathematically quantify the color variation and border irregularity of the skin mole are specific for skin cancer detection and are suitable for mobile imaging and on-device processing.
- the invention detects the malignancy of a skin mole by measuring its concentric partitions' color uniformity, and the algorithms used by the present system and method quantify this.
- the present invention also uses a new border irregularity descriptor that is robust to image noise suitable for our image modality and capturing environment.
- A classifier array and a classification result fusion procedure to compute the detection results.
- the present invention uses an array of classifiers and results fusion that is more appropriate for this invention.
- ELM epiluminescence microscopy
- processing at remote servers has several issues: (i) Privacy is compromised; in particular, mole checking involves images of body parts; (ii) Resource planning and set-up of the server infrastructure is required; (iii) Network connectivity and transmission delay may affect the diagnosis.
- On-device processing solves these issues.
- the present system is designed to enable accurate on-device detection under strict computation and memory constraints.
- system and method of the present invention focuses on two new features which can efficiently describe the color variation and border irregularity of lesion.
- Feature selection algorithms can be divided into two categories according to their evaluation criteria: wrapper and filter [13].
- Wrapper approach uses the performance of a predetermined classifier to evaluate the goodness of features.
- the filter approach does not rely on any classifier. The goodness of features is evaluated based on the relevance between them and the class labels.
- We adopt the filter approach because it is very fast, which allows us to compare different methods.
- The filter approach is also more general than the wrapper approach because it does not involve any specific classifier.
- the relevance is usually characterized in terms of mutual information.
- the drawback of mutual information is that it only uses the probability of the variables while ignoring the coordinates of the variable values, which can help the classification.
- the final stage of automatic melanoma detection is to classify extracted features of lesions into either cancer or non-cancer.
- Many classification models can be used at this stage such as Support Vector Machine (SVM), nearest neighbor, discriminant analysis [14], [10], [2].
- Figure 1 is (a) a flowchart showing the segmentation procedure and (b) a block diagram of the coarse lesion localisation according to an embodiment of the present invention.
- Figure 2 shows the results of lesion segmentation according to an embodiment of the present invention.
- Figure 3 shows the quantification of color variation and border irregularity as carried out by a method according to an embodiment of the present invention.
- Figure 4 shows the classification accuracy with different numbers of selected color features, carried out by a classification method according to an embodiment of the present invention.
- Figure 5 shows an illustration of a screenshot of an application indicating results and diagnosis according to an embodiment of the invention.
- Figure 6 is a flow chart showing the system and method according to an embodiment of the present invention.
- Figure 7 shows two main concepts of the hierarchical segmentation: (a) the valid region of the ROIs together with the constraint used during the localization process and (b) the uncertainty problem of the border localization for a synthetic ROI.
- Figure 8 shows the LCF for two skin images converted to the gray scale and HSV color spaces.
- Figures (8a) and (8b) are for the benign nevus and figures (8d) and (8e) are for MM.
- the histograms shown in (8g)-(8j) count the number of pixel values in each bin.
- the black lines in (8c) and (8f) segment the lesion in partitions and subparts to calculate the CT feature.
- Figure 9 shows the segmentation evaluation for the Otsu (a), (b), the MST (c), (d), and the proposed (e), (f) methods.
- the green rectangle represents the GT
- the red rectangle denotes the SEG.
- the images under consideration are all MMs.
- Figure 10 shows a 2D visualization of the SVM output of the LCF after dimension reduction for SET1 (117 benign nevi and 67 MMs).
- Figure 11 shows the analysis of the effect of downsampling on image segmentation.
- Refinement interval R is constructed so that it overlaps with Bx.
- R and Bx are adjusted to take into account the effect of quantization.
- Tick marks on the horizontal axes depict sampling positions.
- Figure 12 shows the graph creation according to an embodiment of the present invention. (a) The pixels are represented as circle nodes;
- the colored nodes represent boundary certain pixels, and uncolored nodes represent uncertain pixels. (b) Virtual nodes are introduced and connected to the boundary certain nodes with (-1)-weight edges; V0 includes both certain nodes and uncertain nodes. (c) The final segmentation result is produced by tree merging using the MST edges (blue edges).
- Figure 13 shows the accuracy, time, and memory usage on the single object dataset.
- (a-c) use MST to segment downsampled image
- (d-f) use Ncut to segment downsampled image
- (g-i) use multiscale Ncut to segment downsampled image. Results are the average of all images in database.
- the x-axis represents the square of the scale factor, i.e., the ratio of the number of pixels in the downsampled image to that of the original image.
- Memory usage is reported using Valgrind whenever feasible. However, for the memory comparison in (c) (original MST vs. MST in our framework), the memory usage is too small to be reported accurately by Valgrind. Thus we analyse the program code to estimate the memory usage for this comparison.
- Figure 14 shows the segmentation results for sample images from the single object dataset.
- Row (a-b) use MST algorithm
- row (c-d) use Ncut algorithm
- row (e-f) use multiscale Ncut algorithm.
- in the first column, the segmentation is applied on the original image.
- in the second column, the segmentation is first applied on the downsampled image; then, our method is used for refinement.
- the third column shows the detail of the two segmentations.
- Figure 15 shows the accuracy, time, and memory performance on the BSDS500 dataset.
- (a-c) use MST to segment downsampled image
- (d-f) use Ncut to segment downsampled image
- (g-i) use multiscale Ncut to segment downsampled image.
- ODS optimal dataset scale
- Results are computed on the average of all images in database.
- the x-axis represents the square of the scale factor, i.e., the ratio of the number of pixels in the downsampled image to that of the original image.
- Figure 16 shows segmentation results for sample images from the BSDS500 dataset.
- Row (a-b) use MST algorithm
- row (c-d) use Ncut algorithm
- row (e-f) use multiscale Ncut algorithm.
- in the first column, the segmentation is applied on the original image.
- in the second column, the segmentation is first applied on the downsampled image; then, our method is used for refinement.
- the third column shows the detail of the two segmentations.
- Figure 17 shows the percentage of pixels in the uncertain area.
- the left figure (a) shows the percentage on single object dataset, and the right (b) on BSDS500 dataset.
- FIG. 6 is a flowchart setting out the general steps taken by the present method according to an embodiment of the present invention. The general steps are set out below:
- Figure 6 depicts the system design with the following processing stages:
- Pre-processing: Directly processing the smartphone-captured image is computationally and memory expensive, and exceeds the capability of mobile devices.
- the input image is down-sampled, an approximate mole location is determined using the down-sampled image, and a region enclosing the mole is determined and cropped from the high-resolution input image.
- the approximate mole location can be determined using a minimal intra-class-variance thresholding algorithm, and/or a minimal- spanning-tree based algorithm, on the down-sampled image.
- the cropped region will be processed in the subsequent stages.
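- The following is a minimal sketch of this pre-processing stage (down-sample, coarsely locate, crop) in Python with OpenCV. Otsu thresholding stands in for the thresholding/MST combination described above; the function name, working width and margin are illustrative assumptions.

```python
import cv2
import numpy as np

def crop_mole_region(image_bgr, work_width=256, margin=0.15):
    """Down-sample, coarsely locate the darkest blob, and crop the enclosing
    region from the original high-resolution image."""
    h, w = image_bgr.shape[:2]
    scale = work_width / float(w)
    small = cv2.resize(image_bgr, (work_width, max(1, int(round(h * scale)))),
                       interpolation=cv2.INTER_AREA)

    # Coarse localization on the down-sampled image; Otsu thresholding stands
    # in for the thresholding/MST combination described in the text.
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return image_bgr  # nothing found; fall back to the whole image

    # Map the bounding box back to full resolution, padded by a safety margin
    # to absorb the uncertainty introduced by down-sampling.
    x0, x1 = xs.min() / scale, xs.max() / scale
    y0, y1 = ys.min() / scale, ys.max() / scale
    pad_x, pad_y = margin * (x1 - x0), margin * (y1 - y0)
    x0, x1 = int(max(0, x0 - pad_x)), int(min(w, x1 + pad_x))
    y0, y1 = int(max(0, y0 - pad_y)), int(min(h, y1 + pad_y))
    return image_bgr[y0:y1, x0:x1]
```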
- Skin mole localization: It is challenging to achieve accurate segmentation of skin lesions from smartphone-captured images under loosely-controlled lighting and focal conditions.
- Feature computation: After localizing the skin mole, we characterize it by features belonging to four feature categories: color, border, asymmetry and texture. In addition, we propose a new color feature and a new border feature:
- a. New color feature: For an abnormal skin lesion, its color varies non-uniformly from the center to the border. We propose a new feature to capture this characteristic. The lesion is first divided into N partitions (N sectors of equal angle) and each partition is further divided into M subparts. After that, each partition is described by an M-component vector where each component is the average of the pixel values of a subpart. Finally, the maximum distance between the vectors quantifies the color variation. This feature is called the color triangle feature (a code sketch is given after this list).
- This proposed method is computed for the grayscale, red and hue channels of the lesion.
- Values of N are chosen from ⁇ 4, 8, 12, 16 ⁇ .
- values of M are chosen from ⁇ 2, 4, 8 ⁇ .
- Feature selection is used to determine the optimal parameters.
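- A minimal sketch of the color triangle feature, assuming the lesion is given as a single channel plus a binary mask, and assuming the M subparts are radial bands within each sector (a detail the text does not fix):

```python
import numpy as np

def color_triangle(channel, mask, n_partitions=8, m_subparts=4):
    """Color triangle feature: maximum distance between per-sector vectors of
    radial-band mean intensities (computed for one channel at a time)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                     # lesion centroid
    dy, dx = ys - cy, xs - cx
    angle = np.arctan2(dy, dx) + np.pi                # in [0, 2*pi)
    radius = np.hypot(dy, dx)
    r_max = radius.max() + 1e-9

    sector = np.minimum((angle / (2 * np.pi) * n_partitions).astype(int),
                        n_partitions - 1)
    band = np.minimum((radius / r_max * m_subparts).astype(int), m_subparts - 1)

    values = channel[ys, xs].astype(np.float64)
    vectors = np.zeros((n_partitions, m_subparts))
    for s in range(n_partitions):
        for b in range(m_subparts):
            sel = (sector == s) & (band == b)
            if sel.any():
                vectors[s, b] = values[sel].mean()    # average value of subpart

    # Maximum pairwise Euclidean distance between the sector vectors.
    diff = vectors[:, None, :] - vectors[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).max()
```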
- b. New border feature: We propose a new method, called border fitting, to quantify the irregularity of the lesion border (an irregular skin mole indicates an abnormal condition). First, the lesion border is approximated by piecewise straight lines. After that, the angles between every two adjacent straight lines are computed. The mean and variance of the angles are used to characterize the border irregularity. The number of lines is chosen from {8, 12, 16, 20, 24, 28}. Feature selection is used to determine the optimal parameter (a code sketch follows).
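- A minimal sketch of border fitting, assuming the piecewise fit is taken as chords between equally spaced contour points (a simplification of a least-squares line fit):

```python
import numpy as np

def border_fitting(contour, n_lines=16):
    """Approximate a closed contour by n_lines straight segments and return
    the mean and variance of the angles between adjacent segments.

    contour: (K, 2) array of (x, y) border points, ordered along the border.
    """
    # Pick n_lines anchor points equally spaced along the contour; the chords
    # between consecutive anchors form the piecewise straight-line fit.
    idx = np.linspace(0, len(contour), n_lines, endpoint=False).astype(int)
    anchors = contour[idx].astype(np.float64)
    seg = np.roll(anchors, -1, axis=0) - anchors       # direction of each segment

    # Angle between each pair of adjacent segments.
    a, b = seg, np.roll(seg, -1, axis=0)
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                            * np.linalg.norm(b, axis=1) + 1e-12)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return angles.mean(), angles.var()
```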
- Feature selection: Many features are extracted to describe the color, border or texture of a skin lesion. Likely, some are noise features. Also, redundancy between features may reduce the classification rate. Hence, a feature selection step, done in offline mode to select only good features, is necessary. Only the selected features will be used in the system to determine whether a lesion is cancer/non-cancer. Furthermore, feature selection has an important role in our mobile-based diagnosis system, where there are strict computational and memory constraints. Using a small number of features brings advantages such as reduced feature extraction time and storage requirements, reduced training and testing time, and reduced classifier complexity. Note that it has been recognized that combinations of individually good features do not necessarily lead to good classification performance. Finding the best feature subset with at most m features from a set of M features necessitates examining sum over i = 1..m of C(M, i) feature subsets.
- the method for early melanoma detection is based upon and extends the earlier work by determining the optimal color space for the segmentation of the skin lesion.
- the method also extends the analysis and evaluation of the early MM diagnosis system.
- a set of novel features are used to better classify the skin lesion images.
- a coarse model of the lesion is generated by merging different segmentation algorithms. Then, to outline the lesion contour, we employ a fine segmentation using the coarse segmentation result as input. From the final segmented region, we extract four feature categories which accurately characterize the lesion color, shape, border and texture. To classify the skin lesion, a classifier is built for each feature category and the final result is obtained by fusing their results.
- the present invention first starts out by localizing the skin lesion with a combination of fast skin detection and fusion of fast segmentation results.
- the segmentation process consists of two main steps.
- a mask of skin regions is generated using skin detection method.
- using the skin detection method, we discard pixels from non-skin regions to simplify the image for the subsequent processing steps.
- in the second step, we extract the lesion by using a combination of different segmentation methods.
- Fig. 1 shows the flowchart of the segmentation procedure and is described in detail below.
- the reason for doing skin detection first is to simplify the image, so an exact classification of skin and non-skin regions is not needed as long as we extract a simple foreground and keep the whole lesion region inside it.
- we use a skin color model to detect skin pixels [5].
- to build the model, we followed these steps: we first collected, from the Internet, a set of skin/non-skin images to construct our skin detection dataset. Skin images were selected with different skin colors and various lighting conditions for model generalization. The skin color distribution is estimated by a Gaussian mixture model, differently from what others have done, i.e. using an elliptical distribution. Since the skin mole we want to detect may not have a fully identified skin color, we use a filling method for all the holes inside the skin region.
- the skin color distribution is close to an elliptical distribution [11], so we detect skin pixels using an elliptical skin model in CbCr space [5], [11].
- As the skin mole we want to detect may not have skin color, we fill all the holes inside the skin region.
- Otsu's method is a general histogram thresholding method that can classify image pixels based on color intensity, but it may not detect clear edges in the image, for example the lesion boundary.
- Otsu's method is simple and takes much less time compared to other lesion segmentation methods [10].
- the MST method is a fast region-based graph method. It can run in nearly linear time in the number of pixels. It is sensitive to clear edges but may not detect smooth changes of color intensity.
- we use color features widely used in the literature, such as the mean and variance of pixel values on several color channels.
- the color channels used are gray scale; red, green, blue (from the RGB image); hue and value (from the HSV image).
- a histogram having 16 bins of pixel values in the lesion is computed and the number of non-zero bins is used as a feature. This method is also applied on the 6 channels mentioned above. The features obtained from these channels are called num_gray, num_red, num_green, num_blue, num_hue and num_value.
- we use shape features such as compactness, solidity, convexity, and the variance of distances from border points to the centroid of the lesion [14].
- (c) Asymmetry feature: To compute the asymmetry of the lesion shape, we follow the method in [2]. The major and minor axes (first and second principal components) of the lesion region are determined. The lesion is rotated such that the principal axes coincide with the image (x and y) axes. The object is hypothetically folded about the x-axis and the area difference (Ax) between the two parts is taken as the amount of asymmetry about the x-axis. The same procedure is performed for the y-axis (giving Ay). The asymmetry feature is then computed from Ax and Ay, normalized by the lesion area A.
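- A minimal sketch of this asymmetry computation on a binary mask; the final normalization (Ax + Ay) / (2A) is our assumption, since the exact formula is not reproduced in this text:

```python
import numpy as np

def asymmetry_feature(mask):
    """Fold the principal-axis-aligned lesion mask about each axis and measure
    the non-overlapping area, normalized by the lesion area (assumed formula)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()], axis=1).astype(np.float64)

    # Rotate so that the principal axes coincide with the image axes.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    rot = pts @ vt.T

    # Rasterize the rotated points into a centered grid to compare halves.
    grid = np.zeros((mask.shape[0] * 2, mask.shape[1] * 2), dtype=bool)
    gx = np.round(rot[:, 0] + grid.shape[1] / 2).astype(int)
    gy = np.round(rot[:, 1] + grid.shape[0] / 2).astype(int)
    grid[gy.clip(0, grid.shape[0] - 1), gx.clip(0, grid.shape[1] - 1)] = True

    area = grid.sum()
    ax = np.logical_xor(grid, np.flipud(grid)).sum() / 2.0  # fold about x-axis
    ay = np.logical_xor(grid, np.fliplr(grid)).sum() / 2.0  # fold about y-axis
    return (ax + ay) / (2.0 * area)
```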
- GLCM Gray Level Co-occurrence Matrix
- GLCM should be dense.
- the pixel values are quantized to 32 and 64 levels. This means we computed 8 texture features from the two GLCMs.
- To capture edge information in the lesion, we also use the Canny method to detect edges. The number of edge pixels is counted and normalized by the lesion area, and this number is used as a feature. In total, 9 features are extracted to describe the texture of the lesion (a code sketch follows).
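- A minimal sketch of the 9 texture features using scikit-image; computing the GLCM over the rectangular ROI rather than the exact lesion support is a simplifying assumption:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, canny

def texture_features(gray, mask):
    """8 GLCM features (contrast, energy, correlation, homogeneity at 32 and
    64 quantization levels) plus Canny edge density: 9 values in total."""
    feats = []
    for levels in (32, 64):
        q = (gray.astype(np.float64) / 256.0 * levels).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0],  # horizontal pairs
                            levels=levels, symmetric=False, normed=True)
        for prop in ("contrast", "energy", "correlation", "homogeneity"):
            feats.append(graycoprops(glcm, prop)[0, 0])

    edges = canny(gray / 255.0)                    # edge map of the ROI
    feats.append(edges[mask].sum() / mask.sum())   # edge density in lesion
    return np.array(feats)
```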
- the feature selection problem is to find a set S having k features (k < n) such that it maximizes the relevance between C and S.
- the relevance is usually characterized in terms of Mutual Information (MI) [17], [1], [7]. Because considering all possible subsets having k features requires C(n, k) evaluations, it is difficult to use exhaustive search to find the optimal subset.
- NMIFS Normalized Mutual Information Feature Selection
- NI is the normalized mutual information function and is defined as NI(X, Y) = I(X, Y) / min{H(X), H(Y)},
- where H is the entropy function. (From information theory, I(X, Y) >= 0; I(X, Y) <= 1 if X or Y is a binary variable; 0 <= NI(X, Y) <= 1.)
- MI is widely used in the feature selection problem to measure the relevance between variables.
- MI is a measure based on probability functions. It is independent of the coordinates of the variable values, which can be useful in a classification context. For example, in a two-category classification, suppose that the numbers of samples in each category are equal and there are two features f1, f2 which each perfectly separate the two categories. By Vapnik-Chervonenkis theory [21], the feature with the larger margin between the two categories will give a better generalization error; hence, it should be preferred over the other feature. However, using MI, it is easy to show that these features will have the same MI value with the class label (C), equal to 1.
- C class label
- the Fisher criterion may not be good in case (i) the distribution of the data in each class is not Gaussian; or (ii) the mean values of the classes are equal or approximately equal.
- N is the number of data points (samples); for each sample i, one set contains the most similar samples in the same class as i, and another contains the most similar samples in a different class (see the definitions accompanying eq. (9) below).
- α ∈ [0, 1] is a weight that regulates the importance of ANM. Note that MI is normalized to [0, 1] before computing eq. (5).
- the database includes 81 color images provided by the National Skin Center, Singapore. The numbers of cancer and non-cancer images are 29 and 52, respectively.
- FIG. 3 shows how the specific features of color variation and border irregularity are characterized and quantified.
- For an abnormal skin lesion, its color varies non-uniformly from the center to the border.
- the lesion is first divided into N partitions (N sectors of equal angle) and each partition is further divided into M subparts. After that, each partition is described by an M-component vector where each component is the average of the pixel values of a subpart. Finally, the maximum distance between the vectors quantifies the color variation. This feature is called the color triangle feature.
- This proposed method is computed for the grayscale, red and hue channels of the lesion.
- Values of N are chosen from ⁇ 4, 8, 12, 16 ⁇ .
- values of M are chosen from ⁇ 2, 4, 8 ⁇ .
- Feature selection is used to determine the optimal parameters.
- border fitting is used to quantify the irregularity of the lesion border (an irregular skin mole indicates an abnormal condition).
- the lesion border is first approximated by piecewise straight lines. After that, the angles between every two adjacent straight lines are computed. The mean and variance of the angles are used to characterize the border irregularity. The number of lines is chosen from {8, 12, 16, 20, 24, 28}. Feature selection is used to determine the optimal parameter.
- Table I shows the selected features in each category when the MI-based criterion and our criterion are used in feature selection.
- the classification accuracy is given in Table II.
- Fig. 4(b) shows the classification accuracy with different number of selected border features.
- the MI-based criterion achieves its highest accuracy, 74.27%, when only one feature is selected. Using the MI-based criterion, there is no chance to get a higher accuracy even when more features are added.
- the highest accuracy of the proposed criterion is 77.64% when 2 features are selected. From Table I, we can see that the border fitting features are selected for both the MI-based criterion and our criterion. This confirms the efficiency of the proposed border fitting features.
- Table II also shows the accuracy when the four classifiers corresponding to the 4 feature categories are combined by the sum rule.
- our criterion outperforms the MI-based criterion.
- the average accuracies of the MI-based criterion and our criterion are 90.09% and 93.61%, respectively.
- Our criterion also achieves a high accuracy (96.67%) for cancer samples. This is important in practice, where high-accuracy detection of cancer is required.
- Our segmentation process consists of two main steps.
- a mask of skin regions is generated using the skin detection method.
- in the second step, we extract the lesion by using a hierarchical segmentation method.
- the reason for applying a skin detection procedure first is to filter unwanted artifacts from the image, so an exact classification of skin/non-skin regions is not needed as long as we extract the foreground and keep the whole lesion region within it.
- we use an approach based on a skin color model to detect skin pixels. We choose this particular skin model since it is more discriminative, providing 32 skin and non-skin color maps of size 64x64x64 for each skin color.
- to build the model, we followed these steps: we first collected, from the Internet, a set of skin/non-skin images to construct our skin detection dataset. Skin images were selected with different skin colors and various lighting conditions for model generalization.
- the skin color distribution is estimated by a Gaussian mixture model, differently from what others have done, i.e. using an elliptical distribution. Since the skin mole we want to detect may not have a fully identified skin color, we use a filling method for all the holes inside the skin region.
- the skin lesion images are converted to the grayscale color space for the rest of the hierarchical segmentation.
- Coarse lesion localization: There are several common methods used to perform lesion segmentation: histogram thresholding, clustering, edge-based, region-based, and active contours. Histogram thresholding uses the image histogram to determine one or more intensity values for separating pixels into groups. The most popular thresholding method for lesion segmentation is Otsu's method, which is based on maximizing the between-class variance.
- Fig. 1(b) shows the flowchart of the coarse lesion localization procedure.
- Otsu's method is a general histogram thresholding method that can classify image pixels based on color intensity, but it may not detect clear edges in the image, for example the lesion boundary.
- Otsu's method is simple and takes much less time compared to other lesion segmentation methods.
- the MST method is a fast region-based graph method. It can run in nearly linear time in the number of pixels. It is sensitive to clear edges but may not detect smooth changes of color intensity. The parameters of the MST were adjusted such that we could get enough candidate ROIs while avoiding over-segmentation near the skin mole region.
- we use an efficient MST algorithm that can run at nearly linear time complexity in the number of pixels.
- n_ROI represents the total number of ROIs that are located in the valid region.
- the basic idea is to give central mole regions very high weights while penalizing mole regions near the boundary. When both the x and y coordinates of the mole centroid are close to the image center, then [equation] is close to 1.
- the power 4 in the formula decides the penalty.
- Fig. 7(a) shows the valid region for the ROIs and the constraint used in the localization process.
- the coarse segmentation algorithm is applied in the first instance on the low-resolution image acquired by the mobile device, due to scarce resources.
- in the second phase, after we obtain an approximate location of the lesion using the low-resolution image as reference, we crop the corresponding ROI from the original high-resolution image. Since downsampling is a nonlinear operation, this mapping is not exact and generates uncertainty in the contour localization.
- the border of a synthetic ROI, obtained after applying the coarse lesion segmentation, together with the actual contour are illustrated in Fig. 7(b).
- CT Color Triangle
- v_i is the vector describing the i-th partition.
- Fig. (8c) and Fig. (8f) show the benign and MM skin lesions partitioned, by black lines, into regions and subparts. As shown in the figures, the color variation is higher for the MM than the benign case, due to the growth patterns of the lesions.
- Border Fitting: To describe the irregularity of the border, we compute shape features such as compactness, solidity, convexity, and the variance of distances from border points to the lesion centroid. We also propose a new feature, called Border Fitting, to quantify the border irregularity. The main idea is to approximate the lesion contour points by lines and then to calculate the angles between these lines. Regular borders tend to have smooth changes without significant modifications between consecutive data points, compared with irregular ones.
- the estimation error y_i - y^_i is the difference between the predicted coordinates of the line segment and the true coordinates.
- the estimation error in terms of root-mean-square error is RMSE = sqrt((1/n) * sum_i (y_i - y^_i)^2).
- LAF Lesion Asymmetry Feature
- the lesion asymmetry can also reveal valuable information for the lesion categorization.
- the major and minor axes of the lesion region, i.e., the first and second principal components, are determined.
- the lesion is rotated such that the principal axes coincide with the image axes x and y.
- the object is hypothetically folded about the x-axis and the area difference, i.e., A_x, between the two parts is taken as the amount of asymmetry corresponding to the x-axis.
- A_x: the area difference between the two parts, taken as the amount of asymmetry corresponding to the x-axis.
- LTF Lesion Texture Feature
- GLCM gray level co-occurrence matrix
- LBP local binary patterns
- the GLCM of the entire lesion characterizes the texture by calculating how often pairs of pixels with specific brightness values and orientation occur in an image.
- GLCM-based texture description is one of the most well-known and widely used methods in the literature.
- the GLCM is constructed by considering each pair of adjacent pixels in the horizontal direction.
- the features extracted from GLCM used to describe the lesion are contrast, energy, correlation and homogeneity.
- the GLCM should be a dense matrix.
- the pixel values are quantized to 32 and 64 levels. This means we computed 8 texture features from the two quantized GLCMs.
- edge map structure of the lesion
- To capture the edge map (structure) of the lesion, we employed the Canny edge detector. The number of edge pixels is counted and normalized by the total lesion area, and the resulting number is used as an edge density feature.
- Another widely used texture descriptor that we employed for skin lesion analysis is LBP, which has shown promising results in many computer vision applications.
- LBP combines shape and statistical information via a histogram of LBP codes, which correspond to microstructures in the image at various scales.
- the LBP is a scale-invariant measure that describes the local structure in a 3 x 3 pixel block. The operator was further adapted to accommodate arbitrary block sizes, rotation invariance and multiresolution.
- mean(I) is the average value over the entire lesion image.
- P-1 bitwise shift operations are applied to the circular binary code (the rotation-invariant 'ri' mapping) and the smallest value is selected.
- a feature histogram is generated, measured with different radii for multiscale analysis (a code sketch follows).
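- A minimal sketch of the multiscale rotation-invariant LBP histograms using scikit-image; its 'ror' method implements the minimum-over-rotations mapping described above, and the radii/point counts mirror the parameter sets listed later:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histograms(gray, radii=(1, 2), points=(8, 16)):
    """Concatenated multiscale rotation-invariant LBP histograms."""
    feats = []
    for r, p in zip(radii, points):
        # 'ror': minimum over circular bit rotations (rotation invariant).
        codes = local_binary_pattern(gray, P=p, R=r, method="ror")
        hist, _ = np.histogram(codes, bins=np.arange(2 ** p + 1))
        feats.append(hist / hist.sum())    # normalized histogram per scale
    return np.concatenate(feats)
```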
- the feature selection problem is to find a set G ⊂ F (|G| < |F|) such that it maximizes the relevance between L and G.
- the relevance is usually characterized in terms of Mutual Information (MI). Considering all possible feature subsets requires an exhaustive search, which is not recommended for a large feature set.
- NMIFS Normalized Mutual Information Feature Selection
- the next feature f_i ∈ F \ G is chosen such that it maximizes the relevance of f_i with the target class L and minimizes the redundancy between it and the selected features in G.
- f_i is selected such that it maximizes the following criterion: I(f_i; L) - (1/|G|) * sum over g in G of NMI(f_i, g).
- MI is the mutual information, which measures the relevance between two random variables X and Y and is defined as I(X; Y) = sum over x, y of p(x, y) log( p(x, y) / (p(x) p(y)) ).
- NMI is the normalised mutual information and is defined as NMI(X, Y) = I(X; Y) / min{H(X), H(Y)}, where H denotes entropy.
- n°_i is the set of the most similar samples which are in the same class as i, and n^e_i is the set of the most similar samples which are not in the same class as i.
- α ∈ [0, 1] is a weight factor that controls the influence of ANM and MI in the proposed hybrid criterion. Note that, in order to have the same scale, we normalise Q to [0, 1] before computing (9).
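- A minimal sketch of the greedy NMIFS-style selection on discretized features; for brevity the ANM term of the hybrid criterion is omitted, so this sketch implements only the MI/NMI relevance-redundancy part:

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """MI between two discretized variables, from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def entropy(x, bins=16):
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def nmifs_select(features, labels, k):
    """Greedy selection: maximize relevance to the labels minus the mean
    normalized MI with the already-selected features."""
    n_feat = features.shape[1]
    relevance = [mutual_info(features[:, j], labels) for j in range(n_feat)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # Mean NMI redundancy with the features selected so far.
            redundancy = np.mean([
                mutual_info(features[:, j], features[:, s])
                / max(1e-12, min(entropy(features[:, j]),
                                 entropy(features[:, s])))
                for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```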
- kNN k nearest neighbor classifier
- the first round is a "Wizard-of-Oz" task consisting of three disease-diagnosis scenarios, which was used to engage participants in realistic self-diagnosis scenarios without limiting them to one disease.
- the second session is an exploratory study based on a prototype that implements our mobile skin cancer self-diagnosis algorithm. After participants completed the first round, we presented the prototype to them and illustrated the design ideas that we had already incorporated or may want to incorporate.
- the datasets used in this Example 2 to evaluate the proposed scheme come from the National Skin Center (NSC) of Singapore, and consist of 184 (the dataset called SET1) and 81 (called SET2, a subset of SET1) color images of skin mole lesions acquired by a professional photographer using a digital camera at different resolutions and sizes. Some of these images are challenging for segmentation and classification due to the acquisition conditions (such as lighting and focus) and the presence of other anatomical features (e.g., eye, eyebrow, nail, etc.) near the skin lesion.
- the image dataset SET1 is classified into two classes: benign nevus (117 images) and MM (67 images).
- the distribution of the classes for SET2 is benign nevi: 52 images, and MMs: 29 images.
- ALM acral lentiginous melanoma
- non-ALM non-acral lentiginous melanoma
- ALM are malignant skin lesions mostly found on palms, soles, under the nails and in the oral mucosa.
- the diagnoses of the melanoma cases were determined by histopathological examination or by the clinical agreement of several expert dermatologists from NSC. In order to obtain the ground truth (GT) ROI for each skin lesion, an expert manually annotated them.
- GT ground truth
- z-score: (f - μ)/σ, where μ and σ represent the mean and standard deviation of the feature vector f over the entire dataset.
- the multiclass SVM model is devised using a radial basis function (RBF) kernel.
- the kernel function of the SVM model is optimized using a grid search performed on a dataset of 25 randomly selected samples from SET1 (15 benign nevi and 10 MMs). In the grid search, optimal values of the kernel parameters (i.e., the cost and the free parameter of the SVM) are obtained by selecting various values of grid range and step size.
- TDR true detection rate
- Fig. 9 shows the segmentation results for two MM lesions. These are difficult cases where one of the algorithms fails to localize the lesion ROI (Otsu in Fig. 9(b), MST in Fig. 9(c)).
- the melanoma lesion shown in Fig. 9(a), Fig. 9(c) and Fig. 9(e) contains regions with different visual features located near the border. Furthermore, the illumination conditions change the color pattern of the skin and mole. This image was well segmented only by the Otsu method (Fig. 9(a)), while MST got trapped in one of the border regions of different color intensity.
- the novel feature selection tool is employed on the feature categories, i.e., 54 color features, 16 border features and 9 texture features. It is worth pointing out that we have not applied feature selection to the asymmetry category (since it contains only one feature) or to the LBPs, since the latter are already in a condensed binary format. In addition, applying feature selection to the high-dimensional LBP descriptors to learn the most dominant patterns would require a large dataset, which we were not able to obtain. In order to compute mutual information, features must first be discretized: the original interval of each feature is divided into a number of equal bins, and points falling outside the interval are assigned to the extreme left or right bin.
- n°_i and n^e_i are set to 50% of the number of samples of the class containing sample i, and α in (9) is set to 0.4.
- PA ⁇ 4, 8, 12, 16 ⁇
- SP ⁇ 2, 4, 8 ⁇ .
- nt ⁇ 8, 12, 16, 20, 24, 28 ⁇ .
- Table IV: The classification performance when the MI-based criterion and our criterion are used for feature selection.
- the Sens, Spec and balanced Acc are computed for each feature category.
- For the texture category we use a combination of the GLCM-based and edge density features. The values in bold correspond to the best performance.
- the mutual information criterion achieves its highest accuracy, 90%, when the number of selected color features equals 4.
- the highest accuracy of the proposed criterion is 92.09% when only 3 features are selected.
- the CT feature always appears among the selected features, which confirms the efficiency of the proposed feature.
- the mutual information criterion achieves its highest accuracy, 74.27%, when only one border feature is selected.
- the highest accuracy of the proposed criterion is 77.64% when 2 border features are selected. From Table III, we can see that the Border Fitting features are selected by both criteria. This demonstrates the efficiency of the proposed features.
- since the ROIs are color images, we applied the LBP-based texture descriptor on both the red channel and the converted grayscale image. Furthermore, we resized the ROIs to 256 x 256 pixels using cubic interpolation with anti-aliasing enabled.
- R ⁇ 1, 2 ⁇
- P ⁇ 8, 16 ⁇
- map: ri (rotation invariant)
- Table V: The classification performance of the proposed system on SET1. The accuracy is computed for each feature category and also for the combination of the four categories. The length of each feature category is also provided. The values in bold correspond to the best performance.
- The one defined by the optimal texture features obtained using our feature selection criterion (see Table III), which consists of GLCM-based features and edge density.
- LBP features can be considered a viable solution for the skin cancer image classification task.
- the difference between LBP and GLCM-derived features is in their length, which constitutes an important overhead for the overall system that needs to be deployed on a mobile platform.
- since the image taken by the mobile phone can be large, it is resized to 512 pixels on its longest edge, using cubic interpolation with anti-aliasing enabled, before being fed into the detection pipeline, in order to reduce the processing time and memory footprint.
- the average processing time for each image is less than 5 seconds. It is worth pointing out that the mobile phone implementation of the algorithm has not been optimized (at instruction level) or explicitly parallelized using the available GPUs.
- a "Wizard-Of-Oz" mobile interface was created with three “false” diagnosis tasks, namely, psoriasis test, skin-cancer test and skin-allergy test. Participants were required to use a self-diagnosis mobile task. By choosing a specific test, participants were required to take a photo of the arm skin, and then to lunch the diagnosis. A progress bar was used to indicate the processing time, which was deliberately set to one minute. The participants were told that they could terminate the diagnosis at any point during the processing time by clicking the "Stop" button.
- the trustworthiness of the diagnosis result stimulates the subjects to perform certain actions (e.g., booking a hospital visit).
- the refinement algorithm should have linear complexity in the number of image pixels.
- We cast the refinement as a problem of propagating label information into the uncertain region from the adjacent certain pixels.
- The first category is boundary detection-based approaches, which partition an image by discovering closed boundary contours.
- The second category is region-based approaches, which group together neighbouring pixels with similar values and split apart groups of pixels with dissimilar values.
- Our proposed method can be seen as a region-based approach. Hence, we present here a brief review of the state of the art in region-based approaches.
- A weight value W(i, j) measures the similarity between pixels i and j: the higher W(i, j), the more similar pixels i and j are. W can be computed using the location/illumination/texture information of the pixels.
- the graph-based methods can be further divided into two subcategories.
- The first subcategory uses global information for segmenting; these are usually graph cut-based methods such as Minimum cut, Normalized cut (Ncut) and variants of Ncut.
- the second subcategory uses local information for segmenting such as Minimum Spanning Tree (MST)- based segmentation methods [13], [14], [15], [5], [16].
- MST: Minimum Spanning Tree
- Graph cut-based methods try to segment an image by optimizing some well-defined global objective function. Wu and Leahy [11] defined a cut between two connected components A and B as the total weight of the edges removed to separate them: cut(A, B) = Σ_{u∈A, v∈B} W(u, v).
- Multiscale graph cut-based approaches. In earlier work, a sparse graph is first created, e.g. each pixel connects to its four nearest neighbors. To find the minimal Ncuts in the graph, the graph is recursively coarsened using a weighted aggregation procedure in which smaller and smaller sets of representative pixels are repeatedly selected. The goal of these coarsening steps is to produce progressively smaller graphs that still represent the same minimization problem well. Through this process, segments that are distinct from their environment emerge and are detected at their appropriate size scale. After constructing the entire pyramid and detecting segments at different levels of the pyramid, the pyramid is scanned from the top down, performing relaxation sweeps to associate each pixel with the appropriate segment. This earlier work showed that the running time of the algorithm is linear in the number of image pixels.
- Stacking the segmentation indicators of all S scales into X = [X_1; ...; X_S], the multiscale Ncut segmentation can be written in the following form: maximize ε(X) = (1/K) Σ_{l=1}^{K} (X_l^T W X_l) / (X_l^T D X_l), subject to the cross-scale consistency constraint CX = 0, where X_l denotes the l-th of the K partition indicator columns of X, W is the multiscale affinity matrix and D is its degree matrix.
- Two segments are repeatedly selected and considered for merging in a greedy way.
- The difference between two segments is the minimum weight of an edge connecting the two segments; the internal difference of a segment S is the largest edge weight in the MST of S.
- Two segments will be merged if the difference between them is less than or equal to the minimum of the internal differences of the two segments (a sketch of this merge test is given after the complexity discussion below).
- The authors showed that their method can produce segments that are neither too coarse nor too fine. Because only local information is used to decide whether an MST should be split or two segments should be merged, MST-based methods are usually sensitive to noise. An advantage of these methods, however, is that they are faster than graph cut-based methods [2].
- The most recent MST-based segmentation method, proposed in [5], can run with complexity O(n log n), where n is the number of image pixels. If the edge weights are integer values (e.g. the difference in intensity of pixels), their algorithm can run in O(n).
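For concreteness, here is a compact sketch of the merge test described above, in the style of Felzenszwalb and Huttenlocher; note that the published predicate adds a size-dependent slack tau(S) = k/|S| to each internal difference, exposed here as an optional parameter k (k = 0 gives the plain criterion stated above).

```python
def should_merge(diff_between, int_s1, size_s1, int_s2, size_s2, k=0.0):
    """diff_between: minimum weight of an edge connecting the segments;
    int_s1, int_s2: largest MST edge weight inside each segment."""
    return diff_between <= min(int_s1 + k / size_s1,
                               int_s2 + k / size_s2)
```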
- Downsampling reduces the sampling rate of the discrete signal x_d.
- Downsampling with a scaling factor λ, λ < 1, reduces the sampling frequency f_s to λf_s (the sampling interval becomes T/λ).
- Downsampling by a factor λ is usually implemented as a 2-step process: (i) first, the signal is passed into a low-pass filter (anti-aliasing filter) with cut-off frequency λπ, to remove the high-frequency components;
- (ii) then, the filtered signal is decimated by keeping only samples that are 1/λ apart.
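A minimal 1-D sketch of this 2-step procedure, assuming a Gaussian anti-aliasing filter (as in the analysis below) and an integer decimation factor; the choice of sigma_h is a free parameter, not a value from the original.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def downsample(x, factor, sigma_h=None):
    """Low-pass filter then decimate a 1-D signal by `factor` >= 2
    (scaling factor lambda = 1/factor)."""
    if sigma_h is None:
        sigma_h = factor / 2.0                    # heuristic filter width
    y = gaussian_filter1d(np.asarray(x, float), sigma_h)  # (i) low-pass
    return y[::factor]                  # (ii) keep every factor-th sample
```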
- The low-pass filtering in the first step smears the region boundary, which complicates the identification of the pixels whose labels are uncertain.
- Effect of downsampling on a ramp boundary:
- G(.) is a Gaussian function
- The signal would be low-pass filtered with the anti-aliasing cut-off frequency λπ in the first step of downsampling.
- A widely-used choice for the low-pass filter is the Gaussian filter.
- The Gaussian low-pass filter with the given cut-off frequency has an impulse response h(l) = (1/(√(2π) σ_h)) exp(−l² / (2σ_h²)).
- The low-pass filtered signal y(l) is the output of the convolution between x(l) and h(l): y(l) = (x ∗ h)(l).
- The low-pass filtered signal y(l) is illustrated in the bottom-left of Fig. 11. Note that σ_y = √(σ_x² + σ_h²) > σ_x, so the low-pass filtered boundary y(l) spreads over a larger extent compared to x(l).
- The refinement interval can be ±3 pixels around the coarse segmentation region boundary (after mapping back to the original image sampling grid). From (21), it is clear that R increases as the scale factor λ decreases: with more aggressive downsampling for coarse segmentation, more pixels need to be refined subsequently. Alternatively, since λ < 1 when we downsample, it can be shown that
- the refinement interval can be ±2 pixels around the coarse segmentation region boundary. Note that (21) computes the size (half-width) of the refinement interval w.r.t. the original image sampling grid, while (22) computes the size w.r.t. the downsampled image sampling grid.
- Roof boundary. Above we analyzed the effect of downsampling on a ramp boundary; here we give the analysis for another type of boundary, the roof boundary.
- the continuous roof boundary can be modeled mathematically by:
- G(.) is a Gaussian function
- the boundary steepness depends on x.
- MST-based method, in particular a Kruskal-like algorithm:
- The algorithm sorts all edges in the graph in non-decreasing weight order (line 1 of Algorithm 1).
- The algorithm first considers each node of the graph as an individual tree (lines 2 to 4). Then each edge of the graph is examined, in non-decreasing weight order, to check whether the two different trees connected by this edge should be merged. Two different trees will be merged if at least one of them does not contain a virtual node (line 8).
- The algorithm results in several disjoint trees, each containing exactly one virtual node. All nodes in each tree are labeled with the label of the virtual node belonging to that tree (lines 13 to 15). Thus, the algorithm is similar to Kruskal's algorithm; the major difference is that two different sets are not merged if both of them contain virtual nodes (line 8). Effectively, we never merge two sets having different labels.
- Constraint (i): because the edges connecting the certain pixels on the boundary to the virtual nodes have weight -1, the certain pixels are merged with the virtual node (of the same label) first. Hence, at the end of the algorithm, the certain pixels retain their original labels, and constraint (i) is satisfied. A minimal sketch of this label propagation is given below.
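The following is a minimal sketch of this Kruskal-like label propagation (Algorithm 1), assuming a union-find structure in which each set tracks the label of the virtual node it contains; node ids, the edge format and the helper name are illustrative, not from the original.

```python
# Kruskal-like label propagation with virtual nodes (sketch).
# edges: list of (weight, u, v); virtual_label maps a virtual node id
# to its segment label. Edges from certain pixels to virtual nodes
# carry weight -1, so they are processed (and merged) first.
def propagate_labels(n_nodes, edges, virtual_label):
    parent = list(range(n_nodes))
    label = [virtual_label.get(i) for i in range(n_nodes)]

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for w, u, v in sorted(edges):           # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # merge only if at least one tree has no virtual node yet
        if label[ru] is None or label[rv] is None:
            parent[ru] = rv
            if label[rv] is None:
                label[rv] = label[ru]

    return [label[find(i)] for i in range(n_nodes)]
```

At termination every tree contains exactly one virtual node, so each pixel inherits the label of the virtual node in its tree, as the algorithm above requires.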
- The first one is a single-object dataset, i.e., the ground truth images contain only foreground and background. This dataset contains 100 images along with ground truth segmentations.
- the second one is BSDS500 dataset with region benchmarks. BSDS500 contains 200 testing images.
- The ground truth for each image contains several boundary maps drawn by different people, which together form a soft boundary map used as the ground truth image.
- (21) and (22) are used to locate uncertain pixels around the coarse segmentation region boundaries. A pixel is marked as a segment boundary pixel if its label differs from the label of any of its 8-connected neighbors. A sketch of this step is given below.
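As an illustration, a hedged NumPy/SciPy sketch of this marking step: boundary pixels are detected by comparing each pixel's label against its 8-connected neighbours, and the uncertain band is then grown to the half-width R given by (21)/(22) via binary dilation. Note that the np.roll shifts wrap around at the image border, which a production implementation would handle explicitly.

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def uncertain_mask(labels, R):
    """labels: 2-D integer label map from the coarse segmentation;
    R: refinement half-width in pixels (from (21) or (22))."""
    boundary = np.zeros(labels.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(labels, dy, axis=0), dx, axis=1)
            boundary |= labels != shifted      # differs from an 8-neighbour
    eight = generate_binary_structure(2, 2)    # full 3x3 neighbourhood
    return binary_dilation(boundary, structure=eight, iterations=R)
```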
- The segmentation algorithms (MST, Ncut and multiscale Ncut) applied to the downsampled images, as well as our refinement method, are implemented in C and run in Matlab through mex files. Time is measured using a Matlab command. Memory usage is measured using Valgrind, a widely used profiling tool that can report memory usage.
- The Ncut and multiscale Ncut algorithms have only one parameter to adjust: the number of partitions of the image. For the Ncut algorithm, to make a fair comparison, we use 5 partitions, which gives the best performance on the original image; this setting is also used for the downsampled images. For multiscale Ncut, we likewise find that 5 partitions give the best performance on the original image and keep this setting for the downsampled images.
- Fig. 13(a-c) shows the accuracy, computation time, and memory usage when MST is used to segment the downsampled images.
- The downsampled segmentation error is smaller than the error on the original image. The reason is that when images are downsampled, noise may be reduced, so the segmentation can provide more accurate shape information.
- For λ² ∈ [0.0623, 0.3], i.e. a scale factor λ ∈ [0.25, 0.55], our method is comparable with MST in accuracy while requiring much less time and memory. The results show that, even for a very efficient algorithm like MST, time and memory can still be saved by using our proposed approach.
- the results (accuracy, time and memory) of our method and Ncut are shown in Fig. 13(d-f).
- FIG. 14 shows the results of two sample images from the database for subjective comparison.
- For MST, the boundaries produced by MST on the original image and by our method are smooth and almost the same.
- Row (c) shows that our method obtains almost the same result as Ncut.
- An interesting result can be found in row (d).
- Using Ncut on the original image produced a poor result (it missed the boundary on the bottom-right part of the object), while our method corrects this error. This is consistent with the results in Fig. 13(d), where our method gives a better F-score at some scale factors.
- From rows (e-f) we can see that the boundaries around the object produced by multiscale Ncut and by our method are smooth and almost the same.
- Covering: segmentation covering
- PRI: Probabilistic Rand Index
- VI: Variation of Information
- the Covering metric represents an evaluation of the pixel-wise classification task in recognition.
- PRI compares the compatibility of assignments between pairs of elements in the clusters.
- VI measures the distance between machine segmentation and ground-truth segmentation in terms of their average conditional entropy.
- ODS and OIS values are reported for the Covering, PRI and VI metrics.
- Optimal dataset scale (ODS) means that we use the same parameter setting for all images in the dataset to obtain the optimal segmentation result.
- Optimal image scale (OIS) means that we use the optimal parameter setting for each image in the dataset to obtain the optimal segmentation result. Best means that we find the image with the best segmentation result in the dataset and report the result of this single image. For the Covering and PRI metrics, a higher ODS (OIS, or Best) value indicates a better segmentation result, while for the VI metric, a lower ODS (or OIS) value indicates a better segmentation result.
- Region benchmarks of the BSDS500 dataset: Tables I-V show the region benchmarks of BSDS500 under a series of resize factors.
- The region-based metrics capture the accuracy of region-based segmentation from different aspects. The Covering metric is the best overlap ratio between the segmentation results and the ground truth. PRI is a metric that compares the segmentation results against several ground truth results. VI gives the information difference between the segmentation results and the ground truth. We evaluate our method with these metrics to give comprehensive results for region-based segmentation, as sketched below for VI.
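To make the VI metric concrete, here is a small NumPy sketch computing VI = H(S) + H(S') - 2I(S; S') from the joint label histogram of two segmentations (labels assumed to be non-negative integers); it is an illustrative implementation, not the benchmark code used in the experiments.

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of Information between two integer label maps."""
    a, b = seg_a.ravel(), seg_b.ravel()
    joint, _, _ = np.histogram2d(a, b,
                                 bins=(int(a.max()) + 1, int(b.max()) + 1))
    p = joint / joint.sum()                    # joint distribution
    pa, pb = p.sum(axis=1), p.sum(axis=0)      # marginals
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    mi = h(pa) + h(pb) - h(p)                  # mutual information
    return h(pa) + h(pb) - 2 * mi              # VI = H(a) + H(b) - 2I
```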
- Fig. 16 shows the results of two sample images from the database.
- For MST, the boundaries produced by MST on the original image and by our method are both smooth. In some regions our method ignores fine boundary details, but the larger boundaries of the main objects in the images are preserved.
- Row (c) shows that our method obtains almost the same result as Ncut.
- In row (d) the result of our method is slightly different from that of Ncut, but both results detect the main boundaries in the image.
- From rows (e-f) we can see that the boundaries around the object produced by multiscale Ncut and by our method are smooth and almost the same.
- Fig. 17 shows the percentage of pixels in the uncertain area. This percentage represents the ratio of pixels that need to be refined by our method. From Fig. 17(a) and Fig. 17(b) (computed on the single-object dataset and BSDS500, respectively), we can see that as the resize factor increases, the percentage drops, indicating that fewer pixels in the image need to be refined. For the MST algorithm, the percentage is higher than for the Ncut and multiscale Ncut algorithms.
- the present invention relates to a mobile imaging system for early diagnosis of melanoma.
- the invention relates to capturing images using a smartphone and having a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely-controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to strict computation and memory constraints.
- the invention includes a system that computes selected visual features from a user-captured skin lesion image, and analyzes them to estimate the likelihood of malignance, all on an off-the-shelf smartphone.
- The main characteristics of the proposed approach are: an efficient segmentation scheme combining fast skin detection with a multiscale lightweight segmentation; a new set of features which efficiently capture the color variation and border irregularity of the segmented lesions; and a hybrid criterion for selecting the most discriminative features.
- The experimental results prove the efficiency of the prototype in accurately segmenting and classifying skin lesions in camera-phone images.
- It could be employed by the general public for preliminary self-screening, or it could assist physicians (like a personal assistant) in the diagnosis. Whilst there has been described in the foregoing description preferred embodiments of the present invention, it will be understood by those skilled in the technology concerned that many variations or modifications in details of design or construction may be made without departing from the present invention.
- Roberto Battiti. Using mutual information for selecting features in supervised neural net learning. IEEE Transactions on Neural Networks, 5:537-550, 1994.
- Keinosuke Fukunaga. Introduction to Statistical Pattern Recognition (2nd ed.). Academic Press Professional, Inc., San Diego, CA, USA, 1990.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Surgery (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Physiology (AREA)
- Signal Processing (AREA)
- Psychiatry (AREA)
- Dermatology (AREA)
- Mathematical Physics (AREA)
- Fuzzy Systems (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Evolutionary Computation (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Image Analysis (AREA)
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2015307296A AU2015307296B2 (en) | 2014-08-25 | 2015-08-25 | Method and device for analysing an image |
| SG11201701478SA SG11201701478SA (en) | 2014-08-25 | 2015-08-25 | Method and device for analysing an image |
| EP15836149.3A EP3185758A4 (fr) | 2014-08-25 | 2015-08-25 | Procédé et dispositif d'analyse d'une image |
| US15/507,107 US10499845B2 (en) | 2014-08-25 | 2015-08-25 | Method and device for analysing an image |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SG10201405182WA SG10201405182WA (en) | 2014-08-25 | 2014-08-25 | Method and system |
| SG10201405182W | 2014-08-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016032398A2 true WO2016032398A2 (fr) | 2016-03-03 |
Family
ID=55400793
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SG2015/050278 Ceased WO2016032398A2 (fr) | 2014-08-25 | 2015-08-25 | Procédé et dispositif d'analyse d'une image |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US10499845B2 (fr) |
| EP (1) | EP3185758A4 (fr) |
| AU (1) | AU2015307296B2 (fr) |
| SG (3) | SG10201405182WA (fr) |
| WO (1) | WO2016032398A2 (fr) |
Cited By (32)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GR1009346B (el) * | 2017-12-08 | 2018-08-13 | Νικολαος Χρηστου Πετρελλης | Μεθοδος και φορητο συστημα κατηγοριοποιησης φωτογραφιων |
| EP3373204A1 (fr) * | 2017-03-08 | 2018-09-12 | Casio Computer Co., Ltd. | Appareil d'identification, procédé et programme d'identification |
| EP3563350A1 (fr) * | 2016-12-29 | 2019-11-06 | Universita' Degli Studi Di Padova | Procédé et dispositif de cartographie tridimensionnelle de la peau d'un patient pour la prise en charge du diagnostic du mélanome |
| CN110532969A (zh) * | 2019-09-02 | 2019-12-03 | 中南大学 | 基于多尺度图像分割的斜坡单元划分方法 |
| CN110598532A (zh) * | 2019-07-31 | 2019-12-20 | 长春市万易科技有限公司 | 一种树木病虫害监控系统及方法 |
| CN110600122A (zh) * | 2019-08-23 | 2019-12-20 | 腾讯医疗健康(深圳)有限公司 | 一种消化道影像的处理方法、装置、以及医疗系统 |
| US10531825B2 (en) * | 2016-10-14 | 2020-01-14 | Stoecker & Associates, LLC | Thresholding methods for lesion segmentation in dermoscopy images |
| CN112085743A (zh) * | 2020-09-04 | 2020-12-15 | 厦门大学 | 一种肾肿瘤的图像分割方法 |
| CN112489062A (zh) * | 2020-12-10 | 2021-03-12 | 中国科学院苏州生物医学工程技术研究所 | 基于边界及邻域引导的医学图像分割方法及系统 |
| CN113255704A (zh) * | 2021-07-13 | 2021-08-13 | 中国人民解放军国防科技大学 | 一种基于局部二值模式的像素差卷积边缘检测方法 |
| US11403069B2 (en) | 2017-07-24 | 2022-08-02 | Tesla, Inc. | Accelerated mathematical engine |
| CN114863140A (zh) * | 2022-03-01 | 2022-08-05 | 武汉理兆源信息科技有限公司 | 一种色纺织物图像相似度计算方法 |
| US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
| US11487288B2 (en) | 2017-03-23 | 2022-11-01 | Tesla, Inc. | Data synthesis for autonomous control systems |
| US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
| US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
| US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
| US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
| US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
| US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
| US11665108B2 (en) | 2018-10-25 | 2023-05-30 | Tesla, Inc. | QoS manager for system on a chip communications |
| US11681649B2 (en) | 2017-07-24 | 2023-06-20 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
| US11734562B2 (en) | 2018-06-20 | 2023-08-22 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
| US11748620B2 (en) | 2019-02-01 | 2023-09-05 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
| US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data |
| US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
| US11841434B2 (en) | 2018-07-20 | 2023-12-12 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
| US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
| US11893774B2 (en) | 2018-10-11 | 2024-02-06 | Tesla, Inc. | Systems and methods for training machine models with augmented data |
| US12014553B2 (en) | 2019-02-01 | 2024-06-18 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
| US12307350B2 (en) | 2018-01-04 | 2025-05-20 | Tesla, Inc. | Systems and methods for hardware-based pooling |
| US12462575B2 (en) | 2021-08-19 | 2025-11-04 | Tesla, Inc. | Vision-based machine learning model for autonomous driving with adjustable virtual camera |
Families Citing this family (107)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| SG10201405182WA (en) * | 2014-08-25 | 2016-03-30 | Univ Singapore Technology & Design | Method and system |
| US10275935B2 (en) | 2014-10-31 | 2019-04-30 | Fyusion, Inc. | System and method for infinite synthetic image generation from multi-directional structured image array |
| US10726560B2 (en) * | 2014-10-31 | 2020-07-28 | Fyusion, Inc. | Real-time mobile device capture and generation of art-styled AR/VR content |
| US10262426B2 (en) | 2014-10-31 | 2019-04-16 | Fyusion, Inc. | System and method for infinite smoothing of image sequences |
| CA2915650C (fr) * | 2014-12-25 | 2019-02-12 | Casio Computer Co., Ltd. | Dispositif d'aide au diagnostic pour lesions, procede de traitement d'image dans ce meme dispositif et support de stockage d'un programme associe a cette meme methode |
| KR101580075B1 (ko) * | 2015-01-23 | 2016-01-21 | 김용한 | 병변 영상 분석을 통한 광 치료 장치, 이에 이용되는 병변 영상 분석에 의한 병변 위치 검출방법 및 이를 기록한 컴퓨팅 장치에 의해 판독 가능한 기록 매체 |
| EP3268870A4 (fr) * | 2015-03-11 | 2018-12-05 | Ayasdi, Inc. | Systèmes et procédés de prédiction de résultats utilisant un modèle d'apprentissage de prédiction |
| US10147211B2 (en) | 2015-07-15 | 2018-12-04 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US11006095B2 (en) | 2015-07-15 | 2021-05-11 | Fyusion, Inc. | Drone based capture of a multi-view interactive digital media |
| US12261990B2 (en) | 2015-07-15 | 2025-03-25 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
| US10242474B2 (en) | 2015-07-15 | 2019-03-26 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US10222932B2 (en) | 2015-07-15 | 2019-03-05 | Fyusion, Inc. | Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations |
| US10852902B2 (en) | 2015-07-15 | 2020-12-01 | Fyusion, Inc. | Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity |
| US11095869B2 (en) | 2015-09-22 | 2021-08-17 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
| KR102490438B1 (ko) * | 2015-09-02 | 2023-01-19 | 삼성전자주식회사 | 디스플레이 장치 및 그 제어 방법 |
| US11783864B2 (en) | 2015-09-22 | 2023-10-10 | Fyusion, Inc. | Integration of audio into a multi-view interactive digital media representation |
| US10192129B2 (en) | 2015-11-18 | 2019-01-29 | Adobe Systems Incorporated | Utilizing interactive deep learning to select objects in digital visual media |
| US11568627B2 (en) | 2015-11-18 | 2023-01-31 | Adobe Inc. | Utilizing interactive deep learning to select objects in digital visual media |
| FR3046692B1 (fr) * | 2016-01-07 | 2018-01-05 | Urgo Recherche Innovation Et Developpement | Analyse numerique d'une image numerique representant une plaie pour sa caracterisation automatique |
| US10181188B2 (en) * | 2016-02-19 | 2019-01-15 | International Business Machines Corporation | Structure-preserving composite model for skin lesion segmentation |
| US10674953B2 (en) * | 2016-04-20 | 2020-06-09 | Welch Allyn, Inc. | Skin feature imaging system |
| US10304220B2 (en) * | 2016-08-31 | 2019-05-28 | International Business Machines Corporation | Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning |
| US10568695B2 (en) * | 2016-09-26 | 2020-02-25 | International Business Machines Corporation | Surgical skin lesion removal |
| US11202017B2 (en) | 2016-10-06 | 2021-12-14 | Fyusion, Inc. | Live style transfer on a mobile device |
| US10437879B2 (en) | 2017-01-18 | 2019-10-08 | Fyusion, Inc. | Visual search using multi-view interactive digital media representations |
| US20180227482A1 (en) | 2017-02-07 | 2018-08-09 | Fyusion, Inc. | Scene-aware selection of filters and effects for visual digital media content |
| WO2018209057A1 (fr) | 2017-05-11 | 2018-11-15 | The Research Foundation For The State University Of New York | Système et procédé associés à la prédiction de la qualité de segmentation d'objets dans l'analyse de données d'image copieuses |
| US10313651B2 (en) | 2017-05-22 | 2019-06-04 | Fyusion, Inc. | Snapshots at predefined intervals or angles |
| US11069147B2 (en) | 2017-06-26 | 2021-07-20 | Fyusion, Inc. | Modification of multi-view interactive digital media representation |
| SG10201706752XA (en) * | 2017-08-17 | 2019-03-28 | Iko Pte Ltd | Systems and methods for analyzing cutaneous conditions |
| EP3669375B1 (fr) * | 2017-08-18 | 2024-12-25 | The Procter & Gamble Company | Systèmes et procédés d'identification de taches hyperpigmentées |
| CN107563123A (zh) * | 2017-09-27 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | 用于标注医学图像的方法和装置 |
| WO2019070775A1 (fr) * | 2017-10-03 | 2019-04-11 | Ohio State Innovation Foundation | Système et procédé de segmentation d'image et d'analyse numérique pour notation d'essai clinique dans une maladie de la peau |
| AU2017415626B2 (en) * | 2017-10-17 | 2020-08-06 | Kronikare Pte Ltd | System and method for facilitating analysis of a wound in a target subject |
| US10990859B2 (en) * | 2018-01-25 | 2021-04-27 | Emza Visual Sense Ltd | Method and system to allow object detection in visual images by trainable classifiers utilizing a computer-readable storage medium and processing unit |
| CN119207765A (zh) * | 2018-02-02 | 2024-12-27 | 莫勒库莱特股份有限公司 | 用于伤口分析的计算机实现的方法 |
| US10592747B2 (en) | 2018-04-26 | 2020-03-17 | Fyusion, Inc. | Method and apparatus for 3-D auto tagging |
| US11244195B2 (en) * | 2018-05-01 | 2022-02-08 | Adobe Inc. | Iteratively applying neural networks to automatically identify pixels of salient objects portrayed in digital images |
| US10725629B2 (en) * | 2018-06-25 | 2020-07-28 | Google Llc | Identifying and controlling smart devices |
| US11315321B2 (en) * | 2018-09-07 | 2022-04-26 | Intel Corporation | View dependent 3D reconstruction mechanism |
| US10674152B2 (en) | 2018-09-18 | 2020-06-02 | Google Llc | Efficient use of quantization parameters in machine-learning models for video coding |
| US11025907B2 (en) * | 2019-02-28 | 2021-06-01 | Google Llc | Receptive-field-conforming convolution models for video coding |
| US10869036B2 (en) | 2018-09-18 | 2020-12-15 | Google Llc | Receptive-field-conforming convolutional models for video coding |
| US11282208B2 (en) | 2018-12-24 | 2022-03-22 | Adobe Inc. | Identifying target objects using scale-diverse segmentation neural networks |
| CN109529202B (zh) * | 2018-12-29 | 2023-04-07 | 佛山科学技术学院 | 一种激光祛斑的系统及方法 |
| US10803594B2 (en) * | 2018-12-31 | 2020-10-13 | Beijing Didi Infinity Technology And Development Co., Ltd. | Method and system of annotation densification for semantic segmentation |
| TWI769370B (zh) * | 2019-03-08 | 2022-07-01 | 太豪生醫股份有限公司 | 病灶偵測裝置及其方法 |
| US11263467B2 (en) * | 2019-05-15 | 2022-03-01 | Apical Limited | Image processing |
| US11475277B2 (en) * | 2019-05-16 | 2022-10-18 | Google Llc | Accurate and interpretable classification with hard attention |
| CN110222700A (zh) * | 2019-05-30 | 2019-09-10 | 五邑大学 | 基于多尺度特征与宽度学习的sar图像识别方法及装置 |
| CN110533644B (zh) * | 2019-08-22 | 2023-02-03 | 深圳供电局有限公司 | 一种基于图像识别的绝缘子检测方法 |
| CN110490210B (zh) * | 2019-08-23 | 2022-09-30 | 河南科技大学 | 一种基于紧致通道间t采样差分的彩色纹理分类方法 |
| US10764471B1 (en) * | 2019-09-27 | 2020-09-01 | Konica Minolta Business Solutions U.S.A., Inc. | Customized grayscale conversion in color form processing for text recognition in OCR |
| CN110764824A (zh) * | 2019-10-25 | 2020-02-07 | 湖南大学 | 一种gpu上的图计算数据划分方法 |
| US11250563B2 (en) * | 2019-10-31 | 2022-02-15 | Tencent America LLC | Hierarchical processing technique for lesion detection, classification, and segmentation on microscopy images |
| CN110956200A (zh) * | 2019-11-05 | 2020-04-03 | 哈尔滨工程大学 | 一种轮胎花纹相似度检测方法 |
| CN110866908B (zh) * | 2019-11-12 | 2021-03-26 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、服务器及存储介质 |
| BR112022011111A2 (pt) * | 2019-12-09 | 2022-08-23 | Janssen Biotech Inc | Método para determinar a gravidade da doença da pele com base na porcentagem de área de superfície corporal coberta por lesões |
| CN111144479A (zh) * | 2019-12-25 | 2020-05-12 | 福州数据技术研究院有限公司 | 基于图像处理的中医面色识别方法 |
| TWI743693B (zh) * | 2020-02-27 | 2021-10-21 | 國立陽明交通大學 | 良性腫瘤發展趨勢評估系統、其伺服計算機裝置及計算機可讀取的儲存媒體 |
| US11328406B2 (en) * | 2020-03-05 | 2022-05-10 | Siemens Aktiengesellschaft | System and method for automated microstructure analysis |
| US10811138B1 (en) * | 2020-03-11 | 2020-10-20 | Memorial Sloan Kettering Cancer Center | Parameter selection model using image analysis |
| CN111401275B (zh) * | 2020-03-20 | 2022-11-25 | 内蒙古工业大学 | 一种用于识别草地边缘的信息处理方法和装置 |
| CN111436962B (zh) * | 2020-04-13 | 2023-05-26 | 重庆工程职业技术学院 | 用于海量医学影像数据分布收集设备及其工作方法 |
| CN111507892B (zh) * | 2020-04-15 | 2022-03-15 | 广西科技大学 | 一种图像细化方法及系统 |
| CN111986210B (zh) * | 2020-07-29 | 2022-11-04 | 天津大学 | 一种医学影像小病灶分割方法 |
| US11335004B2 (en) | 2020-08-07 | 2022-05-17 | Adobe Inc. | Generating refined segmentation masks based on uncertain pixels |
| CN112698380A (zh) * | 2020-12-16 | 2021-04-23 | 南京大学 | 一种适用于强背景噪声下低能质子束的束流截面处理方法 |
| US11676279B2 (en) | 2020-12-18 | 2023-06-13 | Adobe Inc. | Utilizing a segmentation neural network to process initial object segmentations and object user indicators within a digital image to generate improved object segmentations |
| JP2024506297A (ja) * | 2021-02-04 | 2024-02-13 | ヒンジ ヘルス, インコーポレイテッド | 異なる解剖学的領域に関する療法のための患者中心筋骨格(msk)処置システムおよび関連付けられるプログラム |
| CN112907526B (zh) * | 2021-02-07 | 2022-04-19 | 电子科技大学 | 基于lbf的卫星望远镜镜片表面疵病检测方法 |
| US11875510B2 (en) | 2021-03-12 | 2024-01-16 | Adobe Inc. | Generating refined segmentations masks via meticulous object segmentation |
| US11798136B2 (en) | 2021-06-10 | 2023-10-24 | Bank Of America Corporation | Automated teller machine for detecting security vulnerabilities based on document noise removal |
| CN115482243A (zh) * | 2021-06-16 | 2022-12-16 | 西南科技大学 | 一种基于特征融合的肺实质自动分割方法及其实现方式 |
| US12207935B2 (en) * | 2021-07-02 | 2025-01-28 | The Board Of Trustees Of Western Michigan University | Quantitative image-based disorder analysis for early detection of melanoma type features |
| CN113724238B (zh) * | 2021-09-08 | 2024-06-11 | 佛山科学技术学院 | 基于特征点邻域颜色分析的陶瓷砖色差检测与分级方法 |
| CN113744255B (zh) * | 2021-09-08 | 2025-07-15 | 温州市人民医院 | 皮肤镜图像的分割方法、分割网络及分割网络构建方法 |
| US11900605B2 (en) | 2021-09-30 | 2024-02-13 | Merative Us L.P. | Methods and systems for detecting focal lesions in multi-phase or multi-sequence medical imaging studies |
| US12020400B2 (en) | 2021-10-23 | 2024-06-25 | Adobe Inc. | Upsampling and refining segmentation masks |
| US12465325B2 (en) | 2021-10-28 | 2025-11-11 | VisOvum Ltd. | Ultrasonic endocavitary imaging system and method |
| US12333714B2 (en) | 2021-12-09 | 2025-06-17 | Merative Us L.P. | Estimation of b-value in prostate magnetic resonance diffusion weighted images |
| CN114648494B (zh) * | 2022-02-28 | 2022-12-06 | 扬州市苏灵农药化工有限公司 | 基于工厂数字化的农药悬浮剂生产控制系统 |
| EP4496508A1 (fr) * | 2022-03-21 | 2025-01-29 | 360Medlink Inc. | Procédé et système de surveillance d'une zone de traitement d'un patient |
| CN114693703A (zh) * | 2022-03-31 | 2022-07-01 | 卡奥斯工业智能研究院(青岛)有限公司 | 皮肤镜图像分割模型训练、皮肤镜图像识别方法及装置 |
| CN114451870A (zh) * | 2022-04-12 | 2022-05-10 | 中南大学湘雅医院 | 色素痣恶变风险监测系统 |
| CN114882246B (zh) * | 2022-04-29 | 2025-05-02 | 浪潮(北京)电子信息产业有限公司 | 一种图像特征的识别方法、装置、设备和介质 |
| CN114581345B (zh) * | 2022-05-07 | 2022-07-05 | 广州骏天科技有限公司 | 一种基于自适应线性灰度化的图像增强方法及系统 |
| CN114998606B (zh) * | 2022-05-10 | 2023-04-18 | 北京科技大学 | 一种基于极化特征融合的弱散射目标检测方法 |
| GB202207799D0 (en) * | 2022-05-26 | 2022-07-13 | Moletest Ltd | Image processing |
| TR2022008605A2 (tr) * | 2022-05-26 | 2022-06-21 | Sakarya Ueniversitesi Rektoerluegue | Bi̇r yara taki̇p si̇stemi̇ |
| CN114882038B (zh) * | 2022-07-12 | 2022-09-30 | 济宁鸿启建设工程检测有限公司 | 一种建筑外墙保温类材料检测方法及检测设备 |
| CN115100201B (zh) * | 2022-08-25 | 2022-11-11 | 淄博齐华制衣有限公司 | 一种阻燃纤维材料的混纺缺陷检测方法 |
| WO2024044815A1 (fr) * | 2022-08-29 | 2024-03-07 | St Vincent's Institute Of Medical Research | Procédés de classification améliorés pour apprentissage automatique |
| CN116542987B (zh) * | 2023-04-19 | 2024-06-04 | 翼存(上海)智能科技有限公司 | 一种图像裁剪方法、装置、电子设备及存储介质 |
| CN116188880B (zh) * | 2023-05-05 | 2023-07-18 | 中国科学院地理科学与资源研究所 | 基于遥感影像和模糊识别的耕地分类的方法及其系统 |
| CN116894820B (zh) * | 2023-07-13 | 2024-04-19 | 国药(武汉)精准医疗科技有限公司 | 一种色素性皮肤病分类检测方法、装置、设备及存储介质 |
| US20250022161A1 (en) * | 2023-07-13 | 2025-01-16 | Flawless Holdings Limited | Differentiable maps for landmark localization |
| CN117197083B (zh) * | 2023-09-11 | 2025-01-28 | 苏州可帮基因科技有限公司 | 病理图像的质量管控方法及设备 |
| CN117408995B (zh) * | 2023-12-11 | 2024-05-24 | 东莞市时实电子有限公司 | 基于多特征融合的电源适配器外观质量检测方法 |
| US12217422B1 (en) * | 2024-02-02 | 2025-02-04 | BelleTorus Corporation | Compute system with skin disease identification mechanism and method of operation thereof |
| CN118429330B (zh) * | 2024-07-02 | 2024-09-06 | 深圳市家鸿口腔医疗股份有限公司 | 一种牙冠表面缺损检测方法及系统 |
| CN119323587B (zh) * | 2024-08-26 | 2025-11-18 | 科大讯飞华南人工智能研究院(广州)有限公司 | 一种医疗影像分割方法 |
| CN120014255A (zh) * | 2024-09-26 | 2025-05-16 | 自然资源部国土卫星遥感应用中心 | 一种光伏电站场景知识约束的光伏面板精细化识别分割方法 |
| CN119006833B (zh) * | 2024-10-25 | 2025-01-24 | 中国石油大学(华东) | 基于视觉大模型的遥感图像分割方法、系统、设备及介质 |
| CN120013906B (zh) * | 2025-01-22 | 2025-10-31 | 郑州大学第一附属医院 | 一种复杂主动脉病变智能分析方法、系统及存储介质 |
| CN119785976A (zh) * | 2025-03-10 | 2025-04-08 | 南昌大学第二附属医院 | 一种基于调q激光的色斑美容智能控制方法及系统 |
| CN120070899B (zh) * | 2025-04-28 | 2025-10-17 | 三亚中心医院(海南省第三人民医院、三亚中心医院医疗集团总院) | 一种用于皮肤镜图像分割模型的训练方法及系统 |
Family Cites Families (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5016173A (en) * | 1989-04-13 | 1991-05-14 | Vanguard Imaging Ltd. | Apparatus and method for monitoring visually accessible surfaces of the body |
| EP1011446A1 (fr) | 1997-02-28 | 2000-06-28 | Electro-Optical Sciences, Inc. | Systemes et procedes d'imagerie multispectrale et de caracterisation d'un tissu cutane |
| US8543519B2 (en) * | 2000-08-07 | 2013-09-24 | Health Discovery Corporation | System and method for remote melanoma screening |
| IL138123A0 (en) * | 2000-08-28 | 2001-10-31 | Accuramed 1999 Ltd | Medical decision support system and method |
| US7613335B2 (en) * | 2003-02-12 | 2009-11-03 | The University Of Iowa Research Foundation | Methods and devices useful for analyzing color medical images |
| WO2008064120A2 (fr) | 2006-11-17 | 2008-05-29 | Amigent, Inc. | Procédé pour afficher les mesures et les changements dans le temps des images de surface de la peau |
| US20090245603A1 (en) * | 2007-01-05 | 2009-10-01 | Djuro Koruga | System and method for analysis of light-matter interaction based on spectral convolution |
| US7894651B2 (en) | 2007-03-02 | 2011-02-22 | Mela Sciences, Inc. | Quantitative analysis of skin characteristics |
| US8213695B2 (en) * | 2007-03-07 | 2012-07-03 | University Of Houston | Device and software for screening the skin |
| US20080253627A1 (en) * | 2007-04-11 | 2008-10-16 | Searete LLC, a limited liability corporation of | Compton scattered X-ray visualization, imaging, or information provider using image combining |
| US20110040192A1 (en) * | 2009-05-21 | 2011-02-17 | Sara Brenner | Method and a system for imaging and analysis for mole evolution tracking |
| US8330807B2 (en) * | 2009-05-29 | 2012-12-11 | Convergent Medical Solutions, Inc. | Automated assessment of skin lesions using image library |
| WO2011087807A2 (fr) | 2009-12-22 | 2011-07-21 | Health Discovery Corporation | Système et procédé de dépistage de mélanome à distance |
| JP6211534B2 (ja) * | 2011-12-21 | 2017-10-11 | シャハーフ,キャサリン,エム. | 組織表面を整列させる病変を撮像するためのシステム |
| DE102012204063B4 (de) * | 2012-03-15 | 2021-02-18 | Siemens Healthcare Gmbh | Generierung von Visualisierungs-Befehlsdaten |
| EP2831813B1 (fr) | 2012-03-28 | 2019-11-06 | University of Houston System | Procédés et logiciel pour le criblage et le diagnostic de lésions de la peau et de maladies de plante |
| SG10201405182WA (en) * | 2014-08-25 | 2016-03-30 | Univ Singapore Technology & Design | Method and system |
| US9959486B2 (en) * | 2014-10-20 | 2018-05-01 | Siemens Healthcare Gmbh | Voxel-level machine learning with or without cloud-based support in medical imaging |
| WO2016151951A1 (fr) * | 2015-03-23 | 2016-09-29 | オリンパス株式会社 | Dispositif d'observation à ultrasons, procédé de fonctionnement de dispositif d'observation à ultrasons, et programme de fonctionnement de dispositif d'observation à ultrasons |
-
2014
- 2014-08-25 SG SG10201405182WA patent/SG10201405182WA/en unknown
-
2015
- 2015-08-25 WO PCT/SG2015/050278 patent/WO2016032398A2/fr not_active Ceased
- 2015-08-25 US US15/507,107 patent/US10499845B2/en not_active Expired - Fee Related
- 2015-08-25 SG SG11201701478SA patent/SG11201701478SA/en unknown
- 2015-08-25 AU AU2015307296A patent/AU2015307296B2/en not_active Ceased
- 2015-08-25 SG SG10201901655QA patent/SG10201901655QA/en unknown
- 2015-08-25 EP EP15836149.3A patent/EP3185758A4/fr not_active Withdrawn
Non-Patent Citations (2)
| Title |
|---|
| GAO J ET AL.: "A novel multiresolution color image segmentation technique and its application to dermatoscopic image segmentation", IMAGE PROCESSING, vol. 3, 10 September 2000 (2000-09-10), pages 408 - 411, XP010529490
| See also references of EP3185758A4 |
Cited By (56)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10531825B2 (en) * | 2016-10-14 | 2020-01-14 | Stoecker & Associates, LLC | Thresholding methods for lesion segmentation in dermoscopy images |
| EP3563350A1 (fr) * | 2016-12-29 | 2019-11-06 | Universita' Degli Studi Di Padova | Procédé et dispositif de cartographie tridimensionnelle de la peau d'un patient pour la prise en charge du diagnostic du mélanome |
| EP3373204A1 (fr) * | 2017-03-08 | 2018-09-12 | Casio Computer Co., Ltd. | Appareil d'identification, procédé et programme d'identification |
| AU2018200322B2 (en) * | 2017-03-08 | 2019-10-31 | Casio Computer Co., Ltd. | Identification apparatus, identification method and program |
| CN108573485B (zh) * | 2017-03-08 | 2022-10-14 | 卡西欧计算机株式会社 | 识别装置、识别方法以及程序存储介质 |
| CN108573485A (zh) * | 2017-03-08 | 2018-09-25 | 卡西欧计算机株式会社 | 识别装置、识别方法以及程序存储介质 |
| US11004227B2 (en) | 2017-03-08 | 2021-05-11 | Casio Computer Co., Ltd. | Identification apparatus, identification method and non-transitory computer-readable recording medium |
| US11487288B2 (en) | 2017-03-23 | 2022-11-01 | Tesla, Inc. | Data synthesis for autonomous control systems |
| US12020476B2 (en) | 2017-03-23 | 2024-06-25 | Tesla, Inc. | Data synthesis for autonomous control systems |
| US11681649B2 (en) | 2017-07-24 | 2023-06-20 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
| US12086097B2 (en) | 2017-07-24 | 2024-09-10 | Tesla, Inc. | Vector computational unit |
| US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
| US11403069B2 (en) | 2017-07-24 | 2022-08-02 | Tesla, Inc. | Accelerated mathematical engine |
| US12216610B2 (en) | 2017-07-24 | 2025-02-04 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
| US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
| GR1009346B (el) * | 2017-12-08 | 2018-08-13 | Νικολαος Χρηστου Πετρελλης | Μεθοδος και φορητο συστημα κατηγοριοποιησης φωτογραφιων |
| US12307350B2 (en) | 2018-01-04 | 2025-05-20 | Tesla, Inc. | Systems and methods for hardware-based pooling |
| US12455739B2 (en) | 2018-02-01 | 2025-10-28 | Tesla, Inc. | Instruction set architecture for a vector computational unit |
| US11797304B2 (en) | 2018-02-01 | 2023-10-24 | Tesla, Inc. | Instruction set architecture for a vector computational unit |
| US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
| US11734562B2 (en) | 2018-06-20 | 2023-08-22 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
| US11841434B2 (en) | 2018-07-20 | 2023-12-12 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
| US12079723B2 (en) | 2018-07-26 | 2024-09-03 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
| US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
| US12346816B2 (en) | 2018-09-03 | 2025-07-01 | Tesla, Inc. | Neural networks for embedded devices |
| US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
| US11983630B2 (en) | 2018-09-03 | 2024-05-14 | Tesla, Inc. | Neural networks for embedded devices |
| US11893774B2 (en) | 2018-10-11 | 2024-02-06 | Tesla, Inc. | Systems and methods for training machine models with augmented data |
| US11665108B2 (en) | 2018-10-25 | 2023-05-30 | Tesla, Inc. | QoS manager for system on a chip communications |
| US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
| US12367405B2 (en) | 2018-12-03 | 2025-07-22 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
| US12198396B2 (en) | 2018-12-04 | 2025-01-14 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
| US11908171B2 (en) | 2018-12-04 | 2024-02-20 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
| US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
| US12136030B2 (en) | 2018-12-27 | 2024-11-05 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
| US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
| US11748620B2 (en) | 2019-02-01 | 2023-09-05 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
| US12223428B2 (en) | 2019-02-01 | 2025-02-11 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
| US12014553B2 (en) | 2019-02-01 | 2024-06-18 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
| US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
| US12164310B2 (en) | 2019-02-11 | 2024-12-10 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
| US12236689B2 (en) | 2019-02-19 | 2025-02-25 | Tesla, Inc. | Estimating object properties using visual image data |
| US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data |
| CN110598532A (zh) * | 2019-07-31 | 2019-12-20 | 长春市万易科技有限公司 | 一种树木病虫害监控系统及方法 |
| CN110598532B (zh) * | 2019-07-31 | 2022-09-13 | 长春市万易科技有限公司 | 一种树木病虫害监控系统及方法 |
| CN110600122B (zh) * | 2019-08-23 | 2023-08-29 | 腾讯医疗健康(深圳)有限公司 | 一种消化道影像的处理方法、装置、以及医疗系统 |
| CN110600122A (zh) * | 2019-08-23 | 2019-12-20 | 腾讯医疗健康(深圳)有限公司 | 一种消化道影像的处理方法、装置、以及医疗系统 |
| CN110532969B (zh) * | 2019-09-02 | 2022-12-27 | 中南大学 | 基于多尺度图像分割的斜坡单元划分方法 |
| CN110532969A (zh) * | 2019-09-02 | 2019-12-03 | 中南大学 | 基于多尺度图像分割的斜坡单元划分方法 |
| CN112085743A (zh) * | 2020-09-04 | 2020-12-15 | 厦门大学 | 一种肾肿瘤的图像分割方法 |
| CN112489062A (zh) * | 2020-12-10 | 2021-03-12 | 中国科学院苏州生物医学工程技术研究所 | 基于边界及邻域引导的医学图像分割方法及系统 |
| CN112489062B (zh) * | 2020-12-10 | 2024-01-30 | 中国科学院苏州生物医学工程技术研究所 | 基于边界及邻域引导的医学图像分割方法及系统 |
| CN113255704A (zh) * | 2021-07-13 | 2021-08-13 | 中国人民解放军国防科技大学 | 一种基于局部二值模式的像素差卷积边缘检测方法 |
| CN113255704B (zh) * | 2021-07-13 | 2021-09-24 | 中国人民解放军国防科技大学 | 一种基于局部二值模式的像素差卷积边缘检测方法 |
| US12462575B2 (en) | 2021-08-19 | 2025-11-04 | Tesla, Inc. | Vision-based machine learning model for autonomous driving with adjustable virtual camera |
| CN114863140A (zh) * | 2022-03-01 | 2022-08-05 | 武汉理兆源信息科技有限公司 | 一种色纺织物图像相似度计算方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3185758A2 (fr) | 2017-07-05 |
| AU2015307296B2 (en) | 2020-05-21 |
| US10499845B2 (en) | 2019-12-10 |
| SG10201405182WA (en) | 2016-03-30 |
| SG10201901655QA (en) | 2019-03-28 |
| EP3185758A4 (fr) | 2018-10-24 |
| AU2015307296A1 (en) | 2017-03-23 |
| SG11201701478SA (en) | 2017-03-30 |
| US20170231550A1 (en) | 2017-08-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| AU2015307296B2 (en) | Method and device for analysing an image | |
| Li et al. | A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches | |
| Oliveira et al. | Computational methods for pigmented skin lesion classification in images: review and future trends | |
| Do et al. | Accessible melanoma detection using smartphones and mobile image analysis | |
| Wang et al. | Automatic cell nuclei segmentation and classification of cervical Pap smear images | |
| US11875479B2 (en) | Fusion of deep learning and handcrafted techniques in dermoscopy image analysis | |
| Xu et al. | Automated analysis and classification of melanocytic tumor on skin whole slide images | |
| Zortea et al. | A simple weighted thresholding method for the segmentation of pigmented skin lesions in macroscopic images | |
| Doyle et al. | Cascaded discrimination of normal, abnormal, and confounder classes in histopathology: Gleason grading of prostate cancer | |
| Xu et al. | Automatic nuclei detection based on generalized laplacian of gaussian filters | |
| Do et al. | Early melanoma diagnosis with mobile imaging | |
| Pan et al. | Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks | |
| Cavalcanti et al. | Macroscopic pigmented skin lesion segmentation and its influence on lesion classification and diagnosis | |
| Taha et al. | Automatic polyp detection in endoscopy videos: A survey | |
| Hao et al. | Breast cancer histopathological images recognition based on low dimensional three-channel features | |
| Xu et al. | Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients | |
| Sarwar et al. | Segmentation of cervical cells for automated screening of cervical cancer: a review | |
| Vocaturo et al. | On the usefulness of pre-processing step in melanoma detection using multiple instance learning | |
| Pennisi et al. | Melanoma detection using delaunay triangulation | |
| George et al. | Automatic psoriasis lesion segmentation in two-dimensional skin images using multiscale superpixel clustering | |
| Sreelatha et al. | A survey work on early detection methods of melanoma skin cancer | |
| Alzamili et al. | A comprehensive review of deep learning and machine learning techniques for early-stage skin cancer detection: Challenges and research gaps | |
| Figueiredo et al. | Unsupervised segmentation of colonic polyps in narrow-band imaging data based on manifold representation of images and Wasserstein distance | |
| Fauzi et al. | Segmentation and management of chronic wound images: A computer-based approach | |
| Wu et al. | Automatic skin lesion segmentation based on supervised learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15836149 Country of ref document: EP Kind code of ref document: A2 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2015307296 Country of ref document: AU Date of ref document: 20150825 Kind code of ref document: A |
|
| REEP | Request for entry into the european phase |
Ref document number: 2015836149 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2015836149 Country of ref document: EP |