
WO2020242341A1 - Method for isolating and classifying blood cell types using deep convolutional neural networks - Google Patents

Method for isolating and classifying blood cell types using deep convolutional neural networks

Info

Publication number
WO2020242341A1
Authority
WO
WIPO (PCT)
Prior art keywords
blood cells
image
convolutional neural
blood
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/RU2019/000687
Other languages
English (en)
Russian (ru)
Inventor
Александр Михайлович ГРОМОВ
Вадим Сергеевич КОНУШИН
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
"lab Kmd" LLC
Original Assignee
"lab Kmd" LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by "lab Kmd" LLC filed Critical "lab Kmd" LLC
Publication of WO2020242341A1
Anticipated expiration
Current legal status: Ceased

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/69 - Microscopic objects, e.g. biological cells or cellular parts

Definitions

  • This technical solution relates in general to the field of computing and medicine, and in particular to a method for the isolation and classification of blood cell types using deep convolutional neural networks.
  • Automated processing and analysis of medical images is a universal tool for medical diagnostics.
  • The classification of blood cells in a microscopic image is, in computer vision terms, an object recognition task.
  • Blood is a complex functional system that provides timely delivery of oxygen and nutrients to tissue cells and the removal of metabolic products from organs and interstitial spaces.
  • The blood system reacts subtly to the effects of environmental factors with a set of specific and non-specific components.
  • An important characteristic of the physiology and pathology of the blood system is the quantitative and qualitative composition of the erythrocyte population.
  • The technical problem to be solved by the claimed technical solution is the creation of a computer-implemented method for the isolation and classification of blood cell types using deep convolutional neural networks, which is characterized in the independent claim. Additional embodiments of the present invention are presented in the dependent claims.
  • The technical result consists in the automatic detection and classification of blood cell types using deep convolutional neural networks.
  • The specified technical result is achieved through a computer-implemented method for the isolation and classification of blood cell types using deep convolutional neural networks, which comprises performing steps at which:
  • normal blood cells are isolated and cropped from the image, and border blood cells are excluded from further analysis;
  • blood cells are classified by type, whereby: a set of images is obtained for each cropped blood cell image using the augmentation method; the set of images obtained for each cell is analyzed and, based on this set, each blood cell is classified by type. An illustrative sketch of this flow is given below.
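  • For orientation only, the following Python sketch shows how the steps above could fit together; the function names (detect_cells, classify_cell, isolate_and_classify), the Detection structure, and the border test are illustrative assumptions and not part of the claims:

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Detection:
    x: int       # upper-left corner, pixels
    y: int
    w: int       # width, pixels
    h: int       # height, pixels

def touches_border(d: Detection, img_h: int, img_w: int) -> bool:
    # A cell whose bounding rectangle touches the image edge is a border cell.
    return d.x <= 0 or d.y <= 0 or d.x + d.w >= img_w or d.y + d.h >= img_h

def isolate_and_classify(image: np.ndarray,
                         detect_cells: Callable[[np.ndarray], List[Detection]],
                         classify_cell: Callable[[np.ndarray], str]) -> List[str]:
    img_h, img_w = image.shape[:2]
    detections = detect_cells(image)                  # step 102: detect blood cells
    results = []
    for d in detections:
        if touches_border(d, img_h, img_w):           # step 103: border cells ...
            continue                                  # ... are excluded from further analysis
        crop = image[d.y:d.y + d.h, d.x:d.x + d.w]    # step 104: cut the normal cell out
        results.append(classify_cell(crop))           # step 105: classify by type (with TTA)
    return results
```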
  • Each blood cell detection is defined by the coordinates of the upper-left corner and the width and height of the cell.
  • Normal blood cells are selected by the coordinates of their bounding rectangle.
  • A deep convolutional neural network is pre-trained on two datasets: ImageNet 22k and Places365.
  • A single-stage RetinaNet detector is used to detect blood cells in an image.
  • FIG. 1 illustrates a computer-implemented method for the isolation and classification of blood cell types using deep convolutional neural networks
  • FIG. 2 illustrates a block diagram of the claimed solution
  • FIG. 3 illustrates a detailed description of the detector architecture
  • FIG. 4 illustrates an example of FPN construction
  • FIG. 5 illustrates an example of generating anchors
  • FIG. 6 illustrates an example of visualization of the operation of the augmentation method
  • FIG. 7 illustrates an example of a general arrangement of a computing device.

DETAILED DESCRIPTION OF THE INVENTION
  • This technical solution can be implemented on a computer in the form of an automated system (AS) or a computer-readable medium containing instructions for performing the above method.
  • AS - automated system.
  • The technical solution can be implemented as a distributed computer system.
  • CNN - convolutional neural network.
  • A deep learning system can combine supervised and unsupervised learning algorithms; the search for cells and their subsequent classification use supervised learning.
  • Deep neural networks are currently becoming one of the most popular machine learning methods. They show better results compared to alternative methods in areas such as speech recognition, natural language processing, computer vision, medical informatics, etc.
  • One of the reasons for the successful application of deep neural networks is that the network automatically extracts from the data the important features necessary to solve the problem.
  • Augmentation (test-time augmentation, TTA) - transformation of images: rotations, compression, adding noise, magnification, data augmentation, resizing, changing colors, changing the scale, cropping. This is a way to increase the quality of the classifier by averaging the predictions over the original image and its augmented versions.
  • Blood cells are cells that make up the blood and are formed in the red bone marrow during hematopoiesis. There are three main types of blood cells: erythrocytes (red blood cells), leukocytes (white blood cells), and platelets (thrombocytes).
  • Diagnostics plays an important role in medicine. A timely, accurate diagnosis facilitates the choice of a treatment method and significantly increases the likelihood of a patient's recovery.
  • The use of neural networks is one of the ways to improve the efficiency of medical diagnostics.
  • The present invention is directed to providing a computer-implemented method for isolating and classifying blood cell types using deep convolutional neural networks.
  • The recognition of pathological cells can be divided into two stages: detection of cells and classification of cells.
  • The claimed computer-implemented method for isolating and classifying blood cell types using deep convolutional neural networks (100) is implemented as follows. In step (101), an image containing blood cells is obtained.
  • In step (102), blood cells are detected in the obtained image.
  • The detection of each blood cell is characterized by four numbers: the coordinates of the upper-left corner and the width and height of the cell, all measured in pixels.
  • The basis of this network is the MobileNet-128 network (the architecture of MobileNet-128 is described in the article "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", https://arxiv.org/abs/1704.04861); this network has a more compact structure, which makes it possible to obtain smaller output files for the trained network.
  • RetinaNet is a single unified network consisting of a main neural network (NN) for feature extraction and two auxiliary networks for specific tasks (an example of the RetinaNet architecture is given in the article "Focal Loss for Dense Object Detection", https://arxiv.org/abs/1708.02002).
  • The main neural network is responsible for computing the feature map of objects over the entire input image and is an independent convolutional network.
  • The first auxiliary NN (the classification NN) performs classification on the output of the main NN; the second auxiliary NN (the localization NN) performs convolutional regression of the bounding box.
  • The Feature Pyramid Network (FPN) architecture is used to generate spatial feature maps.
  • FPN - Feature Pyramid Network.
  • The Focal Loss function described in the article "Focal Loss for Dense Object Detection" (https://arxiv.org/abs/1708.02002) is used.
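  • As a reference, a minimal NumPy sketch of the focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t) from the cited article is given below; the parameter values gamma = 2 and alpha = 0.25 are the defaults from that article and are assumed here, not stated in this text:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Per-anchor, per-class focal loss.

    p: predicted probabilities in (0, 1); y: binary labels (1 = object of the class,
    0 = background); gamma and alpha are the focusing and balancing parameters."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y)
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# Easy examples (p_t close to 1) are strongly down-weighted; hard ones dominate the loss:
print(focal_loss([0.9, 0.1], [1, 1]))
```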
  • The claimed solution uses a single-stage detector of the RetinaNet family for the detection of blood cells.
  • Object detection is the output of the four coordinates of the rectangle into which the object of interest is inscribed.
  • The architecture of the single-stage RetinaNet detector is shown in FIG. 3.
  • The Feature Pyramid Network (FPN) built on the MobileNet-128 architecture is used as the backbone.
  • The use of MobileNet-128 increased processing speed without degrading the metric results.
  • Two subnets are attached to the FPN output: the first is responsible for the classification of anchors, the second for their regression.
  • The pyramid consists of 5 levels: P3, P4, P5, P6, P7.
  • The first 3 levels are connected to C3, C4, C5 through a convolutional layer with 256 filters of size 1 x 1.
  • C3, C4, C5 correspond to the feature maps of the MobileNet-128 network after the 3rd, 4th and 5th sub-sampling layers, which reduce the input image by factors of 8, 16 and 32, respectively.
  • P5 is obtained by applying a convolutional layer with 256 filters of size 1 x 1 to C5.
  • P4 is obtained by element-wise addition of the result of applying a convolutional layer with 256 filters of size 1 x 1 to C4 and the result of upsampling P5 by a factor of 2, followed by a convolutional layer with 256 filters of size 3 x 3 and a convolution stride of 1.
  • P3 is obtained in the same way, only it is connected to P4 and C3 (FIG. 4).
  • P6 is obtained by applying a convolutional layer with 256 filters of size 3 x 3 and a convolution stride of 2 to P5.
  • P7 is obtained by applying the ReLU activation function and then a convolutional layer with 256 filters of size 3 x 3 and a convolution stride of 2 to P6.
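  • A minimal PyTorch sketch of the pyramid construction described above is given below; it assumes that C3, C4, C5 have spatial sizes that differ by exact factors of 2, and the module and argument names are illustrative:

```python
import torch.nn as nn
import torch.nn.functional as F

class FPN(nn.Module):
    """Feature pyramid built as described above (an illustrative sketch, not the patent's code).

    c3, c4, c5 are backbone feature maps with ch3, ch4, ch5 channels at strides 8, 16 and 32."""
    def __init__(self, ch3: int, ch4: int, ch5: int, out_ch: int = 256):
        super().__init__()
        self.lat3 = nn.Conv2d(ch3, out_ch, kernel_size=1)   # 1 x 1 lateral convolutions
        self.lat4 = nn.Conv2d(ch4, out_ch, kernel_size=1)
        self.lat5 = nn.Conv2d(ch5, out_ch, kernel_size=1)
        self.smooth3 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1)
        self.smooth4 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1)
        self.down6 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.down7 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=2, padding=1)

    def forward(self, c3, c4, c5):
        p5 = self.lat5(c5)                                      # P5: 1 x 1 conv on C5
        p4 = self.lat4(c4) + F.interpolate(p5, scale_factor=2)  # lateral C4 + upsampled P5
        p4 = self.smooth4(p4)                                   # 3 x 3 conv, stride 1
        p3 = self.lat3(c3) + F.interpolate(p4, scale_factor=2)  # same construction for P3
        p3 = self.smooth3(p3)
        p6 = self.down6(p5)                                     # 3 x 3 conv, stride 2, on P5
        p7 = self.down7(F.relu(p6))                             # ReLU, then 3 x 3 conv, stride 2
        return p3, p4, p5, p6, p7
```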
  • FIG. 5 shows an example of anchor generation.
  • Each cell of the output feature map corresponds to a pixel; a predefined set of anchors is generated for each pixel. In this example, 4 anchors are generated per pixel.
  • Anchors. Since RetinaNet is a single-stage detector, unlike Faster R-CNN, where hypotheses are generated by a separate RPN neural network, each pixel of the feature maps obtained after the FPN (5 maps in total) is assigned a predetermined set of anchors.
  • Anchors have sizes of 32², 64², 128², 256², 512² at levels P3, P4, P5, P6, P7, respectively.
  • Three anchor aspect ratios are used, {1:1, 1:2, 2:1}, together with 3 scale factors.
  • In total, 9 anchors are generated for each pixel of the feature map; the size of the anchors depends on the pyramid level.
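  • A small sketch of the anchor generation described above follows; the three scale factor values are not given in this text, so the common RetinaNet choice of 2^0, 2^(1/3), 2^(2/3) is assumed purely for illustration:

```python
import numpy as np

def generate_anchors(base_size: float,
                     ratios=(1.0, 0.5, 2.0),                        # 1:1, 1:2, 2:1 (height/width)
                     scales=(2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3))):  # assumed scale factors
    """Return the 9 anchors (3 ratios x 3 scales) for one pyramid level as
    (x, y, w, h) boxes centered at the origin."""
    anchors = []
    for scale in scales:
        area = (base_size * scale) ** 2
        for ratio in ratios:
            w = np.sqrt(area / ratio)
            h = w * ratio
            anchors.append((-w / 2.0, -h / 2.0, w, h))
    return np.array(anchors)

# Anchor base sizes per pyramid level, as stated above:
level_sizes = {"P3": 32, "P4": 64, "P5": 128, "P6": 256, "P7": 512}
print(generate_anchors(level_sizes["P3"]).shape)   # (9, 4): 9 anchors for every feature-map pixel
```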
  • Each anchor is associated with a vector of length 4 (the regression problem) and a vector of length K, where K is the number of classes (the classification problem).
  • An anchor is matched to the reference rectangles based on the IoU criterion (intersection over union): if the IoU is greater than 0.5, the anchor is considered to match the reference rectangle; if the IoU is less than 0.4, the anchor is assigned to the background; otherwise the anchor is ignored.
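  • The IoU-based anchor assignment can be sketched as follows; boxes are assumed to be given as (x, y, w, h) in pixels, matching the detection format described earlier:

```python
def iou(box_a, box_b):
    """Intersection over union for two boxes given as (x, y, w, h) in pixels."""
    ax2, ay2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx2, by2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    inter_w = max(0.0, min(ax2, bx2) - max(box_a[0], box_b[0]))
    inter_h = max(0.0, min(ay2, by2) - max(box_a[1], box_b[1]))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def assign_anchor(anchor, reference_boxes, pos_thr=0.5, neg_thr=0.4):
    """Assign an anchor to an object, the background, or 'ignore' using the thresholds above."""
    best = max((iou(anchor, ref) for ref in reference_boxes), default=0.0)
    if best > pos_thr:
        return "object"        # the anchor matches a reference rectangle
    if best < neg_thr:
        return "background"
    return "ignore"            # IoU between 0.4 and 0.5: the anchor is ignored
```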
  • Classification network. This network consists of 4 consecutive convolutional layers with 256 filters of size 3 x 3, each followed by a ReLU activation layer; the last layer is a convolutional layer with K * A filters, where A is the number of anchors generated per pixel and K is the number of classes.
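  • A PyTorch sketch of this classification subnet is shown below; the padding values and the kernel size of the final layer are assumptions chosen to preserve the spatial size of the feature map:

```python
import torch.nn as nn

def classification_subnet(num_classes: int, num_anchors: int, in_ch: int = 256) -> nn.Sequential:
    """4 convolutional layers with 256 filters of size 3 x 3, each followed by ReLU,
    then a final convolutional layer with K * A filters (K classes, A anchors per pixel)."""
    layers = []
    channels = in_ch
    for _ in range(4):
        layers += [nn.Conv2d(channels, 256, kernel_size=3, padding=1), nn.ReLU()]
        channels = 256
    layers.append(nn.Conv2d(256, num_classes * num_anchors, kernel_size=3, padding=1))
    return nn.Sequential(*layers)
```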
  • Border cell - a cell located on the border of the image, of which only half or a third is visible. Such border cells cannot be classified.
  • In step (104), normal blood cells are isolated and cropped from the image, and border blood cells are excluded from further analysis. Normal blood cells are selected by the coordinates of their bounding rectangle.
  • In step (105), blood cell types are classified.
  • A set of images is obtained for each cropped blood cell image using the augmentation method.
  • TTA - test-time augmentation approach.
  • The architecture used is an average of the ResNet-50 and ResNet-101 architectures; it contains 71 layers and is hereinafter referred to as ResNet-71.
  • Layer 2 - a sequence of convolutions: 64 convolutions 1 x 1, 64 convolutions 3 x 3, 256 convolutions 1 x 1; this sequence is repeated 3 times.
  • Layer 3 - a sequence of convolutions: 128 convolutions 1 x 1, 128 convolutions 3 x 3, 512 convolutions 1 x 1; this sequence is repeated 4 times.
  • The convolution has a size of 3 x 3 and a stride of 2.
  • Layer 4 - a sequence of convolutions: 256 convolutions 1 x 1, 256 convolutions 3 x 3, 1024 convolutions 1 x 1; this sequence is repeated 12 times.
  • The stride is 2.
  • Layer 5 - a sequence of convolutions: 512 convolutions 1 x 1, 512 convolutions 3 x 3, 1024 convolutions 1 x 1; this sequence is repeated 3 times.
  • The convolution has a size of 3 x 3 and a stride of 2.
  • Layer 7 is a fully connected layer with the number of elements equal to the number of blood cell types.
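  • The stages listed above can be sketched as follows; this is an illustration only: the text does not describe layers 1 and 6, the placement of activations, or the residual connections of ResNet, so those details are assumed or omitted here:

```python
import torch.nn as nn

def conv_stage(in_ch: int, mid_ch: int, out_ch: int, repeats: int) -> nn.Sequential:
    """One stage of the classifier: the sequence (mid_ch 1 x 1 convs, mid_ch 3 x 3 convs,
    out_ch 1 x 1 convs) repeated `repeats` times. ReLU placement is assumed; ResNet's
    residual (skip) connections are omitted in this sketch."""
    layers = []
    for _ in range(repeats):
        layers += [nn.Conv2d(in_ch, mid_ch, 1), nn.ReLU(),
                   nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(),
                   nn.Conv2d(mid_ch, out_ch, 1), nn.ReLU()]
        in_ch = out_ch
    return nn.Sequential(*layers)

def downsample(ch: int) -> nn.Conv2d:
    # the 3 x 3 convolution with stride 2 mentioned between stages
    return nn.Conv2d(ch, ch, 3, stride=2, padding=1)

# Stages as listed in the text (the input channel count of layer 2 and the 1024 features
# fed to the final fully connected layer are assumptions):
layer2 = conv_stage(64, 64, 256, repeats=3)
layer3 = conv_stage(256, 128, 512, repeats=4)
layer4 = conv_stage(512, 256, 1024, repeats=12)
layer5 = conv_stage(1024, 512, 1024, repeats=3)
num_cell_types = 5                            # placeholder for the number of blood cell types
layer7 = nn.Linear(1024, num_cell_types)      # fully connected layer over pooled features
```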
  • This neural network is first trained on two image datasets: ImageNet 22k (http://image-net.org/) and Places365 (http://places2.csail.mit.edu/download.html).
  • The TTA (test-time augmentation) method is used: not a single image is classified but a set of images obtained from the original one by rotating, mirroring and cropping out parts of the image; a schematic visualization of this method is presented in FIG. 6.
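  • A minimal sketch of such test-time augmentation is given below, using rotations and mirror flips (cropping variants are omitted); the classifier is assumed to map a cell image to a vector of class probabilities:

```python
import numpy as np

def tta_classify(classifier, cell_image: np.ndarray, num_classes: int) -> int:
    """Classify a set of images derived from one cropped cell image and average the
    predictions. `classifier` is assumed to return a probability vector of length
    num_classes; rotated variants may have swapped height and width."""
    variants = []
    for k in range(4):                            # rotations by 0, 90, 180, 270 degrees
        rotated = np.rot90(cell_image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))       # mirrored version of each rotation
    probs = np.zeros(num_classes)
    for v in variants:
        probs += classifier(v)
    probs /= len(variants)                        # average over all augmented views
    return int(np.argmax(probs))                  # cell type with the highest averaged score
```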
  • The method can be carried out by a data processing device, that is, a computer or system (or a means such as a central or graphics processor, or a microprocessor) that reads and executes a program written to a memory device in order to perform the functions of the above-described embodiment(s); the steps of the method shown in FIG. 1 are performed by such a computer or apparatus, for example, by reading and executing a program stored in a memory device.
  • A program is written to the computer, for example, via a network or from a recording medium of various types serving as a storage device (for example, a computer-readable medium).
  • FIG. 7 presents a general diagram of a computing device (700) with which aspects of the present invention may be implemented.
  • The device (700) contains the following components interconnected by a common bus (710): at least one processor (701), at least one memory unit (702), data storage means (703), input/output interfaces (704), input/output means (705), and networking means (706).
  • The processor (701) performs all the basic computational operations necessary for the operation of the device (700) or the functionality of one or more of its components.
  • The processor (701) executes the necessary machine-readable instructions contained in the main memory (702).
  • Memory (702) can be represented by one or more devices of various types, such as RAM, ROM or combinations thereof, and contains the necessary program logic providing the required functionality as well as an operating system that organizes the interaction interface and the data processing protocols. HDD, SSD disks, flash memory, etc. can be used as ROM.
  • The data storage means (703) can be implemented as HDD or SSD disks, a RAID array, network storage, flash memory, optical storage devices (CD, DVD, MD, Blu-Ray disks), etc.
  • The means (703) allow long-term storage of various types of information, for example, the aforementioned files with user data sets, a database containing records of time intervals measured for each user, user identifiers, etc.
  • Interfaces (704) are standard means for connecting to and working with a computer device, for example, USB, RS232, RJ45, LPT, COM, HDMI, PS/2, Lightning, FireWire, etc.
  • The choice of interfaces (704) depends on the specific implementation of the device (700), which can be a personal computer, a mainframe, a server cluster, a thin client, a smartphone, a laptop, or part of a bank terminal or ATM, etc.
  • As an input/output means (705), for example, a mouse can be used.
  • The hardware implementation of the mouse can be any known implementation.
  • The mouse can be connected to the computer either by wire, in which case its cable is connected to a PS/2 or USB port located on the system unit of a desktop computer, or wirelessly, in which case the mouse exchanges data over a wireless channel, for example a radio channel, with a base station, which, in turn, is directly connected to the system unit, for example, to one of the USB ports.
  • The following can also be used as input/output means: a joystick, a display (touchscreen), a projector, a touchpad, a keyboard, a trackball, a light pen, speakers, a microphone, etc.
  • Networking means (706) are selected from devices providing the reception and transmission of data over a network, for example, an Ethernet card, a WLAN/Wi-Fi module, a Bluetooth module, a BLE module, an NFC module, IrDa, an RFID module, a GSM modem (2G, 3G, 4G, 5G), etc.
  • The means (706) provide the organization of data exchange via a wired or wireless data transmission channel, for example, WAN, PAN, LAN, Intranet, Internet, WLAN, WMAN or GSM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of computing and medicine, and in particular to a method for isolating and classifying blood cell types using deep convolutional neural networks. The technical result consists in the automatic detection and classification of blood cell types using deep convolutional neural networks. The invention concerns a computer-implemented method for isolating and classifying blood cell types using deep convolutional neural networks, which comprises the following steps: obtaining an image containing blood cells; detecting blood cells in the obtained image; distinguishing between normal and border blood cells; isolating normal blood cells and cropping them from the image, while border blood cells are excluded from further analysis; then classifying the blood cells by type; additionally, obtaining a set of images for each cropped blood cell image using an augmentation method; analyzing the set of images obtained for each cell and, based on this set, classifying each blood cell by type.
PCT/RU2019/000687 2019-05-27 2019-09-27 Procédé pour séparer et classer des types de cellules sanguines à l'aide de réseaux neuronaux convolutifs profonds Ceased WO2020242341A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2019116212 2019-05-27
RU2019116212A RU2732895C1 (ru) 2019-05-27 2019-05-27 Метод для выделения и классификации типов клеток крови с помощью глубоких сверточных нейронных сетей

Publications (1)

Publication Number Publication Date
WO2020242341A1 true WO2020242341A1 (fr) 2020-12-03

Family

ID=72922323

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2019/000687 Ceased WO2020242341A1 (fr) 2019-05-27 2019-09-27 Procédé pour séparer et classer des types de cellules sanguines à l'aide de réseaux neuronaux convolutifs profonds

Country Status (2)

Country Link
RU (1) RU2732895C1 (fr)
WO (1) WO2020242341A1 (fr)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102884550B (zh) * 2010-05-06 2016-03-16 皇家飞利浦电子股份有限公司 用于动态灌注ct的图像数据配准
KR101995764B1 (ko) * 2017-08-25 2019-07-03 (주)뉴옵틱스 혈구 감별 장치 및 방법
US10460440B2 (en) * 2017-10-24 2019-10-29 General Electric Company Deep convolutional neural network with self-transfer learning
CN109360198A (zh) * 2018-10-08 2019-02-19 北京羽医甘蓝信息技术有限公司 基于深度学习的骨髓细胞分类方法及分类装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020639A (zh) * 2012-11-27 2013-04-03 河海大学 一种白细胞自动识别计数方法
US20160350914A1 (en) * 2015-05-28 2016-12-01 Tokitae Llc Image analysis systems and related methods
CN107423815A (zh) * 2017-08-07 2017-12-01 北京工业大学 一种基于计算机的低质量分类图像数据清洗方法
US20190114771A1 (en) * 2017-10-12 2019-04-18 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for acquiring information
CN109255364A (zh) * 2018-07-12 2019-01-22 杭州电子科技大学 一种基于深度卷积生成对抗网络的场景识别方法
CN109554432A (zh) * 2018-11-30 2019-04-02 苏州深析智能科技有限公司 一种细胞类型分析方法、分析装置及电子设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597852A (zh) * 2020-12-15 2021-04-02 深圳大学 细胞分类方法、装置、电子设备及存储介质
CN112597852B (zh) * 2020-12-15 2024-05-24 深圳大学 细胞分类方法、装置、电子设备及存储介质
CN112508951A (zh) * 2021-02-03 2021-03-16 中国科学院自动化研究所 用于确定内质网表型的方法及产品和用于药物筛选的方法
WO2023284117A1 (fr) * 2021-07-12 2023-01-19 武汉大学 Instrument d'analyse de sang, et système et procédé d'analyse et de reconnaissance de sang
CN114742803A (zh) * 2022-04-20 2022-07-12 大连工业大学 一种结合深度学习与数字图像处理算法的血小板聚集检测方法
RU2814825C1 (ru) * 2023-04-03 2024-03-05 Мария Вячеславовна Кузнецова Способ определения адгезивной активности лактобактерий и/или бифидобактерий пробиотического или аутопробиотического препарата и способ индивидуального подбора указанных препаратов
CN116452947A (zh) * 2023-04-07 2023-07-18 东南大学 一种基于多尺度融合和可变形卷积的跨域故障检测方法

Also Published As

Publication number Publication date
RU2732895C1 (ru) 2020-09-24

Similar Documents

Publication Publication Date Title
JP7583041B2 (ja) 組織画像分類用のマルチインスタンス学習器
US11901077B2 (en) Multiple instance learner for prognostic tissue pattern identification
CN116580394B (zh) 一种基于多尺度融合和可变形自注意力的白细胞检测方法
US12228723B2 (en) Point-of-care-computational microscopy based-systems and methods
US11756318B2 (en) Convolutional neural networks for locating objects of interest in images of biological samples
RU2732895C1 (ru) Метод для выделения и классификации типов клеток крови с помощью глубоких сверточных нейронных сетей
US11164316B2 (en) Image processing systems and methods for displaying multiple images of a biological specimen
Reddy Deep learning-based detection of hair and scalp diseases using CNN and image processing
JP7705405B2 (ja) 生体試料中のオブジェクトの体系的特性評価
CN113822846B (zh) 医学图像中确定感兴趣区域的方法、装置、设备及介质
Foucart et al. Artifact identification in digital pathology from weak and noisy supervision with deep residual networks
CN120431393A (zh) 孤独症谱系障碍的分类方法、装置、电子设备及存储介质
Barua et al. A deep learning approach for automated classification of Corneal Ulcers
Campanella Diagnostic Decision Support Systems for Computational Pathology in Cancer Care
Şahbaz Tumor detection in breast cancer histopathological images using convolutional neural networks
Aof et al. An Innovative Leukemia Detection System using Blood Samples via a Microscopic Accessory
Khan et al. A Cloud Edge Collaboration of Food Recognition Using Deep Neural Networks
Odeh COMPUTER VISION AND MACHINE LEARNING APPROACHES FOR INTERPRETATION OF BREAST HISTOPATHOLOGY
WO2025227106A1 (fr) Appareil de commande d'algorithmes de traitement d'image dans une interface graphique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930268

Country of ref document: EP

Kind code of ref document: A1