
WO2018019355A1 - Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible - Google Patents

Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible Download PDF

Info

Publication number
WO2018019355A1
Authority
WO
WIPO (PCT)
Prior art keywords
vector
processing
input
vectors
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2016/067627
Other languages
German (de)
English (en)
Inventor
Alexander Michael Gigler
Ralph Grothmann
Stefanie VOGL
Hans-Georg Zimmermann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Siemens Corp
Original Assignee
Siemens AG
Siemens Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG, Siemens Corp filed Critical Siemens AG
Priority to EP16751207.8A priority Critical patent/EP3472759A1/fr
Priority to PCT/EP2016/067627 priority patent/WO2018019355A1/fr
Priority to CN201680089552.1A priority patent/CN109716359A/zh
Publication of WO2018019355A1 publication Critical patent/WO2018019355A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0499Feedforward networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Definitions

  • the invention relates to a method and a device for computer-aided analysis of at least one second input vector of a target system.
  • It is often necessary to subject an input vector of a target system to a computer-aided analysis in order thereby to determine certain properties of the target system.
  • In particular, in the medical field it is often necessary to be able to make a statement about (biological) tissue.
  • the target system is the tissue and the input vector comprises acquired information about the tissue.
  • Since the analysis or evaluation of such an input vector is difficult and complicated, it is desirable to determine the properties of the target system, such as tumor tissue, from the input vector as simply as possible.
  • An object of the present invention is to provide a method and a device which make it possible to model an input vector of a target system as simply as possible.
  • The invention relates to a method for the computer-aided configuration of a deep neural network on the basis of a training system, with the following method steps:
  • the deep neural network is a feed-forward network which
  • the first input vectors are each transmitted via the input layer to one of the hidden layers as first processing vectors
  • the first processing vectors are each transmitted between the hidden layers, wherein for each hidden layer a respective first data transformation of the transmitted first processing vectors is performed, in which a dimension reduction of the first processing vectors takes place for the respective hidden layer
  • the hidden layers are configured on the basis of the dimensionally reduced first processing vectors and the respective associated first output vectors.
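The layer-wise dimension reduction described in the steps above can be illustrated with a minimal sketch. This is not part of the patent: NumPy, the random untrained weights, the `tanh` nonlinearity, and the halving factor per layer are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layers(input_dim, target_dim, reduction=2):
    """One weight matrix per hidden layer; each layer's first data
    transformation reduces the processing-vector dimension by `reduction`
    until the target dimension is reached (the factor is an assumption)."""
    dims = [input_dim]
    while dims[-1] // reduction >= target_dim:
        dims.append(dims[-1] // reduction)
    return [rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    """Cascaded processing: each hidden layer receives the dimension-
    reduced first processing vector of the preceding layer."""
    for W in layers:
        x = np.tanh(W @ x)
    return x

layers = make_layers(input_dim=256, target_dim=8)
first_input_vector = rng.standard_normal(256)   # e.g. one recorded spectrum
reduced = forward(layers, first_input_vector)   # shape (8,)
```

During training, the hidden layers would then be configured by comparing such dimension-reduced vectors against the associated first output vectors.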
  • In connection with the invention, 'computer' can be construed as broadly as possible, covering in particular all electronic devices with data-processing properties. Computers can thus be, for example, personal computers, servers, handheld computer systems, pocket-PC devices, mobile devices and other communication devices that can handle computer-aided data, as well as processors and other electronic data-processing equipment.
  • “computer-aided” can be understood to mean, for example, an implementation of the method in which, in particular, a processor carries out at least one method step of the method.
  • a processor can be understood as meaning, for example, a machine or an electronic circuit.
  • A processor may in particular be a main processor (central processing unit, CPU), a microprocessor or a microcontroller, for example an application-specific integrated circuit or a digital signal processor, possibly in combination with a memory unit for storing program instructions, etc.
  • A processor may, for example, also be an integrated circuit (IC), in particular an FPGA (field-programmable gate array), an ASIC (application-specific integrated circuit) or a DSP (digital signal processor).
  • a processor can be understood as a virtualized processor or a soft CPU.
  • It can also be a programmable processor which is equipped with configuration steps for executing the method according to the invention, or which is configured with configuration steps such that the programmable processor implements the features according to the invention of the method, the component, the security module, or other aspects and sub-aspects of the invention.
  • In connection with the invention, a "memory unit" can be understood as meaning, for example, a memory in the form of random-access memory (RAM), a hard disk, or a storage unit for storing program instructions.
  • the processor is specially set up to execute the program instructions in such a way that the processor carries out functions in order to implement the method according to the invention or a step of the method according to the invention.
  • In connection with the invention, a "system", in particular a training system and/or a target system, can be understood to mean, for example, a technical system, a biological system or a chemical system.
  • A system can thus in particular describe the behavior or properties of biological tissues, oils or lubricants, which can in particular be calculated from measured parameters (input vectors) of the system by means of a trained deep neural network.
  • The system in question may in particular be a complex system, where an analysis of measured properties of the complex system, especially in the form of input vectors, is very difficult to carry out.
  • 'Real time' can be understood, for example, as performing an analysis during a patient's operation, wherein the analysis of (second) input vectors preferably takes place immediately after the detection of these input vectors, and wherein the analysis preferably brings no, or only a minimal, prolongation of the duration of the operation.
  • With respect to an analysis and its duration, 'real time' may mean that results are preferably reliably available within a reserved time, for example during an operation of a patient or the operation of a technical system.
  • In connection with the invention, an 'input vector' can be understood to mean, for example, measured properties of the system.
  • Input vectors of the training system that are used for training are referred to as first input vectors.
  • Input vectors to be analyzed, for example measured properties of the target system, are referred to as second input vectors.
  • In connection with the invention, 'classification' may, for example, be understood to mean an analysis of input vectors, in particular of the second input vectors, with respect to predetermined sought-after properties, such as healthy tissue or tumor tissue.
  • a classification result can be understood to mean the second output vectors which result in an analysis of the second input vectors by means of the trained deep neural network.
  • In connection with the invention, an output vector can be understood, for example, as a result of the analysis of the input vector. In particular, an output vector comprises at least one analysis result or one sought-after property, such as whether the (target) system is tumor tissue or healthy tissue, and a respective input vector comprises in particular at least one input variable which affects the sought-after property or the analysis result.
  • Output vectors of the training system that are used for training are referred to as first output vectors.
  • The method according to the invention is particularly suited to determining a tissue type, for example tumor tissue or healthy tissue, or a probability of disease, and in particular to providing it in the form of a (second) output vector.
  • Output vectors which comprise in particular an analysis result of the target system, i.e. of a second input vector, are referred to as second output vectors.
  • In connection with the invention, a 'first data transformation' may be understood to mean a data transformation for the dimension reduction of a respective processing vector for a respective hidden layer of the deep neural network.
  • In connection with the invention, a 'second data transformation' may be understood to mean a data transformation for data reconstruction for each dimension-reduced processing vector that was calculated for a respective hidden layer of the deep neural network.
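The two transformations can be sketched as an encode/decode pair: the first data transformation reduces the dimension, the second reconstructs the original vector, and the reconstruction error can serve as a quality measure for the layer. The shapes, random weights, and `tanh` nonlinearity below are illustrative assumptions; the patent does not specify the transformations' form.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_red = 64, 32                                         # assumed dimensions
W_enc = rng.standard_normal((d_red, d_in)) / np.sqrt(d_in)   # first data transformation
W_dec = rng.standard_normal((d_in, d_red)) / np.sqrt(d_red)  # second data transformation

x = rng.standard_normal(d_in)        # processing vector entering the hidden layer
z = np.tanh(W_enc @ x)               # dimension-reduced processing vector
x_hat = W_dec @ z                    # data reconstruction
error = float(np.mean((x - x_hat) ** 2))   # expected error of the extracted features
```

In training, such a per-layer error could be taken into account when configuring the relevant hidden layer, as described later in the text.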
  • a "training system” can be understood as meaning, for example, one or more training systems which have a similar or identical system behavior first input ⁇ vectors include.
  • The training system and its behavior are modeled with respect to the first input vectors as a function of the first output vectors when training the deep neural network.
  • The first output vector need not comprise a first analysis result of input vectors of the same modality. This means, for example, that in particular NIR spectra can be used as the first input vectors, but the first output vectors for training need not be a direct analysis of the NIR spectra.
  • a target system may, for example, be understood to mean a system to be analyzed, for example, a target system may be biological tissue to determine whether it is tumor tissue or healthy tissue.
  • input vectors to be analyzed for the target system are referred to as second input vectors.
  • A second input vector of the target system can also be called a dummy record.
  • the training system and the target system may be different systems or identical systems.
  • a "model” may, for example, be understood as a data structure, in particular for modeling a system.
  • a "feed-forward network” can be understood, for example, to mean a neural network having a plurality of layers connected to one another.
  • The connected layers comprise an input layer for inputting input vectors, at least one hidden layer for modeling a system (e.g. training system) or for analyzing input vectors of a system (e.g. target system), and an output layer for outputting a result of the analysis.
  • the individual layers each comprise one or more neurons, which are configured, for example, when training the deep neural network.
  • In connection with the invention, a 'deep neural network' can be understood to mean, for example, a neural network which, in contrast to conventional neural networks, comprises more than one hidden layer.
  • The information processing (in particular the analysis of an input vector) preferably takes place in many consecutive hidden layers.
  • A transmission of processing vectors between the hidden layers preferably takes place only to the directly subsequent hidden layer.
  • The analysis of the input vector (in particular the information processing) can be carried out in cascaded or chained fashion in the hidden layers. In particular, a cascaded processing of input vectors or processing vectors is thus made possible.
  • a deep neural network may comprise a plurality of consecutive hidden layers, which in particular perform cascaded information processing.
  • hierarchical information processing in the hidden layers can thereby be realized.
  • Analysis results of a respective layer can be transmitted directly to the output layer (e.g. as a feature extraction), or, for example, a specific analysis result of a particular layer that fulfills a predetermined criterion (e.g. reaching a certain dimension of the processing vector) is transmitted to the output layer.
  • In particular, this can be understood as a chaining or linking of the hidden layers, in which a result of the information processing of a respective hidden layer serves as an input vector or processing vector for the subsequent hidden layer.
  • In connection with the invention, 'cascaded processing' may be understood to mean, for example, a chained processing of input vectors or processing vectors.
  • the respective first data transformation can either be an integral part of the respective hidden layer concerned (for example the first hidden layer) or can be a preprocessing step for the particular hidden layer concerned.
  • From the respective hidden layer concerned, the dimension-reduced input vector or dimension-reduced processing vector is transmitted to the subsequent hidden layer. In particular, during the analysis or information processing in the hidden layers, a feature extraction can thereby be performed in each case for the respective dimension-reduced input vector or dimension-reduced processing vector.
  • Feature extraction may be helpful in configuring the deep neural network, i.e. in configuring the hidden layers on the basis of the first output vectors.
  • the first output vectors for the respective hidden layers can comprise known features.
  • The hidden layers can be configured, for example, using a comparison of the known features with the features extracted by the respective layer.
  • Configuring can be understood to mean a weighting and/or deleting and/or inserting of neurons in the hidden layers.
  • Weights of the corresponding connections of the neurons can also be adapted during the configuration.
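What 'deleting neurons' during configuration can look like in code is sketched below. The norm-based pruning criterion, its threshold, and the layer shape are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(2)

W = rng.standard_normal((16, 32))      # hidden layer: 16 neurons, 32 inputs each
W[3] *= 1e-3                           # simulate neurons whose weights became negligible
W[7] *= 1e-3

# Delete (silence) neurons whose incoming-weight norm falls below an
# assumed threshold relative to the strongest neuron in the layer.
neuron_norms = np.linalg.norm(W, axis=1)
keep = neuron_norms > 0.1 * neuron_norms.max()
W_pruned = W[keep]                     # configured layer with two neurons deleted
```

Weighting and inserting neurons would analogously scale rows of `W` or append new rows to it.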
  • The method is particularly advantageous when a tumor occurs in critical tissue (e.g., the brain).
  • the correct classification of the border areas between healthy and affected cells / tissue is crucial.
  • Diagnostic imaging procedures that are applied in advance of the operation provide a significant contribution to operation planning, but during the operation they can be used only to a limited extent for the representation and identification of the tumor.
  • NIR near-infrared spectroscopy
  • tissue properties can additionally be analyzed which, in particular, reflect the spectral properties of the present tissue during the operation in real time and can be used, for example, to distinguish the tumor cells from the surrounding healthy tissue.
  • Recorded spectra can be used as the first input vectors, and tissue sections that indicate whether tumor tissue is present can be used as the first output vectors.
  • In particular, the trained deep neural network can be used for the automatic differentiation of different cell/tissue types and can thus, for example, provide the operating physician with additional decision support for the demarcation between tumor and non-tumor cells.
  • the method according to the invention is advantageous in analyzing the spectra of healthy cells or tissue and tumor cells / tumor tissue, which differ in particular only by complex, difficult-to-identify properties.
  • The method according to the invention is able to identify precisely these difficult-to-identify properties and in particular to use them for solving the actual classification task (analysis of whether healthy tissue or tumor tissue is present).
  • the method according to the invention is able to compensate for measurement errors in the recorded spectra or first input vectors. In particular, this avoids lengthy measurement error correction of the input vectors.
  • The method according to the invention is able, on the one hand, to process the recorded raw data from NIR spectroscopy (input vectors) without further preprocessing steps, for example measurement error corrections, and, on the other hand, to solve the complex classification task (distinction between healthy tissue and tumor tissue) with very good performance, for example in the shortest possible time up to real time.
  • With the method according to the invention, the deep neural network can be trained, for example, with a test data set in order to obtain preferably very good classification results (output vector) on a second input vector.
  • An application of the trained deep neural network as a pure input-output relation on second input vectors can easily be implemented in appropriate software.
  • The method according to the invention is preferably not restricted to brain tumors and the like, but can in particular be extended to further types of tissue, such as bone.
  • Health data can also be analyzed in order to determine a probability of an occurrence of a specific disease, for example diabetes, for example on the basis of genetic markers and/or medication intake and/or previous illnesses and/or laboratory values.
  • the method is preferably not limited to medical applications, but in particular also spectral data or input vectors of liquids or gases can be examined.
  • The first input vectors are measured spectra of the training system, and the first output vectors are analysis results already carried out for the spectra.
  • the measured spectra include spectra of tissue and the performed analysis results indicate whether a spectrum is to be assigned to healthy tissue or to tumor tissue.
  • The measured spectra include spectra of oils or lubricants, and the performed analysis results include a quality of the oils or lubricants.
  • The method is particularly advantageous in order to make, for example, a statement about the quality, in particular an expected error, of the extracted features of a dimension-reduced first processing vector of the relevant hidden layer.
  • the determined error is taken into account for configuring the respective relevant hidden layers or the respective subsequent hidden layer to which the dimension-reduced first processing vector is transmitted.
  • the method is particularly advantageous in order to thereby improve an accuracy of the classification result.
  • the training system is modeled more accurately, and in particular, a quality of features extracted, for example, by the hidden layer from the processing vector (s) can be improved. In this way, for example, errors in the extracted features can be reduced.
  • a number of hidden layers is determined from a dimension to be achieved of the dimension-reduced first input vector.
  • The method is particularly advantageous in order to keep the number of hidden layers for a particular classification task as low as possible and to avoid unnecessary calculation steps.
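Assuming each hidden layer reduces the dimension by a fixed factor (the patent does not fix the reduction schedule), the number of hidden layers follows directly from the dimension to be achieved:

```python
def num_hidden_layers(input_dim, target_dim, reduction=2):
    """Number of hidden layers needed when each layer divides the
    processing-vector dimension by `reduction` (an assumed schedule)."""
    layers, dim = 0, input_dim
    while dim > target_dim:
        dim //= reduction
        layers += 1
    return layers

print(num_hidden_layers(256, 8))   # 5 layers: 256 -> 128 -> 64 -> 32 -> 16 -> 8
```

Keeping the target dimension as large as the classification task allows thus directly minimizes the number of layers and the associated computation.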
  • The invention relates to a method for the computer-aided analysis of at least one second input vector of a target system, with the following method steps:
  • the deep neural network is a feed-forward network which
  • an input layer for inputting the second input vector for the target system, and a plurality of consecutive hidden layers that model the target system
  • an output layer for outputting the second output vector for the target system, which comprises an evaluation of the second input vector, wherein the second input vector is transmitted via the input layer to one of the hidden layers as a second processing vector, and the second processing vectors are each transmitted between the hidden layers,
  • the second output vector is determined using a dimension-reduced second processing vector
  • The second input vectors or the second processing vectors pass through a plurality of hidden layers for the analysis, wherein the dimension reduction of the respective second input vectors or of the respective second processing vectors is carried out in each case for each hidden layer.
  • an analysis of the second processing vectors can then be carried out, for example.
  • The dimension of the second processing vectors is thus gradually reduced in the transmission between the hidden layers, in particular a cascaded transmission, preferably until the second processing vector reaches a predetermined reduced dimension and is transmitted to the output layer as the second output vector.
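The analysis steps above can be sketched as a pure input-output relation. The untrained random weights, layer sizes, and two-class output layer are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def analyze(second_input_vector, hidden_layers, W_out):
    """Cascaded dimension reduction of the second processing vector,
    then the output layer produces the second output vector."""
    v = second_input_vector
    for W in hidden_layers:            # each layer transmits a reduced vector
        v = np.tanh(W @ v)
    return W_out @ v                   # second output vector (class scores)

dims = [(128, 64), (64, 32), (32, 16)]           # assumed layer sizes
hidden = [rng.standard_normal((o, i)) / np.sqrt(i) for i, o in dims]
W_out = rng.standard_normal((2, 16)) / 4.0       # e.g. tumor vs. healthy tissue

spectrum = rng.standard_normal(128)              # e.g. a measured NIR spectrum
second_output_vector = analyze(spectrum, hidden, W_out)
```

In a trained network, the two entries of the second output vector would indicate, for example, the scores for tumor tissue and healthy tissue.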
  • The method according to the invention is able, on the one hand, to process the recorded raw data of NIR spectroscopy (input vectors) without further preprocessing steps, for example measurement error corrections, and, on the other hand, to solve the complex classification task (differentiation of healthy tissue and tumor tissue) with very good performance.
  • an application of the trained deep neural network as a pure input-output relation on the second input vectors can be easily realized by an implementation in suitable software.
  • The method according to the invention is preferably not restricted to brain tumors and the like, but can in particular be extended to other types of tissue, such as bone.
  • Health data, for example, can also be analyzed with the method according to the invention in order to ascertain a probability of occurrence of a specific disease, for example diabetes, on the basis of genetic markers and/or medication intake and/or previous illnesses and/or laboratory values.
  • the method is preferably not limited to medical applications, but in particular also spectral data or input vectors of liquids or gases can be examined.
  • the second input vector is a measured spectrum of the target system.
  • The measured spectrum is a spectrum of tissue, wherein the second output vector in particular indicates whether healthy tissue or tumor tissue is present.
  • The measured spectrum is a spectrum of oils and/or lubricants, wherein the second output vector in particular indicates a quality of the oils or lubricants.
  • The method is particularly advantageous in order to make, for example, a statement about the quality, in particular an expected error, of the extracted features of a dimension-reduced first processing vector of the relevant hidden layer.
  • The determined error is taken into account for configuring the respective relevant hidden layer or the respective subsequent hidden layer to which the dimension-reduced first processing vector is transmitted.
  • the method is particularly advantageous in that thereby an accuracy of the classification result is improved.
  • The target system and its behavior are modeled more accurately, and in particular a quality of the features that are extracted from the processing vector(s), for example by the hidden layer, can be improved. In this way, for example, errors in the extracted features can be reduced.
  • The trained deep neural network can furthermore be trained further during an analysis of second input vectors of a target system.
  • a number of hidden layers is determined from a dimension to be achieved of the dimensionally-reduced second processing vector.
  • The method is particularly advantageous in order to keep the number of hidden layers for a particular classification task as low as possible and to avoid unnecessary calculation steps.
  • the invention relates to a configuration apparatus for the computer-aided configuration of a deep neural network based on a training system, comprising:
  • a first provisioning module for providing training data with predetermined first input vectors and predetermined first output vectors for the training system
  • the deep neural network is a feed-forward network, which
  • a first training module for training the deep neural network on the basis of the training data, wherein the first input vectors are each transmitted via the input layer to one of the hidden layers as first processing vectors, the first processing vectors are each transmitted between the hidden layers, and for each hidden layer a respective first data transformation of the transmitted first processing vectors is performed, in which a dimension reduction of the first processing vectors takes place for the respective hidden layer,
  • the hidden layers are configured based on the dimensionally reduced first processing vectors and the respective associated first output vectors.
  • The invention relates to an analysis apparatus for the computer-aided analysis of at least one second input vector of a target system, comprising: a third provisioning module for providing the second input vector for the target system;
  • a fourth provisioning module for providing a trained deep neural network, wherein
  • the neural network is a feed-forward network, which
  • an input layer for inputting the second input vector for the target system, and a plurality of consecutive hidden layers modeling the target system
  • wherein the second input vector is transmitted via the input layer to one of the hidden layers as a second processing vector, and the second processing vector is transferred between the hidden layers,
  • a fifth provisioning module for providing the second output vector.
  • In a variant, a computer program product is claimed with program instructions for configuring a creation device, for example a 3D printer or a device for creating processors and/or devices, wherein the creation device is configured with the program instructions such that the configuration device according to the invention and/or the analysis device is created.
  • a provisioning device for storing and/or providing the computer program product is claimed.
  • the provisioning device is, for example, a data carrier which stores and/or makes available the computer program product.
  • the provisioning device is also, for example, a network service, a computer system, a server system, in particular a distributed computer system, a cloud-based computer system and/or a virtual computer system, which preferably stores and/or provides the computer program product in the form of a data stream.
  • This provision takes place, for example, as a download in the form of a program data block and/or command data block, preferably as a file, in particular as a download file, or as a data stream, in particular as a download data stream, of the complete computer program product.
  • This provision can, however, also take place, for example, as a partial download which consists of several parts and is, in particular, downloaded via a peer-to-peer network or provided as a data stream.
  • Such a computer program product is read into a system, for example using the provisioning device in the form of the data carrier, and executes the program instructions so that the method according to the invention is executed on a computer, or the creation device is configured in such a way that it creates the configuration device according to the invention and/or the analysis device.
  • FIG. 1 shows a flowchart of a first exemplary embodiment of the method according to the invention for the computer-aided configuration of a deep neural network on the basis of a training system
  • FIG. 2 shows a flowchart of a further exemplary embodiment of the method according to the invention for the computer-aided analysis of at least one second input vector of a target system; FIGS. 3a-3b show data structures of a deep neural network of further exemplary embodiments of the invention;
  • the following exemplary embodiments have at least one processor and/or a memory device in order to implement or execute the method.
  • FIG. 1 shows a flow chart of a first exemplary embodiment of the method according to the invention for the computer-aided configuration of a deep neural network on the basis of a training system.
  • the method comprises a second method step 120 for providing the deep neural network, wherein the deep neural network is a feed-forward network.
  • the feed-forward network comprises an input layer for inputting the first input vectors and a plurality of consecutive hidden layers for modeling the training system.
  • the method comprises a third method step 130 for training the deep neural network on the basis of the training data, wherein the first input vectors are each transmitted via the input layer to one of the hidden layers as first processing vectors.
  • the first processing vectors are additionally each transferred between the hidden layers, and a respective first data transformation of the transmitted first processing vectors is performed for each hidden layer.
  • in the respective first data transformation, a dimension reduction of the first processing vectors takes place for the respective hidden layer, and the hidden layers are configured on the basis of the dimension-reduced first processing vectors and the respectively associated first output vectors.
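The training procedure described for FIG. 1 can be sketched, for example, as follows. This is a minimal illustration, not the disclosed implementation: the layer widths (16, 8, 4, 2), the tanh activation, the synthetic training data and the plain stochastic gradient descent are all assumptions chosen only to show the stepwise dimension reduction of the first processing vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed layer widths: each hidden layer reduces the dimension of the
# first processing vector (16 -> 8 -> 4), ending in a 2-dimensional output.
sizes = [16, 8, 4, 2]
W = [rng.normal(0.0, 0.5, (n_out, n_in)) for n_in, n_out in zip(sizes, sizes[1:])]

def forward(x):
    """Transmit a first input vector through the layers; each step is a
    'first data transformation' that reduces the dimension."""
    acts = [x]
    for A in W:
        acts.append(np.tanh(A @ acts[-1]))
    return acts

# Synthetic stand-ins for the predetermined first input/output vectors.
X = rng.normal(size=(100, 16))
Y = np.stack([X[:, :8].sum(1) > 0, X[:, :8].sum(1) <= 0], axis=1).astype(float)

def loss():
    return float(np.mean([(forward(x)[-1] - y) ** 2 for x, y in zip(X, Y)]))

before = loss()
lr = 0.05
for _ in range(100):                      # configure the layers by backpropagation
    for x, y in zip(X, Y):
        acts = forward(x)
        delta = (acts[-1] - y) * (1 - acts[-1] ** 2)
        for i in range(len(W) - 1, -1, -1):
            grad = np.outer(delta, acts[i])
            delta = (W[i].T @ delta) * (1 - acts[i] ** 2)
            W[i] -= lr * grad
after = loss()
print(after < before)                     # the training error shrinks
```

In this sketch, "configuring" a hidden layer corresponds to adjusting its weight matrix on the basis of the determined error between the network output and the first output vectors.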
  • FIG. 2 shows a flow chart of a further embodiment of the method according to the invention for computer-aided analysis of at least one second input vector of a target system.
  • FIG. 2 shows a method for the computer-aided analysis of at least one second input vector of a target system, with a first method step 210 for providing the second input vector for the target system.
  • the method includes a second method step 220 for providing a trained deep neural network, wherein the deep neural network is a feed-forward network.
  • the feed-forward network includes an input layer for entering the second input vector for the target system, a plurality of consecutive hidden layers modeling the target system, and an output layer for outputting a second output vector for the target system.
  • the method comprises a third method step 230 for evaluating the second input vector, wherein the second input vector is in each case transmitted via the input layer to one of the hidden layers as a second processing vector and the second processing vector is in each case transmitted between the hidden layers.
  • in addition, a respective first data transformation of the transmitted second processing vector is performed for each hidden layer, and in the respective first data transformation a dimension reduction of the second processing vector takes place for the respective hidden layer.
  • the second output vector is additionally determined on the basis of a dimension-reduced second processing vector.
  • the dimension-reduced second processing vector can be, for example, a multiply dimensionally reduced second processing vector.
  • the multiply dimension-reduced second processing vector is a second processing vector that was processed step by step by multiple hidden layers until, for example, the multiply dimension-reduced second processing vector has reached a predetermined dimension.
  • the method includes a fourth method step 240 for providing the second output vector.
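The analysis steps 210 to 240 can be sketched, for example, as follows; the concrete transformation matrices, the tanh nonlinearity and the predetermined target dimension of 2 are assumptions for illustration only. In a real application the matrices would come from a network trained as in FIG. 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stand-ins for the trained hidden layers (step 220).
layers = [rng.normal(size=(8, 16)), rng.normal(size=(4, 8)), rng.normal(size=(2, 4))]
TARGET_DIM = 2   # assumed predetermined dimension for the output vector

def analyze(second_input_vector):
    v = second_input_vector            # becomes the second processing vector
    for A in layers:                   # step 230: each layer reduces the dimension
        v = np.tanh(A @ v)
        if v.shape[0] <= TARGET_DIM:   # predetermined dimension reached
            break
    return v                           # step 240: second output vector

second_output_vector = analyze(rng.normal(size=16))
print(second_output_vector.shape)      # (2,)
```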
  • a processor is specially adapted to execute program instructions in such a way that the processor performs functions to realize the method according to the invention, or to implement at least one of the steps of the method according to the invention.
  • the method explained in FIG. 1 and/or FIG. 2 is particularly suitable for determining, during an operation, in particular on malignant tumors, whether preferably all affected cells were actually removed from the affected area.
  • This has the advantage, in particular, of preventing the tumor from spreading or re-forming.
  • This is of particularly decisive importance when the tumor occurs in critical tissue, that is, when the removal of healthy cells can cause permanent damage to the patient (e.g. with brain tumors).
  • the correct classification of the boundaries between healthy and affected cells is crucial.
  • imaging diagnostic procedures that are used in the run-up to surgery can make an important contribution to surgical planning.
  • however, these imaging diagnostic procedures are limited in the imaging and identification of the tumor during surgery.
  • during surgery, for example, NIR (near-infrared) spectra of the tissue can be recorded.
  • the recorded spectra can be used as (second) input vectors for automated differentiation of the different cell types.
  • the invention thus provides the surgeon with additional decision support in the demarcation between tumor cells and non-tumor cells (or tissue).
  • differentiation of tumor cells and healthy tissue from recorded NIR spectra is difficult with conventional methods, as the distinguishing features are difficult to identify in the NIR spectra.
  • the output vector can be brought into a predetermined form with the aid of a corresponding configuration of the deep neural network. It is possible, for example, to bring the output vector into the form of a probability statement about the class membership of the measured sample, for example healthy or diseased tissue.
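A common way to obtain such a probability statement, shown here purely as an assumed example and not prescribed by the description, is to normalize the raw scores of the output layer with a softmax function:

```python
import math

def softmax(scores):
    """Turn raw output-layer scores into a probability statement."""
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for the two classes "healthy" and "diseased" tissue.
p_healthy, p_diseased = softmax([2.0, 0.5])
print(p_healthy > p_diseased, round(p_healthy + p_diseased, 6))  # True 1.0
```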
  • the trained deep neural network can then be applied, in particular as an input/output relation, to second input vectors, as explained in FIG. 2, preferably in a real-time implementation using suitable software.
  • the architecture of the deep neural network, in particular the plurality of consecutive hidden layers for modeling the training system, allows a seamless extension of the classification task to other tissue types. It would be conceivable, for example, to additionally introduce a classification of bone material or fatty tissue.
  • the methods according to the invention are in particular able to differentiate the spectra of healthy cells and tumor cells whose distinguishing features are particularly difficult to identify.
  • conventional clustering methods based on similarity analyzes are not suitable for this.
  • FIG. 3 a shows a data structure of a deep neural network of a further exemplary embodiment of the invention, with FIG. 3 a in particular representing a possible architecture of the deep neural network.
  • this architecture of a deep neural network can be used by the method according to the invention from FIG. 1 or the method according to the invention from FIG. 2.
  • Figure 3a shows an input layer E for inputting an input vector, a plurality of consecutive hidden layers, and an output layer O for outputting an output vector.
  • the plurality of hidden layers comprises a total of four layers, in particular a first hidden layer VS1, a second hidden layer VS2, a third hidden layer VS3 and a fourth hidden layer VS4.
  • in the first hidden layer VS1, a first data transformation, for example a data transformation A, of the first processing vector is performed to reduce its dimension.
  • the first hidden layer VS1 analyzes the dimension-reduced first processing vector and extracts from it, by means of a function A, for example, features y. These features y are then compared with the already extracted features of a predetermined first output vector, and on the basis of the determined differences (also called the error) the neurons of the first hidden layer VS1 are configured.
  • additionally, the dimension-reduced first processing vector is transmitted to the subsequent hidden layer (the second hidden layer VS2).
  • in the second hidden layer VS2, a further dimension reduction of the dimension-reduced first processing vector transmitted from the first hidden layer VS1 is performed by means of a first data transformation, for example a data transformation B.
  • the second hidden layer VS2 analyzes the multiply (twice) dimension-reduced first processing vector and again extracts from it, by means of a function B, for example, features y. These features y are then compared with further already extracted features of the predetermined first output vector, and on the basis of the determined differences the neurons of the second hidden layer VS2 are configured.
  • the multiply (four times) dimension-reduced first processing vector is in particular passed to the output layer O, compared with the first output vector, and any differences between the multiply dimension-reduced first processing vector and the first output vector are determined.
  • the neurons of the output layer, or the output layer itself, can then be configured in accordance with the determined differences.
  • An analysis of a second input vector differs in particular in that, for example, the features are extracted in the hidden layers without, for example, undertaking a further configuration of the hidden layers or determining an error between the extracted features and a predetermined first output vector.
  • for the analysis, the extracted features can, for example, be included in the output vector, or only the extracted features of the last hidden layer, in this exemplary embodiment the fourth hidden layer VS4, are considered for the output vector. It is also conceivable that, for example, a multiply dimension-reduced processing vector is transmitted to the output layer as the output vector when, in particular, a predetermined dimension of the multiply dimension-reduced processing vector has been reached.
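The forward pass through the four hidden layers VS1 to VS4 of FIG. 3a, with a feature vector y extracted at each stage, can be sketched as follows; the concrete widths (16 down to 2), the tanh nonlinearity and the random matrices are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed widths: input, then the outputs of VS1, VS2, VS3 and VS4.
widths = [16, 12, 8, 4, 2]
transforms = [rng.normal(0.0, 0.3, (o, i)) for i, o in zip(widths, widths[1:])]

def extract_features(x):
    """Return the features y extracted by each hidden layer VS1..VS4."""
    features, v = [], x
    for A in transforms:
        v = np.tanh(A @ v)     # first data transformation: dimension reduction
        features.append(v)
    return features

ys = extract_features(rng.normal(size=16))
print([y.shape[0] for y in ys])   # [12, 8, 4, 2]: stepwise dimension reduction

# For the output vector one may, for example, consider only the features
# of the last hidden layer (here VS4):
output_vector = ys[-1]
```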
  • for training the deep neural network or for analyzing input vectors, the hidden layers preferably transmit the input vectors or the processing vectors only to the directly adjacent subsequent layer.
  • the input layer can either pass the input vectors directly to a predetermined hidden layer, for example the first hidden layer VS1, or to a suitable hidden layer.
  • the architecture of the deep neural network from FIG. 3a can additionally be expanded. This is illustrated, for example, in FIG. 3b.
  • in addition to the configuration already explained in FIG. 3a, in FIG. 3b a reconstructed processing vector can be calculated from the respective dimension-reduced processing vector of a hidden layer, in each case by means of a second data transformation for the corresponding hidden layer.
  • the second data transformation attempts to undo the dimension reduction for the particular hidden layer.
  • the original processing vector, i.e. the processing vector for the corresponding hidden layer that has not yet been dimension-reduced, is compared with the reconstructed processing vector, and any differences or errors between them are determined.
  • the determined error can then be used, for example, to improve the respective hidden layer for which the processing vector was dimension-reduced, in that the respective hidden layer is adjusted or configured on the basis of the error; in particular, the neurons of the corresponding hidden layer can be configured and the weights of corresponding connections of neurons adjusted. This readjustment or configuration can also be carried out for the subsequent hidden layer.
  • In detail, FIG. 3b shows how, from the dimension-reduced processing vector that was dimension-reduced for the first hidden layer VS1 by means of the first data transformation, the original processing vector T1, i.e. the processing vector that has not yet been dimension-reduced for the first hidden layer VS1, is reconstructed by means of a second data transformation, for example a data transformation AT. Subsequently, an error ld1 between the original processing vector and the reconstructed processing vector is determined. On the basis of this error, the relevant hidden layer, i.e. the first hidden layer VS1, or the subsequent hidden layer, i.e. the second hidden layer VS2, can be configured more precisely and the deep neural network can be better trained.
  • in this way, the accuracy with which the classification task is solved can be improved. In other words, in particular the error of the output vector is minimized as far as possible.
  • This error finding and improving the configuration of the hidden layers is then applied analogously to the other hidden layers.
  • the method can be applied to all hidden layers or, for example, applied to only a part of the hidden layers.
  • for the second hidden layer VS2, a second data transformation, such as the data transformation BT, can be used analogously.
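The reconstruction mechanism of FIG. 3b can be sketched as follows. The names AT and BT suggest transposed matrices, so this sketch assumes, as in a tied-weight autoencoder, that the second data transformation starts from the transpose of the first; the linear transformations, sizes and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

A = rng.normal(0.0, 0.3, (8, 16))   # first data transformation of VS1
D = A.T.copy()                      # second data transformation ("AT"), assumed
                                    # to start as the transpose of A
X = rng.normal(size=(50, 16))       # original processing vectors for VS1

def reconstruction_error():
    R = (X @ A.T) @ D.T             # reduce the dimension, then reconstruct
    return float(np.mean((X - R) ** 2))

before = reconstruction_error()     # error ("ld1") before adjusting VS1
lr = 0.01
for _ in range(100):                # configure VS1 on the basis of the error
    Z = X @ A.T                     # dimension-reduced processing vectors
    E = Z @ D.T - X                 # difference to the original vectors
    D -= lr * (E.T @ Z) / len(X)    # adjust the reconstruction weights
    A -= lr * ((E @ D).T @ X) / len(X)  # adjust the weights of VS1 itself
after = reconstruction_error()
print(after < before)               # the reconstruction error shrinks
```

Minimizing this per-layer reconstruction error is one way to realize the "more precise configuration" of a hidden layer, in addition to the output error of FIG. 3a.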
  • FIG. 4 shows a configuration device of a further embodiment of the invention.
  • FIG. 4 shows a configuration device for the computer-aided configuration of a deep neural network on the basis of a training system.
  • the configuration device comprises a first provisioning module 410, a second provisioning module 420 and a first training module 430, which are communicatively connected to one another via a first bus 405.
  • the configuration device may additionally comprise a further component or a plurality of further components, such as a processor, a memory unit, an input device, in particular a computer keyboard or a computer mouse, or a monitor.
  • the corresponding component(s) may, for example, be communicatively connected to the other modules of the configuration device via the first bus 405.
  • the first provisioning module 410 is configured to provide training data having predetermined first input vectors and predetermined first output vectors for the training system.
  • the first provisioning module 410 can be implemented, for example, by means of the processor, the memory unit and a first program component, which provide the training data, for example by executing program instructions.
  • the second provisioning module 420 is set up to provide the deep neural network, wherein the deep neural network is a feed-forward network.
  • the feed-forward network comprises an input layer for inputting the first input vectors and a plurality of consecutive hidden layers for modeling the training system.
  • the second provisioning module 420 can be implemented, for example, by means of the processor, the memory unit and a second program component, which provide the deep neural network, for example by executing program instructions.
  • the first training module 430 is set up for training the deep neural network using the training data, wherein the first input vectors are each transmitted via the input layer to one of the hidden layers as first processing vectors, and the first processing vectors are each transmitted between the hidden layers. Additionally, the first training module 430 is set up to perform a respective first data transformation of the transmitted first processing vectors for each hidden layer, wherein in the respective first data transformation a dimension reduction of the first processing vectors takes place for the respective hidden layer.
  • furthermore, the first training module 430 is set up to configure the hidden layers on the basis of the dimension-reduced first processing vectors and the respectively corresponding first output vectors.
  • the first training module 430 can be implemented, for example, by means of the processor and a third program component which, for example, train the deep neural network by executing program instructions.
  • FIG. 5 shows an analysis device of another embodiment of the invention.
  • FIG. 5 shows an analysis device for computer-aided analysis of at least one second input vector of a target system.
  • the analysis device comprises a third provisioning module 510, a fourth provisioning module 520, a first evaluation module 530 and a fifth provisioning module 540, which are communicatively connected to one another via a second bus 505.
  • the analysis device can, for example, additionally comprise a further component or a plurality of further components, such as a processor, a memory unit, an input device, in particular a computer keyboard or a computer mouse, or a monitor.
  • the corresponding component(s) may be communicatively coupled to the other modules of the analysis device via the second bus 505.
  • the third provisioning module 510 is configured to provide the second input vector to the target system.
  • the third provisioning module 510 can be implemented, for example, by means of the processor, the memory unit and a fourth program component, which provide the second input vector, for example by executing program instructions.
  • the fourth provisioning module 520 is set up for providing a trained deep neural network, wherein the deep neural network is a feed-forward network.
  • the feed-forward network includes an input layer for inputting the second input vector for the target system, a plurality of consecutive hidden layers modeling the target system, and an output layer for outputting a second output vector for the target system.
  • the fourth provisioning module 520 can be implemented, for example, by means of the processor, the memory unit and a fifth program component, which provide the trained deep neural network, for example by executing program instructions.
  • the first evaluation module 530 is set up to evaluate the second input vector, wherein the second input vector is in each case transmitted via the input layer to one of the hidden layers as a second processing vector and the second processing vector is in each case transferred between the hidden layers.
  • the first evaluation module 530 is set up so that a respective first data transformation of the transmitted second processing vector is performed for each hidden layer and that in the respective first data transformation a dimension reduction of the second processing vector takes place for a particular hidden layer.
  • the first evaluation module 530 is configured such that the second output vector is determined using a dimension-reduced second processing vector.
  • the first evaluation module 530 can be implemented, for example, by means of the processor and a sixth program component, which analyze the second input vector by means of the trained deep neural network, for example by executing program instructions.
  • the fifth provisioning module 540 is configured to provide the second output vector.
  • the fifth provisioning module 540 can be implemented, for example, by means of the processor, the memory unit and a seventh program component, which provide the second output vector, for example by executing program instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to a method for the computer-aided analysis of at least one second input vector of a target system. The method comprises a step (210) of providing the second input vector for the target system. The method comprises a step (220) of providing a trained deep neural network, the deep neural network being a feed-forward network which comprises an input layer for inputting the second input vector for the target system, a plurality of consecutive hidden layers modeling the target system, and an output layer for outputting a second output vector for the target system. The method comprises a step (230) of evaluating the second input vector, the second input vector being transmitted in each case via the input layer to one of the hidden layers as a second processing vector while the second processing vector is transmitted between the hidden layers. For each hidden layer, a corresponding first data transformation of the transmitted second processing vector is performed, in which a dimension reduction of the second processing vector is carried out for the respective hidden layer, the second output vector being determined using a dimension-reduced second processing vector. The method comprises a step (240) of providing the second output vector.
PCT/EP2016/067627 2016-07-25 2016-07-25 Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible Ceased WO2018019355A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP16751207.8A EP3472759A1 (fr) 2016-07-25 2016-07-25 Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible
PCT/EP2016/067627 WO2018019355A1 (fr) 2016-07-25 2016-07-25 Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible
CN201680089552.1A CN109716359A (zh) 2016-07-25 2016-07-25 用于计算机辅助地分析目标系统的至少一个第二输入矢量的方法和设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/067627 WO2018019355A1 (fr) 2016-07-25 2016-07-25 Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible

Publications (1)

Publication Number Publication Date
WO2018019355A1 true WO2018019355A1 (fr) 2018-02-01

Family

ID=56684600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/067627 Ceased WO2018019355A1 (fr) 2016-07-25 2016-07-25 Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible

Country Status (3)

Country Link
EP (1) EP3472759A1 (fr)
CN (1) CN109716359A (fr)
WO (1) WO2018019355A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004330A (zh) * 2021-09-26 2022-02-01 苏州浪潮智能科技有限公司 一种基于特征值补全的推荐系统及方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6477516B1 (en) * 2000-05-16 2002-11-05 Intevep, S.A. System and method for predicting parameter of hydrocarbon with spectroscopy and neural networks
DE102007001026A1 (de) * 2007-01-02 2008-07-03 Siemens Ag Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems
DE102011081197A1 (de) * 2011-08-18 2013-02-21 Siemens Aktiengesellschaft Verfahren zur rechnergestützten Modellierung eines technischen Systems
WO2015054666A1 (fr) * 2013-10-10 2015-04-16 Board Of Regents, The University Of Texas System Systèmes et procédés d'analyse quantitative d'images histopathologiques au moyen de schémas d'ensemble de multi-classificateurs
EP3016033A1 (fr) * 2014-10-29 2016-05-04 Ricoh Company, Ltd. Systeme de traitement de l'information, appareil de traitement de l'information et procede de traitement de l'information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6477516B1 (en) * 2000-05-16 2002-11-05 Intevep, S.A. System and method for predicting parameter of hydrocarbon with spectroscopy and neural networks
DE102007001026A1 (de) * 2007-01-02 2008-07-03 Siemens Ag Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems
DE102011081197A1 (de) * 2011-08-18 2013-02-21 Siemens Aktiengesellschaft Verfahren zur rechnergestützten Modellierung eines technischen Systems
WO2015054666A1 (fr) * 2013-10-10 2015-04-16 Board Of Regents, The University Of Texas System Systèmes et procédés d'analyse quantitative d'images histopathologiques au moyen de schémas d'ensemble de multi-classificateurs
EP3016033A1 (fr) * 2014-10-29 2016-05-04 Ricoh Company, Ltd. Systeme de traitement de l'information, appareil de traitement de l'information et procede de traitement de l'information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004330A (zh) * 2021-09-26 2022-02-01 苏州浪潮智能科技有限公司 一种基于特征值补全的推荐系统及方法
CN114004330B (zh) * 2021-09-26 2023-11-14 苏州浪潮智能科技有限公司 一种基于特征值补全的推荐系统及方法

Also Published As

Publication number Publication date
EP3472759A1 (fr) 2019-04-24
CN109716359A (zh) 2019-05-03

Similar Documents

Publication Publication Date Title
DE102015212953B4 (de) Künstliche neuronale Netze zur Klassifizierung von medizinischen Bilddatensätzen
DE112018002822T5 (de) Klassifizieren neuronaler netze
EP3540632B1 (fr) Procédé pour la classification des échantillons tissulaires
EP3282271A1 (fr) Procede destine a regler et/ou a adapter une valeur de parametre d'au moins un parametre d'un protocole de resonance magnetique pour au moins une sequence de resonance magnetique
DE112019005308T5 (de) Erzeugungsvorrichtung, -verfahren und -programm für gewichtetes bild, bestimmerlernvorrichtung, -verfahren und -programm, bereichsextraktionsvorrichtung, -verfahren und -programm und bestimmer
DE102014224656A1 (de) Verfahren und Vorrichtung zum Segmentieren eines medizinischen Untersuchungsobjekts mit quantitativen MR-Bildgebungsmethoden
DE102021124256A1 (de) Mobile ki
DE112011102209T5 (de) Datenstörung für eine Einrichtung zur Waferinspektion oder -Metrologie
WO2018019355A1 (fr) Procédé et dispositif d'analyse assistée par ordinateur d'au moins un second vecteur d'entrée d'un système cible
WO2010054646A2 (fr) Procédé de production de données d'image pour une représentation d'un instrument, ainsi que système correspondant
EP3126819B1 (fr) Dispositif et procédé servant à analyser un échantillon afin de diagnostiquer une tumeur de la prostate
DE102021204550A1 (de) Verfahren zum Erzeugen wenigstens eines Datensatzes zum Trainieren eines Algorithmus maschinellen Lernens
WO2001080235A1 (fr) Procede permettant de determiner un enregistrement caracteristique pour un signal de donnees
DE102011075738A1 (de) Verfahren und Vorrichtung zur Ermittlung eines robusten Ablationsprogramms
EP3605404A1 (fr) Procédé et dispositif d'entraînement d'une routine d'apprentissage mécanique permettant de commander un système technique
DE102014224916B4 (de) Verfahren zur rechnergestützten Analyse eines oder mehrerer Gewebeschnitte des menschlichen oder tierischen Körpers
EP3701428B1 (fr) Procédé et dispositif destinés à améliorer la robustesse d'un système d'apprentissage par machine
DE19608734C1 (de) Verfahren zur Klassifikation einer meßbaren Zeitreihe, die eine vorgebbare Anzahl von Abtastwerten aufweist, insbesondere eines elektrischen Signals, durch einen Rechner und Verwendung des Verfahrens
DE102020106857A1 (de) Mikroskopiesystem und verfahren zum verarbeiten von mikroskopbildern
WO2021160208A1 (fr) Dispositif et procédé d'examen d'échantillons métalliques
DE112018004558T5 (de) Modellieren von genetischen krankheiten
WO2025008219A1 (fr) Procédé mis en oeuvre par ordinateur pour réduire des fichiers journaux entachés d'erreur à partir d'un système
DE102023123907A1 (de) Verfahren zur Erkennung von Instrumenten in Bilddaten und Vorrichtung zur Analyse von Bilddaten
DE102023116019A1 (de) Oszilloskop mit einem hauptkomponentenanalysator
DE10317717B4 (de) Verfahren zur Diagnose von Erkrankungen unter Verwendung von Indikatorstoffen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16751207

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016751207

Country of ref document: EP

Effective date: 20190118

NENP Non-entry into the national phase

Ref country code: DE