WO2020193481A1 - Procédé et dispositif d'apprentissage et de réalisation d'un réseau neuronal artificiel - Google Patents
Procédé et dispositif d'apprentissage et de réalisation d'un réseau neuronal artificiel
- Publication number
- WO2020193481A1 PCT/EP2020/058017 EP2020058017W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neurons
- artificial neural
- layer
- specific
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0495—Quantised networks; Sparse networks; Compressed networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- the invention relates to a method and a device for training and for producing an artificial neural network.
- One of the difficulties is to choose an architecture of the network structure of the artificial neural network that achieves sufficient performance for the problem at hand.
- Another difficulty is to choose an architecture of the network structure of the artificial neural network so that its use on a system with limited resources, such as memory or computing capacity, is possible.
- On a control unit in a vehicle, a smartphone or a tablet, it is essential, for example, to choose the number of neurons per network layer as low as possible in order to create a small artificial neural network that still has sufficient performance for the underlying problem.
- The use of artificial neural networks to solve the same problem on different devices therefore requires different device-specific models with different network structures, which under the given boundary conditions represent an optimal compromise between resource requirements and performance.
- In a known approach, a previously specified number of neurons is switched off, for example, in each layer of a device-unspecific model of the artificial neural network. However, this means that no device-specific models can be trained and produced for different devices.
- A method for producing a plurality of device-specific artificial neural networks provides that, in a first iteration, depending on a general network architecture that comprises a plurality of neurons, neurons of a device-specific network architecture for one of the plurality of device-specific artificial neural networks are determined stochastically; a training data point is propagated forward by the neurons of the device-specific network architecture for an output of the artificial neural network, the weights of the neurons of the device-specific network architecture are determined for a second iteration by backpropagation as a function of the output, and the weights of the other neurons are retained for the second iteration.
- The model training for the several devices on which the function of the artificial neural network is to be executed is thus performed iteratively.
- A network architecture to be optimized is determined, in particular stochastically, for each training data point. The device-specific artificial neural networks produced in this way have an optimized network architecture for a given problem, taking into account boundary conditions due to resource restrictions.
- A Pareto-optimal solution with regard to the size of the artificial neural network versus its performance can thus be selected without the artificial neural network having to be retrained for this purpose. Alternatively, retraining can be carried out in order to further increase performance.
- In order to take the resource restrictions into account, for example by specifying the number of neurons for a certain device-specific artificial neural network, it is systematically determined how many neurons are required per layer in order to achieve a desired performance on the device.
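- As an illustration of this systematic determination, the following is a minimal sketch in Python. It assumes a hypothetical helper eval_fn(net, widths) that returns validation accuracy when the trained general network is restricted to the given per-layer neuron counts, and it simply sweeps a common width upward until a target accuracy is reached; none of these names appear in the original text.

```python
def smallest_sufficient_widths(general_net, eval_fn, max_neurons, target_acc):
    """Hypothetical search for the per-layer neuron counts of a device-specific
    network: evaluate the trained general network at increasing widths and
    return the smallest configuration that reaches the target accuracy.

    max_neurons: list of the maximum neuron counts m_i fixed before training.
    eval_fn:     assumed callback returning validation accuracy for a width setting.
    """
    for n in range(1, max(max_neurons) + 1):
        widths = [min(n, m) for m in max_neurons]   # common width, capped per layer
        if eval_fn(general_net, widths) >= target_acc:
            return widths                            # smallest sufficient configuration
    return list(max_neurons)                         # fall back to the full architecture
```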
- the training process for the various device-specific artificial neural networks is not much more complex than the normal training of an artificial neural network.
- A plurality of device-specific artificial neural networks are preferably produced depending on the general network architecture. For each training data point of a plurality of training data points, the neurons that define the device-specific artificial neural network to be trained in this training data point are selected; training data are propagated forward by these neurons, the weights of the neurons of the device-specific artificial neural network to be trained are determined by backpropagation as a function of the output, and the weights of the other neurons are maintained.
- The neurons that form the neurons for a device-specific layer of the device-specific network architecture are preferably selected from a layer of the general network architecture. A subset of the neurons of the entire artificial neural network thus forms the neurons of the same layer of the device-specific artificial neural network.
- The neurons are preferably selected from the general network architecture such that the neurons used in the general network architecture and in the device-specific network architecture form a group of neurons which defines the same function in the device-specific network architecture as in the general network architecture. In this way, logical groups of neurons that form, e.g., filters in a convolutional neural network are preserved.
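- A minimal PyTorch sketch of this grouping idea follows; it assumes that the ordinal number of a filter is simply its output-channel index plus one, so that keeping the first u filters keeps whole logical groups (entire feature maps) rather than individual weights. The layer sizes are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder convolutional layer: each of the 16 output channels is one filter.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

def masked_conv(x, u):
    """Use only the filters whose ordinal number (channel index + 1) is <= u."""
    y = conv(x)                                      # shape (batch, 16, H, W)
    mask = torch.zeros(y.shape[1], device=y.device)
    mask[:u] = 1.0
    return y * mask.view(1, -1, 1, 1)                # zero out whole feature maps

x = torch.randn(2, 3, 32, 32)
out = masked_conv(x, u=5)                            # only the first 5 filters contribute
```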
- a priority is preferably defined for a neuron, the weights for the neuron being determined as a function of the priority in the backpropagation or remaining unchanged.
- the priority gives the neurons a natural order.
- The training process is designed in such a way that neurons with a higher priority are used more frequently in the training process than those with a lower priority. Put simply, the network is conditioned in the training process to encode important information in the neurons of higher priority. In other words, the lower the priority of a neuron, the more specific the information encoded in it.
- the neuron is preferably assigned a parameter which defines the priority, the parameter being compared with a threshold value in a comparison, and the weights for the neuron being determined as a function of a result of the comparison or remaining unchanged.
- This parameter can be easily evaluated during training based on the comparison with the threshold value.
- An ordinal number which characterizes the priority is preferably defined for each of the plurality of neurons, with only the weights of those neurons being determined in the backpropagation whose ordinal number is below an upper limit for the ordinal numbers.
- the ordinal number gives the neurons a natural order.
- the training process is designed in such a way that neurons with a smaller ordinal number are used more frequently in the training process than those with a high ordinal number.
- Important information is thus encoded in the lower order neurons. This means that the higher the order of a neuron, the more specific the information encoded there. This means that important information is only encoded in the weights of the neurons that correspond to a low ordinal number.
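- To make the phrase "used more frequently" concrete: if, as described for the training method further below, the cut-off value u is sampled uniformly from the ordinal numbers 1 to m of a layer, the probability that a neuron participates in a given update decreases linearly with its ordinal number (a simple consequence of the uniform sampling, not something stated explicitly in the text):

$$P(\text{neuron with ordinal number } n \text{ participates}) \;=\; P(u \ge n) \;=\; \frac{m - n + 1}{m}.$$

- The neuron with ordinal number 1 is therefore updated in every batch, while the neuron with ordinal number m is updated in only a fraction 1/m of the batches, which is why the most important information accumulates in the low-ordinal weights.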
- Neurons that functionally belong together are preferably defined by the same ordinal number. Logical groups of neurons that fulfill a function in the network architecture can thus be taken into account.
- For each layer, a separate threshold value or a separate upper limit is preferably defined which applies only to the neurons of this layer.
- the neurons are given their own natural order in each layer.
- the general network architecture is preferably designed as a deep artificial neural network, in particular with at least one layer, which is designed as a fully connected layer or as a convolutional layer. This is a preferred network architecture; the network architecture can additionally or alternatively comprise other layer types.
- A method for producing a classification device, in particular for a robot, a tool or an at least partially autonomous vehicle, provides that a device-specific artificial neural network is determined according to the method of one of the preceding claims, and that the network architecture of the device-specific artificial neural network thus determined is transferred to the classification device without further training. Separate retraining is not necessary, but could lead to better predictions. This enables the same functionality to be implemented efficiently on devices that provide different resources for the functionality.
- A method for training a plurality of device-specific artificial neural networks comprising a plurality of layers is also provided.
- In a first iteration, for a first training data point of a first batch from a plurality of batches of an epoch of training data, the method for training comprises the following steps: stochastically determining a value for a layer of the plurality of layers, the value being determined as a function of a maximum number of neurons for this layer; determining a plurality of neurons from the neurons of the layer and depending on the value; forward propagation of the batch by the plurality of neurons; determining an output of the artificial neural network or of the layer; backward propagation, in particular with calculation of a gradient of a deviation of the output from an expected output. For a second iteration, the weights of the plurality of neurons are determined as a function of the output, and the weights of other neurons of the layer remain unchanged for the second iteration for a second batch from the plurality of batches.
- a batch describes a subset of the training data on the basis of which the gradient for the update of the weights is calculated.
- a batch contains training data points.
- Network architectures that differ in the number of neurons in the layers are trained synchronously.
- the value is preferably determined to be positive and less than or equal to the maximum number of neurons.
- The methodology by which the values are selected is stochastic; the distribution of the values is not fixed and can be specified by the user. In particular, depending on the maximum number of neurons, the value can be sampled uniformly from an interval between one and the maximum number of neurons. Provision is preferably made for a maximum number of neurons to be specified for each of the layers, in particular before the start of training. This means that device-specific requirements with regard to resources are taken into account.
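- A minimal sketch of this sampling step, assuming the uniform choice mentioned above; the layer maxima are placeholder values:

```python
import random

def sample_widths(max_neurons):
    """Draw one value u_i per layer, uniformly from {1, ..., m_i}.

    Any other distribution could be substituted here; the method only
    requires the choice to be stochastic.
    """
    return [random.randint(1, m) for m in max_neurons]

# Example: three layers with device-driven maxima m_1 = 64, m_2 = 64, m_3 = 32
print(sample_widths([64, 64, 32]))
```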
- Each neuron of the artificial neural network is preferably assigned an ordinal number, the ordinal number assigned to a neuron being compared with the value in a comparison, and it being determined depending on the result of the comparison whether a neuron is part of a device-specific network architecture.
- The training process then takes place as with a fixed, predetermined network architecture.
- FIG. 1 shows a schematic illustration of an artificial neural network
- FIG. 2 shows a schematic illustration of steps in a method for training the artificial neural network
- FIG. 3 shows a schematic representation of steps in a method for producing a device-specific artificial neural network.
- FIG. 1 shows a schematic illustration of an artificial neural network 100.
- The artificial neural network 100 comprises a general network architecture with an input layer 102, at least one hidden layer 104 and an output layer 106.
- a hidden layer 104 is shown in FIG. 1, but several hidden layers can be provided.
- The neurons that are arranged in a hidden layer are assigned ordinal numbers. In the example, one of the ordinal numbers 1, 2, 3, 4, 5, 6, 7, 8 is assigned to each neuron in the hidden layer 104. The same ordinal number can also be assigned to groups of neurons. In the example, the ordinal numbers indicate a priority.
- the general network architecture is designed, for example, as a deep artificial neural network.
- The at least one hidden layer 104 is designed, for example, as a fully connected layer or as a convolutional layer. This is a preferred general network architecture; the general network architecture may additionally or alternatively include other layer types.
- FIG. 2 shows a schematic representation of steps in a method for training the artificial neural network.
- The method for training the artificial neural network is based on the assumption that the artificial neural network comprises a plurality of layers L1, ..., Lk.
- An ordinal number is assigned to each neuron of the artificial neural network.
- A plurality of epochs of training data, arranged in batches, are used for training.
- Each neuron in a layer Li is assigned a unique ordinal number.
- The ordinal numbers start at 1 and are, for example, assigned consecutively.
- The neurons of a filter can form a group; this can be achieved by assigning the same ordinal number to the neurons in a group.
- A maximum number of neurons mi can be provided for each of the layers Li before the start of the training.
- The example provides for using at most this maximum number of neurons mi in each of the layers Li.
- Each neuron is assigned its ordinal number.
- the training process is carried out in the same way as with a fixed, predetermined network architecture.
- The artificial neural network is trained in a supervised manner. That is, input-output pairs (x; y) are available for training.
- The training data x are fed into the network and the output of the network, i.e. the prediction p(x), is compared with a desired result y using a so-called cost function.
- With the cost function C(p(x); y), which can be parameterized via a parameter set θ if necessary, a measure is given that indicates how far the prediction deviates from the ground truth. If the cost function is differentiable, the gradients of C with respect to the weights of the neural network can be determined for each training pair (x; y). The gradients can then be used to update the weights of the neural network according to a predetermined rule.
- This process is referred to below as backpropagation.
- There are many forms of backpropagation.
- the gradients can be averaged over several training examples before the weights are updated.
- the update rule for updating the weights may also vary.
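- One common instance of such an update rule, given here only as an illustrative example with learning rate η (the method itself does not prescribe it), is plain stochastic gradient descent:

$$w \;\leftarrow\; w - \eta \,\frac{\partial C\big(p(x);\, y\big)}{\partial w}.$$

- Averaging the gradient over the training pairs of a batch before applying this rule yields the mini-batch variant described above.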
- regularizations can be applied and included in the cost function.
- the procedure described below is independent of the exact design of the weight optimization process, i.e. the methods described below are universal in this regard and independent of the relevant characteristics of the training process.
- an epoch with a large number of batches is selected in a step 202.
- a step 204 is then carried out.
- In step 204, a batch is selected from the plurality of batches of the epoch.
- a step 206 is then carried out.
- In step 206, a value ui is determined for one layer Li of the plurality of layers L1, ..., Lk.
- The value ui is determined for this layer Li as a function of a maximum number of neurons mi.
- The value ui is determined so that it is positive and less than or equal to the maximum number of neurons mi.
- The user can specify exactly how the values ui are selected. For example, ui is sampled uniformly from the interval ℕ ∩ [1; mi]. A step 208 is then carried out.
- In step 208, a plurality of neurons is determined from the neurons of the layer Li and depending on the value ui.
- Only those neurons are used in layer Li whose ordinal number is less than or equal to ui.
- The ordinal number that is assigned to a neuron is compared with the value ui in a comparison and, depending on the result of the comparison, it is determined whether a neuron is part of a device-specific network architecture.
- The device-specific network is therefore stochastic. For the plurality of neurons, forward propagation of the batch is carried out by the plurality of neurons to determine an output of the artificial neural network or of the layer Li. A step 210 is then carried out.
- In step 210, a deviation of the output from an expected output is determined in a backward propagation.
- A gradient is calculated in order to determine new weights for the plurality of neurons.
- a step 212 is then carried out.
- In step 212, only the weights of the plurality of neurons are determined depending on the output.
- A step 214 is then carried out.
- In step 214, it is checked whether a termination criterion is met. If that is not the case, step 202 is carried out.
- For each training data point, the forward pass and the associated backward pass thus update only the weights of the neurons used for this training data point.
- It can also be provided that the ordinal number is not assigned to individual neurons directly, but rather that a common ordinal number is assigned to a logical group of neurons which, for example, represent a filter in a convolutional network.
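- The following is a minimal PyTorch sketch of steps 202 to 214 for a small fully connected network. The dimensions and data are placeholders, and the sketch assumes plain SGD without momentum or weight decay, so that zeroing the activations of the unused neurons also zeroes the gradients of their weights; the optimizer step then leaves those weights unchanged, as required in step 212.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

M = [8, 8]                  # maximum numbers of neurons m_i per hidden layer (placeholders)
IN_DIM, OUT_DIM = 16, 4     # placeholder input and output dimensions

class GeneralNet(nn.Module):
    """General network architecture; neuron ordinal number = index + 1 per layer."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(IN_DIM, M[0])
        self.fc2 = nn.Linear(M[0], M[1])
        self.out = nn.Linear(M[1], OUT_DIM)

    @staticmethod
    def _mask(h, u):
        # Zero the activations of all neurons whose ordinal number exceeds u.
        mask = torch.zeros(h.shape[1], device=h.device)
        mask[:u] = 1.0
        return h * mask

    def forward(self, x, widths):
        h = self._mask(F.relu(self.fc1(x)), widths[0])   # step 208: use only u_1 neurons
        h = self._mask(F.relu(self.fc2(h)), widths[1])   # step 208: use only u_2 neurons
        return self.out(h)

net = GeneralNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)           # plain SGD (assumption, see above)

for epoch in range(3):                                     # step 202: select an epoch
    for _ in range(10):                                    # step 204: select a batch
        x = torch.randn(32, IN_DIM)                        # placeholder training batch
        y = torch.randint(0, OUT_DIM, (32,))
        widths = [int(torch.randint(1, m + 1, (1,))) for m in M]  # step 206: u_i in [1, m_i]
        loss = F.cross_entropy(net(x, widths), y)          # step 208: forward propagation, output
        opt.zero_grad()
        loss.backward()                                    # step 210: backward propagation
        opt.step()                                         # step 212: unused neurons keep their weights
```

- Because the mask zeroes whole activations before they reach the next layer, both the incoming weights of the unused neurons and the corresponding input columns of the following layer receive zero gradient. Optimizers with momentum or weight decay would need an explicit gradient mask instead, as sketched further below.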
- FIG. 3 shows a schematic representation of steps in a method for producing a device-specific artificial neural network.
- the method for producing the device-specific artificial neural network provides in a step 302 that the device-specific network architecture for the device-specific artificial neural network is determined as a function of the general network architecture. In particular, the number of neurons for a certain device-specific artificial neural network is systematically determined. The example defines how many neurons per layer are necessary to achieve the desired performance on a target device.
- a step 304 is then carried out.
- the neurons that are used for the device-specific network architecture are selected from the plurality of neurons as a function of the specified number.
- A number of neurons is selected from a layer of the general network architecture; these are the neurons for a device-specific layer of the device-specific network architecture.
- Neurons are selected from the general network architecture which, in the general network architecture and in the device-specific network architecture, form a group of neurons that defines the same function in the device-specific network architecture as in the general network architecture. In this way, logical groups of neurons that form, e.g., filters in a convolutional neural network are preserved.
- a priority is defined for the neurons in the example.
- the weights for a neuron are determined depending on the priority in the backpropagation or remain unchanged.
- the priority gives the neurons a natural order.
- the priorities and the training process are designed in such a way that neurons with a higher priority are used more frequently in the training process than those with a lower priority. This encodes important information in the weights belonging to neurons of higher priority.
- a parameter is assigned to the neuron in the example that defines the priority.
- the parameter is compared, for example, in a comparison with a threshold value which indicates the priority from which a neuron is to be selected for coding important information.
- The weights for a specific neuron are either updated or remain unchanged depending on a result of the comparison with the threshold value.
- an ordinal number is defined in particular as a parameter for each of the plurality of neurons.
- the ordinal number characterizes the priority.
- the threshold forms an upper limit.
- Backpropagation only determines the updates of the weights for the neurons whose ordinal number is below the upper limit for the ordinal numbers.
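- A minimal sketch of this explicit gradient masking follows, assuming one ordinal number per neuron, i.e. per row of the layer's weight matrix. Unlike the activation-masking trick in the training sketch above, zeroing the gradient rows directly also keeps the unused weights unchanged when the optimizer uses momentum or weight decay.

```python
import torch

def mask_gradients_by_ordinal(weight_grad, bias_grad, ordinals, upper_limit):
    """Zero the gradients of all neurons whose ordinal number is not below the
    upper limit; the following optimizer step then leaves their weights unchanged.

    ordinals: 1-D tensor with one ordinal number per neuron (per row of the
    weight matrix). The assignment of ordinals is an assumption of this sketch.
    """
    keep = (ordinals < upper_limit).float()
    return weight_grad * keep.view(-1, 1), bias_grad * keep

# Hypothetical usage after loss.backward(), before optimizer.step():
# layer.weight.grad, layer.bias.grad = mask_gradients_by_ordinal(
#     layer.weight.grad, layer.bias.grad, ordinals, upper_limit=5)
```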
- the ordinal number gives the neurons a natural order.
- the training process is designed in such a way that neurons with a smaller ordinal number are used more frequently in the training process than those with a high ordinal number.
- Neurons that functionally belong together are defined by the same ordinal number. This means that logical groups of neurons that fulfill a function in the network architecture can be taken into account.
- For each layer, a separate threshold value or a separate upper limit can be defined which applies only to the neurons of this layer.
- the neurons are given their own natural order in each layer.
- a step 306 is then carried out.
- In step 306, training data of a training data point are propagated forward by the neurons of the device-specific network architecture for an output of the artificial neural network.
- a step 308 is then carried out.
- In step 308, the weights of the neurons of the device-specific network architecture are determined by backpropagation as a function of the output. The weights of the other neurons are retained.
- Steps 302 to 308 are repeated to produce a plurality of device-specific artificial neural networks depending on the general network architecture.
- In this case, in step 304, the neurons are selected that define the device-specific artificial neural network to be trained in this training data point.
- In step 306, training data are in this case propagated forward by the neurons that define the device-specific artificial neural network to be trained in this training data point.
- In step 308, in this case only the weights of these neurons of the device-specific artificial neural network to be trained are determined by backpropagation as a function of the output. The weights of the other neurons are maintained.
- the selection of neurons is made before or after the batch.
- step 304 can be omitted during training with the batch.
- A method for producing a classification device provides that a device-specific artificial neural network is determined according to the method of one of the preceding claims.
- the network architecture of the device-specific artificial neural network determined in this way is then transferred to the classification device without further training.
- the classification device can in particular be used for a robot, a tool or an at least partially autonomous vehicle.
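- A minimal sketch of this transfer without retraining, assuming the GeneralNet from the training sketch above and hypothetical per-layer neuron counts n_1, n_2 chosen for the target device: the device-specific network is obtained simply by copying the weights of the first n_i neurons of each layer.

```python
import torch
import torch.nn as nn

def extract_subnet(general_net, widths):
    """Copy the first n_i neurons of each hidden layer of the trained general
    network into a smaller, device-specific network; no retraining is needed
    because training concentrated the important information in the
    low-ordinal neurons.
    """
    n1, n2 = widths
    sub = nn.Sequential(
        nn.Linear(general_net.fc1.in_features, n1), nn.ReLU(),
        nn.Linear(n1, n2), nn.ReLU(),
        nn.Linear(n2, general_net.out.out_features),
    )
    with torch.no_grad():
        sub[0].weight.copy_(general_net.fc1.weight[:n1])        # first n1 rows
        sub[0].bias.copy_(general_net.fc1.bias[:n1])
        sub[2].weight.copy_(general_net.fc2.weight[:n2, :n1])   # first n2 rows, n1 columns
        sub[2].bias.copy_(general_net.fc2.bias[:n2])
        sub[4].weight.copy_(general_net.out.weight[:, :n2])     # output layer keeps all rows
        sub[4].bias.copy_(general_net.out.bias)
    return sub

# Hypothetical device budget: 6 and 4 neurons in the two hidden layers
# device_net = extract_subnet(trained_general_net, widths=[6, 4])
```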
- One advantage is that the model is less strongly overfitted to the training data. Another advantage is that a model is found that has an optimal architecture for the underlying problem under restrictions such as storage space. Another advantage is that several models for different target platforms with different restrictions are trained at the same time, which leads to comparable behavior on comparable data.
- This approach can be used in any area in which neural networks are used, especially if the resources on the target platform are limited. This approach is of particular relevance when it comes to autonomous driving, in which neural networks are used on control units.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Feedback Control In General (AREA)
Abstract
The present invention relates to a method for producing a classification device, to a method for producing a plurality of device-specific artificial neural networks, and to a method for training a plurality of device-specific artificial neural networks which comprise a plurality of layers. In a first iteration for a first training data point of a first batch from a plurality of batches of an epoch of training data, the training method comprises the following steps: stochastically determining (206) a value for a layer of the plurality of layers, the value being determined as a function of a maximum number of neurons for this layer; determining (208) a plurality of neurons from the neurons of the layer and as a function of the value; forward propagation (208) of the batch by the plurality of neurons; determining (208) an output of the artificial neural network or of the layer; backward propagation (210), in particular with calculation of a gradient of the deviation between the output and an expected output. For a second iteration, the weights of the plurality of neurons are determined as a function of the output (212), and the weights of other neurons of the layer remain unchanged for the second iteration for a second batch from the plurality of batches.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102019204136.1 | 2019-03-26 | ||
| DE102019204136.1A DE102019204136A1 (de) | 2019-03-26 | 2019-03-26 | Verfahren und Vorrichtung für Training und Herstellung eines künstlichen neuronalen Netzes |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020193481A1 (fr) | 2020-10-01 |
Family
ID=69954060
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2020/058017 Ceased WO2020193481A1 (fr) | 2019-03-26 | 2020-03-23 | Procédé et dispositif d'apprentissage et de réalisation d'un réseau neuronal artificiel |
Country Status (2)
| Country | Link |
|---|---|
| DE (1) | DE102019204136A1 (fr) |
| WO (1) | WO2020193481A1 (fr) |
-
2019
- 2019-03-26 DE DE102019204136.1A patent/DE102019204136A1/de active Pending
-
2020
- 2020-03-23 WO PCT/EP2020/058017 patent/WO2020193481A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2000003355A2 (fr) * | 1998-07-08 | 2000-01-20 | Siemens Aktiengesellschaft | Reseau neuronal, et procede et dispositif pour l'entrainement d'un reseau neuronal |
| US20160217368A1 (en) * | 2015-01-28 | 2016-07-28 | Google Inc. | Batch normalization layers |
| DE202017106532U1 (de) * | 2016-10-28 | 2018-02-05 | Google Llc | Suche nach einer neuronalen Architektur |
| WO2019001649A1 (fr) * | 2017-06-30 | 2019-01-03 | Conti Temic Microelectronic Gmbh | Transfert de connaissance entre différentes architectures d'apprentissage profond |
Non-Patent Citations (2)
| Title |
|---|
| ARIEL GORDON ET AL: "MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks", 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 30 November 2017 (2017-11-30), pages 1586 - 1595, XP055603467, DOI: 10.1109/CVPR.2018.00171 * |
| THOMAS ELSKEN ET AL: "Simple And Efficient Architecture Search for Convolutional Neural Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 13 November 2017 (2017-11-13), XP081287784 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102022212902A1 (de) | 2022-11-30 | 2024-06-06 | Robert Bosch Gesellschaft mit beschränkter Haftung | Verfahren zum Trainieren eines künstlichen neuronalen Netzes |
| WO2024239104A1 (fr) * | 2023-05-19 | 2024-11-28 | Multicom Technologies Inc. | Systèmes et procédés d'entraînement de modèles d'apprentissage profond |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102019204136A1 (de) | 2020-10-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP2999998B1 (fr) | Méthode de détermination d'un modèle d'une grandeur de sortie d'un système technique | |
| DE202017102235U1 (de) | Trainingssystem | |
| DE202017102238U1 (de) | Aktorsteuerungssystem | |
| WO2019081241A1 (fr) | Procédé, dispositif et programme informatique pour créer un réseau neuronal profond | |
| EP4193135B1 (fr) | Procédé mis en oeuvre par ordinateur permettant la fourniture d'un processus de test destiné à des scénarios de trafic à tester | |
| EP3785177A1 (fr) | Procédé et dispositif de détermination d'une configuration d'un réseau neuronal | |
| EP0901658B1 (fr) | Procede d'optimisation d'un ensemble de regles floues au moyen d'un ordinateur | |
| EP0875808A2 (fr) | Système et méthode pour la modélisation de processus d'une installation technique | |
| WO2020187591A1 (fr) | Procédé et dispositif de commande d'un robot | |
| EP1327959B1 (fr) | Réseau neuronal pour modéliser un système physique et procédé de construction de ce réseau neuronal | |
| WO2020193481A1 (fr) | Procédé et dispositif d'apprentissage et de réalisation d'un réseau neuronal artificiel | |
| DE102019216973A1 (de) | Lernverfahren für neuronale netze basierend auf evolutionären algorithmen | |
| WO2019206776A1 (fr) | Procédé et dispositif de détermination de la configuration de réseau d'un réseau de neurones artificiels | |
| WO2020193294A1 (fr) | Procédé et dispositif destinés à commander de manière compatible un appareil avec un nouveau code de programme | |
| DE102019212912A1 (de) | Komprimieren eines tiefen neuronalen Netzes | |
| DE69313622T2 (de) | Speicherorganisationsverfahren für eine Steuerung mit unscharfer Logik und Gerät dazu | |
| WO1998008173A1 (fr) | Procede pour la production mecanisee automatique de documents de fabrication | |
| EP3736709A1 (fr) | Système de classification et procédé de génération distribuée des modèles de classification | |
| DE102020210795A1 (de) | Künstliches neuronales Netz | |
| DE102021109169A1 (de) | Verfahren zum Trainieren eines neuronalen Netzes | |
| DE102022112606B3 (de) | Computerimplementiertes Verfahren zur Kalibrierung eines technischen Systems | |
| DE102022205547A1 (de) | Verfahren zum Trainieren eines Convolutional Neural Networks | |
| DE102022115101A1 (de) | Automatisierter entwurf von architekturen künstlicher neuronaler netze | |
| DE102022207072A1 (de) | Verfahren zum Ermitteln einer optimalen Architektur eines künstlichen neuronalen Netzes | |
| DE202021103700U1 (de) | Vorrichtung zum Erzeugen eines künstlichen neuronalen Netzes |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20713624; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20713624; Country of ref document: EP; Kind code of ref document: A1 |