
EP1384199A2 - Method for determining competing risks - Google Patents

Method for determining competing risks

Info

Publication number
EP1384199A2
EP1384199A2 EP01999919A
Authority
EP
European Patent Office
Prior art keywords
time
learning
function
objective function
neurons
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01999919A
Other languages
German (de)
English (en)
Inventor
Ronald E. Kates
Nadia Harbeck
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP1384199A2
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Definitions

  • the invention relates to a method for determining competing risks after an initial event with the aid of systems capable of learning on the basis of data that has already been measured or can otherwise be objectified (training data).
  • Systems capable of learning, such as neural networks, are increasingly being used for risk assessment because they are able to recognize and represent complex, previously unknown relationships between collected factors and outcomes. This capability enables them to provide more reliable or more precise estimates of risk probabilities than conventional methods, which must assume a special form of the relationship, such as a linear dependency.
  • the factors of the data sets comprise a number of objectifiable parameters, on the values of which a person operating the learning system has no influence.
  • these parameters include, for example Age at the time of surgery, number of lymph nodes affected, laboratory value of the uPA factor, laboratory value of the PAI-1 factor, characteristic value for the tumor size, laboratory value of the estrogen receptor, laboratory value of the progesterone receptor.
  • the type of therapy actually used can be recorded as an indication, so that the relationship between therapy and outcome is also recognized.
  • the values are temporarily stored on a suitable storage medium and fed to the system capable of learning.
  • the individual items of information are usually subject to an uncertainty analogous to signal noise. From these noisy individual signals, it is the task of the learning-capable system to form refined signals that can lead to a risk assessment within the framework of a suitable probability representation.
  • a so-called “multilayer perceptron” (in the technical literature always abbreviated as "MLP") contains, for example, an input layer, a hidden layer, and an output layer.
  • the "hidden nodes” in the neural network perform the task of generating a signal for the probability of complex internal processes. They can therefore use the underlying, but not directly detectable, biological processes, which are ultimately decisive for the course of a disease will be, provide information.
  • Competing risks can also arise from the fact that a patient dies, for example, from a completely different illness or from a side effect of the treatment, so that the risk of the characteristic of interest to the doctor remains hidden.
  • an exclusive classification with a censoring rule can map the training data in such a way that, for each possible outcome, a neural network or a classification tree can be trained by recursive partitioning according to the state of the art. In the example with outcomes 1 to 3, one would have to train three completely independent neural networks or three different decision trees.
  • a problem with this use of the prior art is that the possible informative value of internal nodes with regard to one of the disease outcomes is lost for the detection of their informative value with regard to the other disease outcomes.
  • an internal biological process recognized by internal nodes in a neural network could contribute to several observable outputs, albeit with different weightings.
  • the biological "invasiveness" of a tumor is of different but significant importance for distant metastases or for local recurrences.
  • the independently trained networks must each independently "discover" the meaningfulness of an internal relationship represented by the node.
  • the object of the invention is to provide a method with which competing risks can be detected, identified and represented in their logical or causal context, in particular in such a way that the determination of a temporally variable statement is not impaired.
  • the method according to the invention can be used to assign suitable characteristic values to the competing risks through the system capable of learning. These characteristic values are intended to enable the calculation of the conditional probability per unit of time for the occurrence of the respective event (provided that none of the possible end events has occurred to date). “Suitable” characteristic values in the sense of the invention can have the property that a maximum of the statistical “likelihood” regarding all outputs is aimed for.
  • data of the initial event and a follow-up observation up to a predetermined time are used for the method for the training data sets or are objectively recorded in some other way.
  • the method according to the invention can thus also make it possible to use other characteristic values in the context of a trained, learnable system, as long as these characteristic values can be formed from the follow-up observations in a manner analogous to the statistical likelihood.
  • the other manifestations are excluded. In this way, one manifestation of a failure can preferably be taken into account.
  • $\theta$ denotes the parameters of the system capable of learning.
  • LS stands for “learnable system”.
  • $F_{LS}^{(k,x)}(\cdot)$ denotes the failure rate of the manifestation $k$, and $S_{LS}^{(k,x)}(t)$ the corresponding survival function.
  • a neural network is used as the learning system.
  • the above objective function L can, depending on P, take the following form.
  • the adaptive system performs recursive partitioning, where
  • the partitioning is carried out in such a way that the objective function is optimized which statistically takes these frequencies or probabilities into account.
  • the learnable system is preferably used in the context of a decision-making aid.
  • a therapy strategy can thus be determined, for example, in a medical application of the present invention.
  • FIG. 1 shows a representation of a neural network in an implementation as an MLP
  • FIG. 2 shows a Venn diagram of competing risks
  • Figure 3 is an illustration of a trained neural network with three competing risks.
  • the additional dimension of the output layer comprises at least two nodes
  • Each output node is assigned a signal
  • the individual signals are each assigned to a risk function with regard to the possible events.
  • the system capable of learning is trained by using the values of the total signals for all data sets as an objective function for the system
  • a system trained in this way supports the attending physician and the patient, for example, in deciding between several different therapeutic approaches by determining against which of the possible manifestations of the recurrence risk the therapy should be directed.
  • the goal of individualized patient prognosis with competing risks can be understood mathematically as approximating several functions $f_1(x), f_2(x), f_3(x), \ldots$ with the system capable of learning, here with a neural network: $NN_1(x), NN_2(x), \ldots$. More precisely, the neural network estimates the expected value $E(y_k \mid x)$.
  • in the exemplary embodiment, the neural network implemented as an MLP can first be represented schematically as in FIG. 1.
  • the upper neurons take in the raw patient characteristics (for primary breast cancer, for example uPA, PAI-1, number of affected lymph nodes, etc.) and form the input layer.
  • the middle neurons form the internal layer.
  • Several internal layers can also be provided. Each internal neuron processes the signals from the input neurons and passes on a signal. The mathematical relationship between the "inputs" to the internal neurons and their "outputs" is controlled by adjustable synaptic weights.
  • the lower neurons provide estimates for the desired parameters (e.g. expected value of survival) and form the output layer.
  • the architecture used in the embodiment is a classic multilayer feedforward network. Neurons are organized in layers as described above. Connectors exist in the embodiment as follows:
  • the activation function of the hidden layer is the hyperbolic tangent.
  • the invention can also be used using other activation functions such as the logistic function.
  • the factors are initially transformed univariately so that they lie in an interval of the order of 1.
  • the median $x_{\mathrm{Median}}$ is subtracted and the values are scaled with a factor $x_Q$: values above the median are scaled with the 75% quantile, values below the median with the 25% quantile.
  • the tanh function is then applied.
  • the input neurons have a static function and are therefore implemented as fields that pass on the transformed values.
  • the tanh function of equation (1a) can be seen as the activation function of the input layer.
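As a concrete reading of this transform, the following Python sketch applies the median shift, the quantile-based scaling, and the tanh squashing; using (q75 - median) above the median and (median - q25) below it is our interpretation of the description, and all names are illustrative:

```python
import numpy as np

def scale_input(x):
    """Univariate input transform as described above: subtract the median,
    scale by a quantile-based factor x_Q, then apply tanh.  Using
    (q75 - median) above the median and (median - q25) below it is our
    reading of the description."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    q25, q75 = np.percentile(x, [25, 75])
    scale = np.where(x >= med, q75 - med, med - q25)
    return np.tanh((x - med) / scale)

# Example with a skewed "lab value"-like factor:
raw = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.8, size=1000)
print(scale_input(raw)[:5])  # transformed values lie in (-1, 1)
```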
  • $w_{ih}$ is the weight of the connector from input neuron $i$ to hidden neuron $h$
  • $x_i(j)$ represents the (scaled) response of the $i$-th input neuron.
  • $b_h$ is the bias of hidden neuron $h$, which is mathematically optimized like any other weight of the network.
  • the nonlinear activation function $F_h$ is the hyperbolic tangent.
  • the signal $z_o$ is first generated: the bias $b_o$ of the neuron is subtracted, and the activation function of the output neuron $o$ is applied to this result.
  • the output $O_o(j)$ thus becomes
  • the activation function of the output layer is chosen as the identity function in the exemplary embodiment.
  • the total bias is not freely optimized, but is chosen so that the median signal of all output neurons is zero. This is possible without restricting the generality of the model.
  • the number of parameters to be optimized is thus reduced by the number of bias parameters.
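The forward pass described in the preceding bullets can be sketched as follows; dimensions, weights, and data are illustrative stand-ins, not values from the patent:

```python
import numpy as np

def mlp_forward(X, W_ih, b_h, W_ho, b_o):
    """Forward pass as described above: tanh hidden layer, identity output
    layer; biases are subtracted, following the text's convention.
    Shapes: X (n, I), W_ih (I, H), b_h (H,), W_ho (H, O), b_o (O,)."""
    Z = np.tanh(X @ W_ih - b_h)   # hidden responses z_h
    return Z @ W_ho - b_o         # identity activation on the output layer

# Illustrative dimensions (not from the patent): 7 inputs, 7 hidden, 3 outputs
rng = np.random.default_rng(0)
I, H, O = 7, 7, 3
X = rng.normal(size=(100, I))
W_ih = rng.normal(0.0, 0.1, size=(I, H))
W_ho = rng.normal(0.0, 0.1, size=(H, O))
out = mlp_forward(X, W_ih, np.zeros(H), W_ho, np.zeros(O))

# The text fixes the output bias so the median signal of each output is zero:
b_o = np.median(out, axis=0)
out = out - b_o
print(np.abs(np.median(out, axis=0)))  # ~0 for every output neuron
```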
  • in the second equation, $\tau_0$ is regarded as a constant.
  • the time dependence is in the coefficient B.
  • the objective function takes the form
  • a preferred class of objective functions of the form (7) can be understood as statistical likelihood functions, where for the embodiment
  • the functional dependency on the model is symbolically characterized by variable parameters $\theta$.
  • An example of the determination of $\delta_{jk}$ and $\Delta_{jk}$ is given below.
  • the parameters denoted by $\theta$ are the survival time scales $\tau_{0k}$ and the weights of the neural network.
  • the index j denotes the patient record.
  • the time integral for solving equation 6 is evaluated by the standard method of Romberg integration. Any time dependencies of the functions $B_l(t)$ can thus be taken into account.
  • this quantity is given by the product of the individual probabilities:
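A minimal sketch of such a likelihood product for competing risks, assuming constant hazards per risk (the special case without time functions) rather than the full model of equations 4 to 7:

```python
import numpy as np

def neg_log_likelihood(signals, tau0, t, event):
    """Competing-risks likelihood sketch with constant hazards
    lam_k = exp(s_k) / tau0_k per patient (a simplification; the patent's
    equations 4-7 add time functions B(t) and evaluate the time integral
    by Romberg integration).
    signals: (n, K) network output signals
    tau0:    (K,)   survival time scales
    t:       (n,)   follow-up times
    event:   (n,)   0 = censored, k in 1..K = manifestation k observed"""
    lam = np.exp(signals) / tau0                   # (n, K) hazard rates
    ll = -(lam * t[:, None]).sum(axis=1)           # log S_k(t) summed over k
    obs = event > 0
    ll[obs] += np.log(lam[obs, event[obs] - 1])    # density term for the event
    return -ll.sum()

# Toy usage with random stand-in values:
rng = np.random.default_rng(1)
n, K = 200, 3
print(neg_log_likelihood(rng.normal(size=(n, K)), np.ones(K),
                         rng.exponential(2.0, size=n),
                         rng.integers(0, K + 1, size=n)))
```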
  • the neural network comprises
  • An input layer with a plurality of input neurons $N_i$ (i for "input neuron")
  • At least one intermediate layer with intermediate neurons $N_h$ (h for "hidden neuron")
  • An output layer with a plurality of output neurons $N_o$ (o for "output neuron")
  • a two-dimensional output layer is shown in order to illustrate the possibility of the simultaneous representation of temporally variable and also competing risks.
  • the simplified representation of non-time-variable risks is the special case in which only the characteristic dimension is necessary.
  • the number of input neurons $N_i$ initially used is usually chosen in accordance with the number of objectifiable items of information available for the patient collective. According to the state of the art, methods are available which either automatically reduce the number of input neurons in advance to a level acceptable for the respective computer system, or automatically remove unnecessary input neurons in the course of the optimization, so that in both cases the set of input neurons ultimately used is determined without intervention by the respective operator.
  • the original number of hidden neurons is determined by the original number of input neurons, i.e.
  • $N_h = N_i$ (10.a)
  • methods are available according to the state of the art, which enable the connectors to be preassigned favorably.
  • the neurons of the output layer are analogously arranged in a two-dimensional matrix with indices
  • $N_o = N_{time} \times N_{key}$ (10.d)
  • the index $J_{key}$ designates signals of the respective manifestation, while the index $J_{time}$ designates the signals relating to the respective time function (for example "fractional polynomials" or spline functions).
  • An output neuron designated by the two indices $J_{time}$, $J_{key}$ accordingly carries the coefficient of the time function $J_{time}$ for the risk of the manifestation $J_{key}$.
  • the indices $J_{key}$ and $J_{time}$ correspond analogously to the indices $k$ and $l$ of equations 4 to 7.
  • $N_{key}$ and $N_{time}$ in the embodiment correspond to the quantities $K$ and $L$ of these equations.
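The two-dimensional indexing of the output layer can be illustrated with a small sketch; the dimensions are illustrative:

```python
import numpy as np

# Sketch of the two-dimensional output layer of equation (10.d):
# N_o = N_time x N_key output neurons; the neuron with indices
# (J_time, J_key) carries the coefficient of time function J_time
# for the risk of manifestation J_key (indices l and k of eqs. 4-7).
N_time, N_key = 2, 3                        # illustrative dimensions
N_o = N_time * N_key                        # equation (10.d)
flat_signals = np.arange(N_o, dtype=float)  # stand-in for the N_o signals
coeffs = flat_signals.reshape(N_time, N_key)
print(coeffs[0, 2])  # coefficient of time function 0 for manifestation 2
```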
  • End nodes, which are usually arranged in a one-dimensional row, are also available for use in the context of recursive partitioning. According to the prior art, each patient is assigned to such a node, and the node is assigned a risk that can be viewed as a (scalar) signal.
  • the invention now assigns to each end node a vector with $N_{key}$ components instead of a scalar.
  • the aim of learning is to locate the highest possible value of this likelihood function in the parameter space while at the same time avoiding superfluous parameters as far as possible.
  • learning proceeds through initialization, optimization steps and complexity reduction as follows:
  • the univariate analyses can be used to preset the weights in a way that favors, or at least does not disadvantage, non-linear configurations (see below).
  • an exponential survival model is determined with the single parameter $\tau_0$. This model is used for initialization and also as a control in the subsequent analysis.
  • the four parameters correspond to the time constant ($\tau_0$), the weight and the bias to the hidden layer, and the weight to the output layer. These are optimized and stored in a table together with the quality (likelihood) and significance for subsequent purposes.
  • the ranking of the univariately significant factors is determined according to the absolute values of the linear weights.
  • the numbering of the input nodes for the subsequent analysis corresponds to this ranking. In the event that fewer input nodes are available than factors, this procedure allows an objective preselection of the "most important" factors.
  • initial values for the weights must first be set; a default value of zero is not desirable.
  • the weights of the linear connectors are initially filled with small values as usual.
  • the time parameter is preset with the value $\tau_0$ determined from the 1-parameter model.
  • the number of hidden nodes H is chosen equal to the number of input nodes J.
  • the corresponding bias is preset analogously with the bias determined in this way.
  • the value of the weight obtained from the univariate optimization, which we refer to as $w_{h1}$, for the first neuron of the output layer is also available.
  • a second way of initialization, which is more common for neural networks, is to assign small, random weights to all connectors. This means that at the beginning of the optimization, all links, including those via the hidden layer, are in the linear range: for small arguments, the activation function is almost linear, e.g. $\tanh(x) \approx x$ for small $x$.
  • the covariance matrix of all input factors is calculated and stored.
  • a linear regression of each factor on all other factors is also determined: $X_2 \approx A X_1 + B$.
  • Eigenvectors and eigenvalues of the covariance matrix are calculated and recorded. The linear relationships are used in the embodiment for the various thinning processes.
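A sketch of these preparatory computations with NumPy; the data and the helper name regress_on_others are ours:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 7))               # illustrative (n, p) factor matrix

cov = np.cov(X, rowvar=False)               # covariance matrix of all factors
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues and eigenvectors

def regress_on_others(X, i):
    """Least-squares fit of factor i on all other factors: X2 ~ A*X1 + B."""
    others = np.delete(np.arange(X.shape[1]), i)
    design = np.column_stack([X[:, others], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(design, X[:, i], rcond=None)
    return coef[:-1], coef[-1]              # slopes A and intercept B

A, B = regress_on_others(X, 0)
print(eigvals.round(2), round(B, 3))
```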
  • the quality on the validation set, if available, is used several times during the course of the optimization: it provides an independent measure of the progress of the optimization based on the training set and also serves to avoid over-fitting.
  • the optimization is the search for a maximum of the likelihood function, based on the data of the training set.
  • the search method implemented in the embodiment uses the construction of an n-fold simplex in this space according to the known method of Nelder and Mead (1965).
  • the search requires the formation of an n-dimensional simplex in the parameter space.
  • a simplex can be determined by specifying n + 1 non-degenerate corners, i.e. the corresponding edges are all linearly independent of one another. It therefore comprises an n-dimensional point cloud in the parameter space.
  • the search for optimization takes place in epochs. During each epoch, the quality function on the training set is evaluated at various points in the parameter space, namely at the current location and at n further corners, which are defined by the combination of operations such as reflection, expansion / contraction in one direction, etc. The directions of these operations are automatically selected based on the values of the quality function at the corners defined in the previous epoch.
  • the decrease in the quality function in the embodiment is monotonic and the search always ends at a (at least local) minimum.
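A minimal sketch of such a simplex search using SciPy's Nelder-Mead implementation, with a stand-in objective in place of the negative log-likelihood of equations 4 to 7:

```python
import numpy as np
from scipy.optimize import minimize

def nll(theta):
    """Stand-in for the negative log-likelihood of equations 4-7 evaluated
    on the training set; theta stacks all active weights and the time
    parameters."""
    return np.sum((theta - 1.0) ** 2)

theta0 = np.zeros(10)  # initialized weights as described above
res = minimize(nll, theta0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
print(res.fun)  # value of the (local) minimum found by the simplex search
```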
  • the validation set described above, if available, is used to control the progress of the optimization and to avoid overfitting.
  • the quantities minus-log-likelihood per sample on the two sets are continuously calculated and output as key figures of the instantaneous quality of the optimization with regard to the training and validation sets. While this key figure must decrease monotonically on the training set, temporary fluctuations of the corresponding key figure on the validation set are possible without over-fitting already taking place. However, a monotonic increase of the key figure on the validation set should stop further optimization and lead to a complexity reduction. This type of abort represents a kind of emergency brake to avoid overfitting.
  • a possible termination criterion that can be applied automatically is obtained by tracking the exponentially smoothed quality on the validation set. If this smoothed parameter exceeds the previous minimum of the current optimization step by a fixed percentage (deterioration in quality), the optimization is terminated.
  • a percentage tolerance of about 1% was found as an empirical value for typical training set sizes of around 300 or more data records. With this tolerance, and with training and validation sets of roughly equal size, the training is stopped more often by reaching a minimum on the training set than by deterioration of the quality on the validation set.
  • This "normal" termination is preferred because an (almost) monotonous improvement in the quality on the validation set is a sign that the neural network has recognized real underlying structures and not simply the noise.
  • the simplex optimization described for the embodiment results in a set of weights $\{w_{[1]}, \ldots, w_{[n]}\}$ and other parameters which determine a local minimum of the negative log-likelihood.
  • the numbering $[1] \ldots [n]$ of the weights in this context does not reflect the topological order of the weights.
  • This minimum refers to the fixed number n of the weights and a fixed topology. In order to avoid overfitting, it is desirable to reduce the complexity by thinning the weights as far as this is possible without a significant loss in quality.
  • Thinning refers to the deactivation of connectors. For this purpose, their weights are "frozen" at a fixed value (zero in the embodiment, where one can also speak of "removing"). In principle, it is possible to remove individual weights or even entire nodes. In the latter case, all weights are deactivated which either lead into the node to be removed or lead out of it.
  • a phase of complexity reduction in the network is carried out following an optimization phase (simplex method).
  • the first step in this is the "thinning" of individual connectors.
  • combinations of different connectors are tested for redundancy.
  • the consistency of the topology is checked and, if necessary, connectors or nodes are removed which, due to the previous removal of other connectors and nodes, can no longer contribute to the statement.
  • the test variable log(likelihood ratio) is first formed in the embodiment. Two networks are considered for each weight $w_{[A]}$:
  • When deactivated, the connector is removed from the list of active connectors and the associated weight is frozen (usually at zero).
  • the number $G$ of removed connectors is limited to a maximum number, where $n$ is the number of remaining connectors.
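A sketch of the thinning decision for a single connector; the chi-squared threshold on the log(likelihood ratio) statistic is our assumption, the patent only names the test variable:

```python
from scipy.stats import chi2

def connector_is_redundant(ll_full, ll_frozen, alpha=0.05):
    """Thinning test sketch: compare the optimized log-likelihood of the
    full network (ll_full) with that of a network whose candidate weight
    is frozen to zero (ll_frozen).  Treating 2*log(likelihood ratio) as
    chi-squared with 1 degree of freedom is our assumption."""
    lr_stat = 2.0 * (ll_full - ll_frozen)   # one weight frozen -> 1 df
    return lr_stat < chi2.ppf(1.0 - alpha, df=1)

# A connector whose removal costs almost no likelihood is deactivated:
print(connector_is_redundant(ll_full=-512.3, ll_frozen=-512.9))  # True
```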
  • Thinning or removal of individual connectors can result in isolation of a node from input signals, output signals, or (in the case of a hidden neuron) from both.
  • a deactivation flag is set for the node in the embodiment.
  • Isolation means that no active connectors arrive either from the input layer or from the hidden layer. If all connectors from an input neuron to the hidden and to the output layer have been removed, the bias of the linear connectors must also be deactivated.
  • a hidden neuron that has been isolated from all inputs can still be connected to outputs.
  • the "frozen" contributions of such hidden neurons to the output are then redundant because, in principle, they only include the bias values of the other active connectors change. As a result, such neurons are deactivated and any remaining connectors to the output layer are removed.
  • the trained neural network is thus uniquely determined.
  • the trained neural network can be used in accordance with the description above to generate, for any data containing the independent factors ("covariates") $x$, the output values and thus the functions defined above, $f_k(t)$, $\lambda_k(t)$, and $S_k(t)$, for the covariates $x$.
  • first, 1000 fictitious patient data sets with 9 factors (covariates) were generated by means of a random number generator.
  • the first 7 factors were created as realizations of a multivariate Gaussian distribution.
  • mean values and variances of the factors and a covariance matrix were specified in the exemplary embodiment; the correlation matrix of the seven Gaussian factors is:

            xlypo    xer    xpr   xage   xtum   xupa   xpai
    xlypo    1.00  -0.06  -0.09   0.03   0.42   0.02   0.05
    xer     -0.06   1.00   0.54   0.29  -0.07  -0.18  -0.19
    xpr     -0.09   0.54   1.00   0.03  -0.06  -0.07  -0.14
    xage     0.03   0.29   0.03   1.00   0.04   0.02   0.00
    xtum     0.42  -0.07  -0.06   0.04   1.00   0.03   0.06
    xupa     0.02  -0.18  -0.07   0.02   0.03   1.00   0.54
    xpai     0.05  -0.19  -0.14   0.00   0.06   0.54   1.00
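The generation of the correlated Gaussian factors can be sketched as follows; the means and variances are illustrative stand-ins for the concrete values specified in the patent:

```python
import numpy as np

names = ["xlypo", "xer", "xpr", "xage", "xtum", "xupa", "xpai"]
corr = np.array([
    [ 1.00, -0.06, -0.09,  0.03,  0.42,  0.02,  0.05],
    [-0.06,  1.00,  0.54,  0.29, -0.07, -0.18, -0.19],
    [-0.09,  0.54,  1.00,  0.03, -0.06, -0.07, -0.14],
    [ 0.03,  0.29,  0.03,  1.00,  0.04,  0.02,  0.00],
    [ 0.42, -0.07, -0.06,  0.04,  1.00,  0.03,  0.06],
    [ 0.02, -0.18, -0.07,  0.02,  0.03,  1.00,  0.54],
    [ 0.05, -0.19, -0.14,  0.00,  0.06,  0.54,  1.00],
])
mean = np.zeros(7)  # illustrative; the patent specifies concrete means/variances
rng = np.random.default_rng(42)
X = rng.multivariate_normal(mean, corr, size=1000)  # 1000 fictitious patients

# Check that the sampled correlations roughly match the specified matrix:
print(dict(zip(names, np.corrcoef(X, rowvar=False)[0].round(2))))
```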
  • in the model assumed in the exemplary embodiment, only the factor "xlypo" is causally decisive for the failure of the third manifestation. Nevertheless, there is an indirect connection between the other factors and the observations of the third manifestation, because increased risks from the other factors may reduce the likelihood of observing the failure of the third manifestation. This property of the assumed model is insignificant for the function of the invention, but illustrates a typical benefit.
  • the neural network trained according to the described method is illustrated in FIG. 3 ("xpai” and “xpail” are identical). Note that there is only one connector to the "O3" output, namely from the "xlypo" node (neuron).
  • the outputs 01 to 03 are assigned to the risks "risk (1)" to "risk (3)".
  • Table 2b Bias values (automatically 0 for inactive neurons)
  • $N_{time} = 1$ as used here.
  • the number of output neurons is then determined from equation 10.d.
  • the training would then be carried out in the manner previously described.
  • the possible temporal variations of the different manifestations could then be determined independently of one another in the context of the model of equations 4 to 7; in particular, the task of recording competing risks would not be affected thereby.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Complex Calculations (AREA)

Abstract

Method for determining competing risks for objects after an initial event on the basis of training data sets that have already been measured or can otherwise be objectified. According to the method, several signals produced by an adaptive system are combined into an objective function in such a way that the adaptive system can recognize or predict the underlying probabilities of competing risks.
EP01999919A 2000-12-07 2001-12-07 Procede de determination de risques concomitants Withdrawn EP1384199A2 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE10060928 2000-12-07
DE10060928 2000-12-07
PCT/EP2001/014411 WO2002047026A2 (fr) 2000-12-07 2001-12-07 Procede de determination de risques concomitants

Publications (1)

Publication Number Publication Date
EP1384199A2 true EP1384199A2 (fr) 2004-01-28

Family

ID=7666201

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01999919A Withdrawn EP1384199A2 (fr) 2000-12-07 2001-12-07 Procede de determination de risques concomitants

Country Status (4)

Country Link
US (1) US7395248B2 (fr)
EP (1) EP1384199A2 (fr)
AU (1) AU2002216080A1 (fr)
WO (1) WO2002047026A2 (fr)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1388812A1 (fr) * 2002-07-04 2004-02-11 Ronald E. Dr. Kates Procédé d'entraînement pour un système capable d'apprentissage
US7485390B2 (en) 2003-02-12 2009-02-03 Symyx Technologies, Inc. Combinatorial methods for preparing electrocatalysts
WO2005024717A1 (fr) * 2003-09-10 2005-03-17 Swiss Reinsurance Company Système et un procédé de tarification empirique automatique et/ou de provision pour dommages automatique
US8096811B2 (en) * 2003-11-29 2012-01-17 American Board Of Family Medicine, Inc. Computer architecture and process of user evaluation
US20070239496A1 (en) * 2005-12-23 2007-10-11 International Business Machines Corporation Method, system and computer program for operational-risk modeling
US7747551B2 (en) * 2007-02-21 2010-06-29 Neurovista Corporation Reduction of classification error rates and monitoring system using an artificial class
DE102007044919A1 (de) * 2007-09-19 2009-04-02 Hefter, Harald, Prof. Dr. med. Dr. rer. nat. Verfahren zur Bestimmung von sekundärem Therapieversagen
US8949671B2 (en) * 2008-01-30 2015-02-03 International Business Machines Corporation Fault detection, diagnosis, and prevention for complex computing systems
DE102009009228A1 (de) * 2009-02-17 2010-08-26 GEMAC-Gesellschaft für Mikroelektronikanwendung Chemnitz mbH Verfahren und Vorrichtung zur agglutinationsbasierten erkennung von spezifischen Erkankungen über einen Bluttest
WO2011161301A1 (fr) * 2010-06-24 2011-12-29 Valtion Teknillinen Tutkimuskeskus Déduction d'état dans un système hétérogène
US8620720B2 (en) * 2011-04-28 2013-12-31 Yahoo! Inc. Embedding calendar knowledge in event-driven inventory forecasting
US9235799B2 (en) * 2011-11-26 2016-01-12 Microsoft Technology Licensing, Llc Discriminative pretraining of deep neural networks
US8738421B1 (en) * 2013-01-09 2014-05-27 Vehbi Koc Foundation Koc University Driver moderator method for retail sales prediction
US20150032681A1 (en) * 2013-07-23 2015-01-29 International Business Machines Corporation Guiding uses in optimization-based planning under uncertainty
US10133980B2 (en) 2015-03-27 2018-11-20 Equifax Inc. Optimizing neural networks for risk assessment
WO2018084867A1 (fr) 2016-11-07 2018-05-11 Equifax Inc. Optimisation d'algorithmes de modélisation automatisée pour l'évaluation des risques et la génération de données explicatives
CN111602149B (zh) 2018-01-30 2024-04-02 D5Ai有限责任公司 自组织偏序网络
US10832137B2 (en) 2018-01-30 2020-11-10 D5Ai Llc Merging multiple nodal networks
US11321612B2 (en) 2018-01-30 2022-05-03 D5Ai Llc Self-organizing partially ordered networks and soft-tying learned parameters, such as connection weights
US10558913B1 (en) * 2018-10-24 2020-02-11 Equifax Inc. Machine-learning techniques for monotonic neural networks
US11468315B2 (en) 2018-10-24 2022-10-11 Equifax Inc. Machine-learning techniques for monotonic neural networks

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5862304A (en) 1990-05-21 1999-01-19 Board Of Regents, The University Of Texas System Method for predicting the future occurrence of clinically occult or non-existent medical conditions
DE4224621C2 (de) * 1992-07-25 1994-05-05 Boehringer Mannheim Gmbh Verfahren zur Analyse eines Bestandteils einer medizinischen Probe mittels eines automatischen Analysegerätes
US5943663A (en) * 1994-11-28 1999-08-24 Mouradian; Gary C. Data processing method and system utilizing parallel processing
US5701400A (en) * 1995-03-08 1997-12-23 Amado; Carlos Armando Method and apparatus for applying if-then-else rules to data sets in a relational data base and generating from the results of application of said rules a database of diagnostics linked to said data sets to aid executive analysis of financial data
US5812992A (en) * 1995-05-24 1998-09-22 David Sarnoff Research Center Inc. Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal component space
US6125105A (en) * 1997-06-05 2000-09-26 Nortel Networks Corporation Method and apparatus for forecasting future values of a time series
DE19940577A1 (de) * 1999-08-26 2001-03-01 Wilex Biotechnology Gmbh Verfahren zum Trainieren eines neuronalen Netzes
US6606615B1 (en) * 1999-09-08 2003-08-12 C4Cast.Com, Inc. Forecasting contest
US20040122702A1 (en) * 2002-12-18 2004-06-24 Sabol John M. Medical data processing system and method
JP4177228B2 (ja) * 2003-10-24 2008-11-05 三菱電機株式会社 予測装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO0247026A3 *

Also Published As

Publication number Publication date
WO2002047026A2 (fr) 2002-06-13
US20040073096A1 (en) 2004-04-15
WO2002047026A3 (fr) 2003-11-06
US7395248B2 (en) 2008-07-01
AU2002216080A1 (en) 2002-06-18

Similar Documents

Publication Publication Date Title
EP1384199A2 (fr) Procede de determination de risques concomitants
DE102016203546B4 (de) Analysator zur verhaltensanalyse und parametrisierung von neuronaler stimulation
DE112011101370T5 (de) Neuronales Netz mit kanonischen gepulsten Neuronen für einen raumzeitlichen Assoziativspeicher
DE112019000806T5 (de) Erkennen und vorhersagen von epilepsieanfällen unter verwenden von techniken wie methoden des tiefen lernens
DE112018002822T5 (de) Klassifizieren neuronaler netze
DE10296704T5 (de) Fuzzy-Inferenznetzwerk zur Klassifizierung von hochdimensionalen Daten
DE10237310A1 (de) Verfahren, Datenverarbeitungseinrichtung und Computerprogrammprodukt zur Datenverarbeitung
Speekenbrink et al. Learning strategies in amnesia
DE102021124256A1 (de) Mobile ki
DE112022001973T5 (de) Vorhersage von medizinischen ereignissen mit hilfe eines personalisierten zweikanal-kombinator-netzwerks
DE102007001026A1 (de) Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems
EP1232478B1 (fr) Procede destine a l'apprentissage d'un reseau neuronal
WO2003054794A2 (fr) Evaluation d'images du cerveau obtenues par tomographie par resonance magnetique fonctionnelle
DE112021003761T5 (de) Prädiktive modelle mit zerlegbaren hierarchischen ebenen, die konfiguriert werden, um interpretierbare resultate zu erzeugen
DE102005046747B3 (de) Verfahren zum rechnergestützten Lernen eines neuronalen Netzes und neuronales Netz
EP0890153B1 (fr) Procede de determination de poids aptes a etre elimines, d'un reseau neuronal, au moyen d'un ordinateur
DE102019216973A1 (de) Lernverfahren für neuronale netze basierend auf evolutionären algorithmen
EP3739592A1 (fr) Obtention de données de patient à base d'imagerie commandée décentralisée
DE112024000715T5 (de) Datenschutzorientiertes, interpretierbares skill-lernen für entscheidungen im gesundheitswesen
WO1998034176A1 (fr) Procede pour la transformation d'une logique floue servant a la simulation d'un processus technique en un reseau neuronal
EP1114398B1 (fr) Procede pour entrainer un reseau neuronal, procede de classification d'une sequence de grandeurs d'entree au moyen d'un reseau neuronal, reseau neuronal et dispositif pour l'entrainement d'un reseau neuronal
EP1359539A2 (fr) Modèle neurodynamique de traitement d'informations visuelles
WO1998007100A1 (fr) Selection assistee par ordinateur de donnees d'entrainement pour reseau neuronal
DE102021205097A1 (de) Computerimplementiertes Verfahren und System zur Bestimmung einer Kostenfunktion
Taha et al. A new quantum radial wavelet neural network model applied to analysis and classification of EEG signals

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030704

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

17Q First examination report despatched

Effective date: 20100319

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180817