
MXPA97008565A - Learning method in binary systems - Google Patents

Learning method in binary systems

Info

Publication number
MXPA97008565A
MXPA97008565A (application number MXPA/A/1997/008565A; also published as MX9708565A)
Authority
MX
Mexico
Prior art keywords
layer
binary
pseudo
input
neuron
Prior art date
Application number
MXPA/A/1997/008565A
Other languages
Spanish (es)
Other versions
MX9708565A (en)
Inventor
Tang Zheng
Original Assignee
Sowa Institute Of Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/744,299 (US6061673A)
Application filed by Sowa Institute Of Technology Co Ltd filed Critical Sowa Institute Of Technology Co Ltd
Publication of MX9708565A
Publication of MXPA97008565A

Abstract

The present invention relates to learning methods in binary systems, in which learning is performed by modifying the connected states of the circuit between the basic binary gates in combinational and sequential binary logic circuits composed of basic binary gates such as AND, OR, NOT, NAND, NOR and EXOR. Because the pseudo-neuron theory and the pseudo-potential-energy theory are skillfully introduced, it is possible to obtain specified learning effects within a very short learning period. In addition, since it is simple to carry out these learning methods in conventional computers and other digital equipment, they are expected to be widely used in applications such as image processing, speech processing and natural-language processing.

Description

METHOD OF LEARNING IN BINARY SYSTEMS

BACKGROUND OF THE INVENTION

This invention relates to binary systems capable of learning. Until now, learning in traditional neural networks has been executed by modifying the weight factor and the threshold of each neuron. However, because operations on the aforementioned factors and thresholds require complicated, large-scale hardware, such as adders and multipliers, and take a long time, it is difficult to realize such hardware on a large scale. The present invention was developed in consideration of the above drawbacks, and its object is to provide learning methods in binary systems in which learning is performed by modifying the connected states of the circuit between the basic binary gates in combinational and sequential binary logic circuits composed of basic binary gates such as AND, OR, NOT, NAND, NOR and EXOR.

BRIEF DESCRIPTION OF THE INVENTION

In order to attain the above object, in the learning methods in binary systems according to this invention, learning is executed over the connected states in which the first binary gate connects to the second binary gate, selecting one of the following four connected states: (1) directly connected; (2) connected through an inverter; (3) connected so that the input of the second gate is always binary 1; (4) connected so that the input of the second gate is always binary 0. The energies that express these connection conditions have the high-low order shown in Figure 1. The learning is done by modifying the pseudo-potential energies that express the above connection states, and this modification is performed as shown in Figure 2. The aforementioned combinational binary logic circuit is constructed with the connections between the basic binary gates, such as the AND, OR, NOT, NAND, NOR and EXOR gates, as shown in Figure 3. Likewise, the sequential circuits mentioned above are composed of a combinational circuit, a memory circuit and the connections between them, as shown in Figure 4, and the combinational circuit is constructed with basic binary gates such as the AND, OR, NOT, NAND, NOR and EXOR gates. These learning methods are further characterized in that the connected states mentioned above are expressed using pseudo-neurons, and in that the learning is done by modifying the weight factors and thresholds of the pseudo-neurons. In these learning methods, the modifications of the pseudo-neuron weight factors W and thresholds θ are made in the gradient-descent direction of an error function E, as shown in Equation (1):

Δw ∝ −∂E/∂w        Equation (1)

These learning methods are further characterized in that the above connected states are expressed using a pseudo-potential energy (hereinafter referred to as PPE), and in that the PPE of each gate has the high-low order defined in Figure 1.
These learning methods are also characterized in that the learning is done by modifying the PPE of the connected states, and in that this modification of the PPE is done as shown in Figure 2. These learning methods are also characterized in that the above binary logic circuits are composed of the basic gates AND, OR, NOT, NAND, NOR and EXOR and the connections between them, as shown in Figure 3, and in that the above sequence networks consist of a combinational circuit and a memory circuit, as shown in Figure 4, the combinational logic circuit being composed of basic gates such as AND, OR, NOT, NAND, NOR and EXOR and the connections between them. In addition, the above combinational binary logic circuits are characterized in that they are composed of an input layer, a connection layer, an AND layer and an OR layer, as shown in Figure 5. They may also be composed of an input layer, a connection layer, an OR layer and an AND layer, as shown in Figure 6; of an input layer, a connection layer, an intermediate NAND layer and an output NAND layer, as shown in Figure 7; of an input layer, a connection layer, an intermediate NOR layer and an output NOR layer, as shown in Figure 8; or of an input layer, a connection layer, an intermediate EXOR layer and an output EXOR layer, as shown in Figure 9.
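By way of illustration (this sketch is an addition to the text, not part of the original specification), the AND-OR construction of Figure 5 can be simulated directly: each element of the connection layer applies one of the four connected states to its input, the AND layer takes the minimum of its inputs and the OR layer the maximum. The state names and the example function below are illustrative assumptions.

```python
# Sketch of the AND-OR construction of Figure 5: a connection layer holding
# one of the four connected states per input, then an AND layer and an OR layer.
from itertools import product

DIRECT, INVERTED, ALWAYS_1, ALWAYS_0 = range(4)  # the four connected states

def connect(state: int, x: int) -> int:
    """Output of one connection-layer element for binary input x."""
    if state == DIRECT:
        return x
    if state == INVERTED:
        return 1 - x
    return 1 if state == ALWAYS_1 else 0

def and_or_network(conn, xs):
    """conn[j][i] is the connected state between input i and AND gate j."""
    ands = [min(connect(s, x) for s, x in zip(row, xs)) for row in conn]  # AND layer
    return max(ands)                                                      # OR layer

# Example: realize Z = X1·X2 + ¬X1·¬X2 (EXNOR) with two AND gates.
conn = [[DIRECT, DIRECT], [INVERTED, INVERTED]]
for xs in product([0, 1], repeat=2):
    print(xs, and_or_network(conn, xs))
```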
BRIEF DESCRIPTION OF THE DRAWINGS

In order to better understand the features according to this invention, by way of example only and without being limiting in any way, the following preferred embodiment is described with reference to the accompanying drawings, in which:

Figure 1 shows the order of the pseudo-potential energy of the connection states;
Figure 2 shows the method of modifying the pseudo-potential energy of the connection states;
Figure 3 shows a block diagram of the combinational network;
Figure 4 shows a block diagram of a sequence network;
Figure 5 shows a block diagram of an AND-OR network;
Figure 6 shows a block diagram of an OR-AND network;
Figure 7 shows a block diagram of a network of NAND gates;
Figure 8 shows a block diagram of a network of NOR gates;
Figure 9 shows a block diagram of a network of EXOR gates;
Figure 10 shows a truth table for an exemplary binary function;
Figure 11 shows a Karnaugh map for an exemplary binary function;
Figure 12 shows a logic circuit for an exemplary binary function;
Figure 13 shows a diagram of the threshold function and the model of the pseudo-neuron;
Figure 14 shows the expression of the connection states with the pseudo-neuron;
Figure 15 shows an AND-OR network with pseudo-neurons;
Figure 16 shows a continuous-valued function approximating the OR gate;
Figure 17 shows a continuous-valued function approximating the AND gate;
Figure 18 shows a truth table of the learning signals;
Figure 19 shows a truth table of the learning signals;
Figure 20 shows a Karnaugh map of the threshold update signal;
Figure 21 shows the state assignment of the connection states of the pseudo-neuron;
Figure 22 shows a Karnaugh map of the pseudo-neuron output Y_ij with the input X_i and the state assignment (q3, q2, q1);
Figure 23 shows the realization of the learning algorithm circuit;
Figure 24(a) shows the state transition diagram of the threshold learning; Figure 24(b) shows the state transition diagram of the weight-factor learning ΔW;
Figure 25(a) shows the state transition table of the threshold learning; Figure 25(b) shows a state transition table of the weight-factor learning;
Figure 26 shows the truth table for the threshold learning circuit;
Figure 27 shows the truth table for the weight-factor learning circuit;
Figure 28 shows a truth table of the weight-factor and threshold modification circuits;
Figure 29 shows a Karnaugh map of q3';
Figure 30 shows a Karnaugh map of q2';
Figure 31 shows a Karnaugh map of q1';
Figure 32 shows a diagram of the modification circuit using a combinational network;
Figure 33 shows a diagram of the modification circuit using the sequence network;
Figure 34 shows a truth table of the connection circuit of the pseudo-neuron;
Figure 35 shows a circuit of the connection of the pseudo-neuron;
Figure 36 shows a block diagram of the entire learning circuit;
Figure 37 shows a truth table of the connection function;
Figure 38 shows a circuit of the learning algorithm using the pseudo-potential energy method;
Figure 39 shows a truth table of the learning circuit of the connection states;
Figure 40 shows a learning modification circuit using the sequence network;
Figure 41 shows the diagram of the connection circuit;
Figure 42 shows a block diagram of the entire learning circuit using the pseudo-potential energy method;
Figure 43 shows the learning in the sequence network.
DETAILED DESCRIPTION OF THE INVENTION

A preferred embodiment of the learning methods in binary systems according to this invention will be described in detail, taking as an example the logic circuits composed of an AND layer and an OR layer, shown in Figure 5.

1. Connected States

First, the connected states in the embodiment according to the invention will be described. In binary systems of this composition, any logical function can be expressed as a logical sum of products (composed with the AND-OR circuit shown in Figure 5). For example, the logical function shown in Figure 10 can be expressed as Equation (2) by simplification with the Karnaugh map shown in Figure 11; the logical function of Equation (2) is then realized in the block diagram shown in Figure 12, applying an AND-OR network. Accordingly, the connection states between an input layer and an AND layer are determined as one of the following four connected states, according to the logical function, that is: (1) the input X_i is included in the product term Y_j (AND_j) (for example, since the input X2 shown in Figure 12 is included in both Y1 and Y2 (AND1 and AND2), X2 is directly connected); (2) the negation of the input X_i is included in the product term Y_j (AND_j) (for example, the input X3 is connected to Y2 (AND2) through an inverter); (3) neither the input X_i nor its negation is included in the product term Y_j (AND_j) (for example, there is no connection between X3 and Y1 (AND1); the input from X3 to Y1 (AND1) is permanently connected to binary 1); (4) an input is permanently connected to the AND gate with binary 0. Therefore, any logical function of n variables can be realized with an AND-OR network consisting of at most 2^(n−1) + 1 AND gates, the connections between the input layer and the AND layer being made by applying any of the previous connected states.

2. Expression by Pseudo-Neurons

The connected states above can be expressed by applying a pseudo-neuron (hereinafter "PN"). The relation between the input and the output of the pseudo-neuron is expressed with the threshold function shown in Equations (3) and (4):

Y_ij = f(W_ij·X_i − θ_ij)        Equation (3)

f(x) = 1 (x ≥ 0); f(x) = 0 (x < 0)        Equation (4)

in which: X_i = the input of order i; Y_ij = the output of the pseudo-neuron of order ij; W_ij = the weight factor of the input X_i to the pseudo-neuron of order ij; θ_ij = the threshold of the pseudo-neuron of order ij. In this case, the pseudo-neuron has only one input and one output, W_ij takes the value 1 or −1, and θ_ij takes one of the values −1.5, −0.5, 0.5 or 1.5, as shown in Figures 13(a) and (b). Since the input X_i takes only the value 0 or 1 in binary systems, the output of the pseudo-neuron takes the value 1 or 0 according to the weight factor W_ij and the threshold θ_ij, as shown in Figure 14. Therefore, it becomes possible to express the connected state between the input and the AND gate by applying a pseudo-neuron. The AND-OR construction shown in Figure 5 can then be expressed as shown in Figure 15, by applying a pseudo-neuron between the input layer and the AND layer. The network shown in Figure 15 is of the layered type, composed of an input layer, a pseudo-neuron layer, an AND layer and an OR layer, and each layer is composed of an appropriate number of gates with no connections inside the layer itself.
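As an added illustration (not part of the specification), the following sketch enumerates Equation (3) over the allowed values W ∈ {1, −1} and θ ∈ {−1.5, −0.5, 0.5, 1.5} and shows how each of the eight (W, θ) combinations realizes one of the four connected states of Figure 14.

```python
# Enumerate the pseudo-neuron of Equation (3): Y = f(W·X − θ),
# with f a hard threshold, for all allowed (W, θ) and binary inputs X.
def pn(w: int, theta: float, x: int) -> int:
    return 1 if w * x - theta >= 0 else 0

def state_name(y0: int, y1: int) -> str:
    # The pair (Y at X=0, Y at X=1) identifies the connected state.
    return {(0, 1): "directly connected",
            (1, 0): "connected through an inverter",
            (1, 1): "always binary 1",
            (0, 0): "always binary 0"}[(y0, y1)]

for w in (1, -1):
    for theta in (-1.5, -0.5, 0.5, 1.5):
        y0, y1 = pn(w, theta, 0), pn(w, theta, 1)
        print(f"W={w:+d}, θ={theta:+.1f} -> {state_name(y0, y1)}")
```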
In addition, the connection between the layers is limited to one direction only (i.e., a feedforward type), from the input layer toward the output layer. In the gates of each layer, with the exception of the connections between the input layer and the pseudo-neuron layer, the connection to the gate placed forward is fixed at binary 1. If the response function of the PN is approximated by a sigmoid function, and the AND and OR gates are approximated by the continuous-valued minimum and maximum functions, many algorithms, for example the backward error propagation method, can be used. However, the modification or learning is done only on the weight factors and thresholds of the PN.

3. Gradient-Descent Learning Algorithm

A learning algorithm for the connected states between the input layer and the AND layer in the binary system is derived as follows. Considering the network shown in Figure 5, the desired outputs (teacher signals) are assumed to be T1, T2, …, Tm for the given inputs X1, X2, …, Xn, the outputs of the network shown in Figure 5 are assumed to be Z1, Z2, …, Zm, and an error function E is defined as the sum of squares shown in Equation (5):

E = (1/2) Σ_{i=1..m} (Z_i − T_i)²        Equation (5)

The learning is done so that the error is reduced by changing the weight factors (connected states) between the input layer and the PN layer, and the thresholds of the PN (all other connections are fixed). By letting the weight factors W and the thresholds θ change in the gradient-descent direction, the correction values ΔW and Δθ are expressed by Equation (6):

ΔW = −ε_w·∂E/∂W
Δθ = −ε_θ·∂E/∂θ        Equation (6)

In Equation (6), ε_w and ε_θ are defined to take only positive values. For simplicity, a network with only one output, shown in Figure 15, is considered. Letting PN_ij denote the PN of order ij between the input X_i and the AND gate of order j (AND_j), and letting Y_ij, θ_ij and W_ij denote its output, threshold and weight factor, the correction values ΔW_ij and Δθ_ij are expressed by the following Equation (7):

ΔW_ij = −ε_w·(∂E/∂Z)·(∂Z/∂OR)·(∂OR/∂AND_j)·(∂AND_j/∂Y_ij)·(∂Y_ij/∂W_ij)
Δθ_ij = −ε_θ·(∂E/∂Z)·(∂Z/∂OR)·(∂OR/∂AND_j)·(∂AND_j/∂Y_ij)·(∂Y_ij/∂θ_ij)        Equation (7)

Here, since the error function E for a single output is expressed as in Equation (8):

E = (1/2)·(Z − T)²        Equation (8)

the following Equation (9) is concluded:

∂E/∂Z = Z − T        Equation (9)

Also, since Z = OR, Equation (10) is deduced:
∂Z/∂OR = 1        Equation (10)

Each OR gate is then approximated by the continuous function shown in Equation (11):

OR = max(AND_j, M)        Equation (11)

In Equation (11), M is the maximum of the inputs to the OR gate other than AND_j, that is, M = Max(AND_i, i = 1, 2, …, i ≠ j). This relationship is expressed in Figure 16. Therefore, the derivative is expressed as shown in Equation (12):

∂OR/∂AND_j = Sgn(AND_j − M) = { 0 (AND_j < M); 1 (AND_j ≥ M) }        Equation (12)

In the same way, it is possible to approximate each AND gate with respect to each of its inputs, as shown in Equation (13):
AND_j = min(Y_ij, m)        Equation (13)

Here, m is the minimum of all the inputs to AND_j except Y_ij, that is, m = Min(Y_kj, k = 1, 2, …, k ≠ i). This relationship is expressed in Figure 17, and the derivative is expressed as shown in Equation (14):

∂AND_j/∂Y_ij = Sgn(m − Y_ij) = { 1 (Y_ij ≤ m); 0 (Y_ij > m) }        Equation (14)

Finally, since Y_ij is expressed as shown in Equation (15):

Y_ij = f(x) = 1 / (1 + e^(−x)),  x = W_ij·X_i − θ_ij        Equation (15)

Equation (16) is deduced as follows:

∂Y_ij/∂W_ij = f′(x)·X_i
∂Y_ij/∂θ_ij = f′(x)·(−1)        Equation (16)

Because f′(x) > 0, assuming f′(x) = 1, ΔW_ij and Δθ_ij come to satisfy the following equations:

ΔW_ij = −ε_w·(Z − T)·Sgn(AND_j − M)·Sgn(m − Y_ij)·X_i
Δθ_ij = ε_θ·(Z − T)·Sgn(AND_j − M)·Sgn(m − Y_ij)

Then, setting ε_w = 2 and ε_θ = 1, the above relations reduce to:

ΔW_ij = −2·(Z − T)·Sgn(AND_j − M)·Sgn(m − Y_ij)·X_i
Δθ_ij = (Z − T)·Sgn(AND_j − M)·Sgn(m − Y_ij)

In the above equations for ΔW_ij and Δθ_ij, since all quantities are expressed in binary, the correction quantities ΔW_ij and Δθ_ij have simple logical relations with the output Z, the teacher signal T, the output Y_j of the AND gate AND_j, the output Y_ij of the PN and the input X_i. Therefore, the learning rules can be realized with logic circuits. The modification is limited to 1, −1 or 0, representing that the current weight factors and thresholds are increased, decreased or maintained by one unit, this unit being defined as 2 for the weight factors and 1 for the thresholds.
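The reduced update rules can be exercised in software; the sketch below (an addition, not part of the specification) performs one learning step on the single-output network of Figure 15, using the conventions Sgn(AND_j − M) = 1 when AND_j ≥ M and Sgn(m − Y_ij) = 1 when Y_ij ≤ m. The clipping of W to [−1, 1] and of θ to [−1.5, 1.5] is an assumption made here to keep the parameters on the allowed grid.

```python
# One learning step for the network of Figure 15: a hard-threshold forward
# pass (min/max for AND/OR), then the reduced binary update rules.
def learn_step(W, theta, x, t):
    """W, theta: lists indexed [j][i]; x: list of binary inputs; t: teacher signal."""
    n_gates, n_in = len(W), len(x)
    y = [[1.0 if W[j][i] * x[i] - theta[j][i] >= 0 else 0.0
          for i in range(n_in)] for j in range(n_gates)]   # PN outputs Y_ij
    ands = [min(y[j]) for j in range(n_gates)]              # AND layer
    z = max(ands)                                           # OR layer -> Z
    for j in range(n_gates):
        M = max(a for k, a in enumerate(ands) if k != j) if n_gates > 1 else 0.0
        sg_or = 1.0 if ands[j] >= M else 0.0                # Sgn(AND_j - M)
        for i in range(n_in):
            m = min(v for k, v in enumerate(y[j]) if k != i) if n_in > 1 else 1.0
            sg_and = 1.0 if y[j][i] <= m else 0.0           # Sgn(m - Y_ij)
            grad = (z - t) * sg_or * sg_and
            W[j][i] -= 2 * grad * x[i]                      # ΔW_ij = -2(Z-T)·Sgn·Sgn·X_i
            W[j][i] = max(-1.0, min(1.0, W[j][i]))          # keep W in [-1, 1] (assumed clip)
            theta[j][i] += grad                             # Δθ_ij = (Z-T)·Sgn·Sgn
            theta[j][i] = max(-1.5, min(1.5, theta[j][i]))  # keep θ in [-1.5, 1.5] (assumed)
    return z
```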
4. Realization of the Hardware

(1) Hardware Realization of the Learning Algorithm

As mentioned before, the learning algorithm is composed only of logical operations among the input signals, the output, the teacher signals, the outputs of the AND layer and the outputs of the PN, and it provides a learning signal to each PN to increase, decrease or maintain its weight factor and threshold. Since there are three conditions, namely increase, decrease and hold, if we let the hold signal be q = HP (high impedance), the increase and decrease signals are respectively expressed as q = 1 and q = 0. The learning signals for the weight factors and thresholds, given in Equations (17) and (18), can then be represented by the truth tables shown in Figures 18 and 19:
Δθ_ij(1) = ¬Z·T·¬Y_j        Equation (17)
Δθ_ij(0) = Z·¬T·Y_j

ΔW_ij(1) = ¬Z·T·¬Y_j·X_i = Δθ_ij(1)·X_i        Equation (18)
ΔW_ij(0) = Z·¬T·Y_j·X_i = Δθ_ij(0)·X_i

(where Y_j denotes the output of the gate AND_j and ¬ denotes negation). Since these truth tables (Figures 18 and 19) can be expressed on a Karnaugh map, the Karnaugh map including the don't-care terms is expressed in Figure 20, and the logical functions of the learning signals can be deduced from these truth tables. Therefore, the modifications of the weight factors and thresholds are determined by the input X_i, the output Z, the output Y_ij of the PN, the output Y_j of AND_j and the teacher signal T. Then, assigning the connected conditions (8 conditions) of the PN, shown in Figure 14, to the states shown in Figure 21, using 3 bits (q3, q2, q1), the logical function composed of the output of the PN, the input and the state variables (q3, q2, q1) is expressed by the Karnaugh map shown in Figure 22, from which the following Equation (19) is obtained:

Y_ij = q2·q1 + q3·q2 + X_i·q3·q1 + ¬X_i·q2·¬q1        Equation (19)

Using MOS transistor switches, the logic circuit for the learning signals of Equations (17) and (18) is realized as shown in Figure 23, and this logic circuit gives 0, 1 or HP according to the learning algorithm described above.

(2) Modification Circuits for the Weight Factors and Thresholds

Applying the state variables that encode each PN, as shown in Figure 21, the operations of the modification circuits for the weight factors and thresholds under the learning algorithm can be represented as the state transition diagrams and state transition tables shown in Figures 24 and 25, respectively. Rewriting Figures 24 and 25 as the truth tables shown in Figures 26 and 27, the state transition functions are expressed as shown in Equation (20):

q3' = ΔW_ij(1) + q3·¬ΔW_ij(0)
q2' = Δθ_ij(1)·(q2 + q1) + Δθ_ij(0)·q2·q1 + ¬Δθ_ij(1)·¬Δθ_ij(0)·q2        Equation (20)
q1' = Δθ_ij(1)·(¬q1 + q2) + Δθ_ij(0)·q2·¬q1 + ¬Δθ_ij(1)·¬Δθ_ij(0)·q1

or, when no learning signal is present: q3' = q3, q2' = q2, q1' = q1. Combining both the weight factors and the thresholds, the truth table shown in Figure 28 is obtained. The Karnaugh maps for q3', q2' and q1' are expressed in Figures 29, 30 and 31, respectively, giving the following Equation (21):

q3' = ΔW_ij
q2' = q2·q1 + Δθ_ij·q2 + Δθ_ij·q1        Equation (21)
q1' = q2·¬q1 + Δθ_ij·¬q1 + Δθ_ij·q2

The corresponding circuit is expressed in Figure 32. Using bistable D circuits (D flip-flops) as memory devices, the learning circuit is realized as shown in Figure 33.
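The modification circuit can be simulated as a small state machine, as in the following added sketch (not part of the specification): q3 stores the weight sign and (q2, q1) behaves as a saturating up/down counter for the threshold. The state encoding, with (q2, q1) = 11 for θ = −1.5, is an assumption consistent with the equations reconstructed above.

```python
# Three-bit modification circuit: state (q3, q2, q1) encodes (W, θ) per Figure 21.
# dW, dtheta are tri-state learning signals: 1 (increase), 0 (decrease), None (HP/hold).
def next_state(q3, q2, q1, dW=None, dtheta=None):
    if dW is not None:
        q3 = dW                          # q3' = ΔW_ij (Equation (21))
    if dtheta is not None:
        if dtheta:                       # count up: 00 -> 01 -> 10 -> 11 (saturating)
            q2, q1 = q2 | q1, (1 - q1) | q2
        else:                            # count down: 11 -> 10 -> 01 -> 00 (saturating)
            q2, q1 = q2 & q1, q2 & (1 - q1)
    return q3, q2, q1

def decode(q3, q2, q1):
    """Decode the state back to (W, θ), assuming (q2, q1) = 11 means θ = -1.5."""
    W = 1 if q3 else -1
    theta = 1.5 - (2 * q2 + q1)          # 00 -> 1.5, 01 -> 0.5, 10 -> -0.5, 11 -> -1.5
    return W, theta

s = (1, 0, 1)                            # W = +1, θ = 0.5: directly connected
s = next_state(*s, dtheta=1)             # one "increase" step lowers θ
print(s, decode(*s))                     # -> (1, 1, 0), i.e. W = +1, θ = -0.5: always 1
```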
Here, denoting by S(1), S(X), S(1−X) and S(0) the 1-connected, directly connected, inverter-connected and 0-connected states of the PN, respectively, the truth table of the connection function is expressed as shown in Figure 34. Applying this truth table, the following logic functions, shown in Equation (22), are obtained:
S(1) = q3·q2 + q2·q1
S(X) = q3·¬q2·q1        Equation (22)
S(1−X) = ¬q3·q2·¬q1
S(0) = ¬q2·¬q1 + ¬q3·¬q2

Therefore, the connection circuit is expressed as shown in Figure 35, and the block diagram of the entire learning circuit using the PN is shown in Figure 36.

5. Learning Algorithm and Its Realization with the Pseudo-Potential Energy Method

Here, the learning algorithm applying the pseudo-potential energy method (hereinafter referred to as the PPE method) is described, in order to compose the internal model (the connected states between the input layer and the AND layer) in a binary AND-OR system as shown in Figure 5. As mentioned before, there are four connected states: 1-connected, directly connected, inverter-connected and 0-connected. Each is assigned a pseudo-potential energy, and the high-to-low order of the pseudo-potential energy is assumed as follows: for input 0: (1) 1-connected, (2) inverter-connected, (3) directly connected, (4) 0-connected; and for input 1: (1) 1-connected, (2) directly connected, (3) inverter-connected, (4) 0-connected. Considering the pseudo-potential energy defined in this way, it will be noted that the higher the defined pseudo-potential energy, the more easily the connected state gives a 1-output; conversely, the lower the energy, the more easily the connected state gives a 0-output. Therefore, when a 1-output is desired, it is necessary to move the current state to one of higher pseudo-potential energy; conversely, when a 0-output is desired, it is necessary to move it to a state of lower energy. Learning consists of making the output of the network match the teacher signal, and it is obtained by modifying the pseudo-potential energy of the connections. Here, the AND-OR network shown in Figure 5 is considered. When the teacher signal T equals 1 and the output Z equals 0, the outputs of all the AND_j are 0. In order to allow the output Z to become 1, it is necessary to move state (3) or (4) (for input 0 as well as input 1) to state (2) or state (3), which have higher pseudo-potential energies, but only for the inputs Y_ij of AND_j that are equal to 0. In states (1) and (2), since binary 1 is already output, states (1) and (2) are maintained. When the teacher signal T is 0 and the output Z = 1, at least one AND gate outputs binary 1. To make the output binary 0, all the gates AND_j that produce binary 1 must be made to produce binary 0. That AND_j produces binary 1 means that every connected state feeding AND_j is in state (1) or (2), which have higher potential energies. Therefore, to make the output binary 0, it is necessary to move states (1) and (2), which have higher potential energies, to states (2) or (3), which have lower potential energies.
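These transition rules can be stated compactly in software; the following sketch is an added illustration (not part of the specification), with the two energy orderings as listed above and the assumption that a state moves by exactly one position per learning step. The (q2, q1) encoding (11 = 1-connected, 10 = direct, 01 = inverter, 00 = 0-connected) matches Equation (24) below.

```python
# PPE method: move a connection one step up or down in the input-dependent
# energy order. (q2, q1) encodes the state, with Y_ij = q1·¬X_i + q2·X_i.
ORDER_X0 = [(1, 1), (0, 1), (1, 0), (0, 0)]  # high -> low energy for input X = 0
ORDER_X1 = [(1, 1), (1, 0), (0, 1), (0, 0)]  # high -> low energy for input X = 1

def y_out(q2: int, q1: int, x: int) -> int:
    return q1 & (1 - x) | q2 & x             # Equation (24)

def ppe_step(q2, q1, x, increase):
    """Move one step toward higher energy (increase=True) or lower (False)."""
    order = ORDER_X1 if x else ORDER_X0
    pos = order.index((q2, q1))
    if increase and y_out(q2, q1, x) == 0 and pos > 0:
        return order[pos - 1]                 # climb only if currently outputting 0
    if not increase and y_out(q2, q1, x) == 1 and pos < 3:
        return order[pos + 1]                 # descend only if currently outputting 1
    return (q2, q1)                           # otherwise hold

print(ppe_step(0, 0, 0, increase=True))       # 0-connected, X=0 -> (1, 0): direct
print(ppe_step(1, 1, 1, increase=False))      # 1-connected, X=1 -> (1, 0): direct
```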
Based on the above, it becomes possible to obtain the following learning signals, shown in Equation (23):

Δq_ij(1) = ¬Z·T·¬Y_j
Δq_ij(0) = Z·¬T·Y_j        Equation (23)

Here, letting S(1), S(X), S(1−X) and S(0) denote the 1-connected, directly connected, inverter-connected and 0-connected states of a pseudo-neuron, and assigning 11, 10, 01 and 00 to these four connected states using the 2-bit binary code (q2, q1), the logical relationship among Y_ij, the current state (q2, q1) and the input X_i is expressed by the truth table shown in Figure 37, and furthermore by the following Equation (24):

Y_ij = q1·¬X_i + q2·X_i        Equation (24)

The network of the learning algorithm is shown in Figure 38. With the state variables defined as before, the truth table of the combinational part of the learning circuit can be expressed as in Figure 39, from which the state transition functions of Equation (25) are obtained:

q2' = Δq_ij·¬X_i·(q2⊙q1) + ¬Δq_ij·¬X_i·(q2⊕q1) + Δq_ij·X_i·(q2 + q1) + ¬Δq_ij·X_i·q2·q1        Equation (25)
q1' = Δq_ij·¬X_i·(q2 + q1) + ¬Δq_ij·¬X_i·q2·q1 + Δq_ij·X_i·(q2⊙q1) + ¬Δq_ij·X_i·(q2⊕q1)

where ⊕ denotes EXOR and ⊙ its complement, Δq_ij is the driven value of the learning signal, and the state is held when the signal is at HP. Then, using D flip-flops as memory devices, the learning modification circuit can be realized with a circuit as shown in Figure 40, and the connection circuit can be realized as shown in Figure 41. Finally, the block diagram of the entire learning circuit using the pseudo-potential energy method is shown in Figure 42. Similarly, it is possible to increase the number of internal states or the cycle of state transitions, and it is also possible to use the RAM of a general CPU to perform the learning.

6. Learning Method in the Sequence Network

Finally, the learning method for the sequence network is described. As mentioned before, a binary system, for example the system shown in Figure 5, is a feedforward multilayer network consisting of a connection layer, an AND layer and an OR layer. Using X for the input, C for the connection function and Z for the output, the output Z is expressed as follows:

Z = f(C, X)

Learning consists of changing the connection function C by applying the gradient-descent method or the pseudo-potential energy method. For example, a sequence network composed of a combinational network (with a connection layer, an AND layer and an OR layer) and a memory network with D flip-flops is considered. The sequence network can be represented by the following equations:

Z(t) = f(C1(t), X(t), D(t−1))
D(t−1) = f(C2(t−1), X(t−1), D(t−2))

Thus,

Z(t) = f(C1(t), X(t), C2(t−1), X(t−1), D(t−2))

in which C1(t) and C2(t) are the connection functions at time step t, and X(t), Z(t) and D(t) are the input, output and internal states at time step t, respectively. Therefore, learning can be done by modifying the connection functions C1(t) and C2(t−1) by the gradient-descent method or the pseudo-potential energy method. It is notable that the learning depends not only on the input X(t) and the output Z(t) at time step t, but also on the input X(t−1) at time step (t−1) and the internal state D(t−2). Thus,

C1(t+1) = C1(t) + ΔC1
C2(t) = C2(t−1) + ΔC2

where ΔC1 and ΔC2 are the quantities to be modified.
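As a final added sketch (not part of the specification), the recurrence above can be simulated by unrolling the memory network one time step at a time; the connection functions used here are illustrative stand-ins, not the learned functions C1 and C2 of the text.

```python
# Unrolled evaluation of the sequence network: Z(t) = f(C1, X(t), D(t-1)),
# D(t) = f(C2, X(t), D(t-1)). c1 and c2 are placeholder connection functions.
def run_sequence(c1, c2, xs, d0=0):
    d, zs = d0, []
    for x in xs:
        zs.append(c1(x, d))   # combinational output from input and internal state
        d = c2(x, d)          # D flip-flop stores the next internal state
    return zs

# Illustrative choice: Z(t) = X(t) XOR D(t-1), D(t) = X(t) (a one-step delay),
# so the network outputs 1 exactly when the input changes.
zs = run_sequence(lambda x, d: x ^ d, lambda x, d: x, [0, 1, 1, 0, 1])
print(zs)  # [0, 1, 0, 1, 1]
```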
The internal state D(t) at time step t can be calculated by the following equation:

D(t) = f(C2(t), X(t), D(t−1))

As described above in detail, in the learning method in binary systems according to this invention, the first binary gate and the second binary gate are defined as gates chosen from among AND, OR, NOT, NAND, NOR and EXOR, and the first gate connects to the second gate in one of the following four connected states: (1) directly connected; (2) connected through an inverter; (3) binary 1 is input to the second gate; (4) binary 0 is input to the second gate. In this binary system, learning is done by selecting one of the four previous connected states. Furthermore, in the learning method in binary systems according to this invention, an input is connected to any gate among the AND, OR, NOT, NAND, NOR and EXOR gates in one of the same four connected states, and learning is performed by selecting one of them. Also, in the learning method in binary systems according to this invention, the current inputs and the internal states, which express the past sequence of values of the inputs, are connected to any of the AND, OR, NOT, NAND, NOR and EXOR gates in one of the same four connected states, and learning is done by selecting one of them. Also, in the learning method in binary systems according to this invention, the connection between the aforementioned first binary gate (or an input) and the second binary gate is constructed so as to select one of the four previous connected states, at least according to the result calculated from the input signal to the first binary gate and the teacher signal for learning. Also, in the learning method in binary systems according to this invention, by providing a pseudo-neuron Q, defined as follows, between the aforementioned first binary gate (or an input) and the second binary gate, the connection between the first binary gate (or input) and the second binary gate is defined by the pseudo-neuron Q, and the selection of the connection (i.e., the learning) is carried out by modifying the weight factor and threshold of the pseudo-neuron Q.
Here, the pseudo-neuron Q is defined as Q = f(W·X − θ), where: f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal to the pseudo-neuron Q from the first binary gate; W is the weight factor between the input and the pseudo-neuron Q; and θ is the threshold of the pseudo-neuron Q. Likewise, in the learning method in binary systems according to this invention, the system is comprised of an input layer that receives multiple binary data, an AND layer that has multiple AND gates, an OR layer that has multiple OR gates receiving the outputs of the AND layer, an output layer that receives the outputs from the OR layer, and a connection layer that has pseudo-neurons Q provided between the input layer and the AND layer, and the connections between the input layer and the AND layer are selected from among the following connected states: (1) the input layer connects directly to the AND layer; (2) the input layer connects to the AND layer through inverters; (3) binary 1 is always input to the AND layer; (4) binary 0 is always input to the AND layer. Here, the pseudo-neuron Q is defined as Q = f(W·X − θ), where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal to the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron. Likewise, in the learning method in binary systems according to this invention, the system is comprised of an input layer that receives multiple binary input data, an OR layer that has multiple OR gates, an AND layer that has multiple AND gates receiving the outputs of the OR layer, an output layer that receives the outputs from the AND layer, and a connection layer that has pseudo-neurons Q provided between the input layer and the OR layer, and the connections between the input layer and the OR layer are selected from the following four connected states: (1) the input layer connects directly to the OR layer; (2) the input layer connects to the OR layer through inverters; (3) binary 1 is always input to the OR layer; (4) binary 0 is always input to the OR layer. Here, the pseudo-neuron Q is defined as Q = f(W·X − θ), where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal to the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron.
Also, in the learning method in binary systems according to this invention, the system is comprised of an input layer that receives multiple binary data, an intermediate NAND layer that has multiple NAND gates, an output NAND layer that has multiple NAND gates receiving the output from the intermediate NAND layer, an output layer that receives the output of the output NAND layer, and a connection layer that has pseudo-neurons Q provided between the input layer and the intermediate NAND layer, and the connections between the input layer and the intermediate NAND layer are selected from the following connected states: (1) the input layer connects directly to the intermediate NAND layer; (2) the input layer connects to the intermediate NAND layer through inverters; (3) binary 1 is always input to the intermediate NAND layer; (4) binary 0 is always input to the intermediate NAND layer. Here, the pseudo-neuron Q is defined as Q = f(W·X − θ), where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal to the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron. Also, in the learning method in binary systems according to this invention, the system is comprised of an input layer that receives multiple binary data, an intermediate NOR layer that has multiple NOR gates, an output NOR layer that has multiple NOR gates receiving the output from the intermediate NOR layer, an output layer that receives the output from the output NOR layer, and a connection layer that has pseudo-neurons Q provided between the input layer and the intermediate NOR layer, the connections being selected from the following connected states: (1) the input layer connects directly to the intermediate NOR layer; (2) the input layer connects to the intermediate NOR layer through inverters; (3) binary 1 is always input to the intermediate NOR layer; (4) binary 0 is always input to the intermediate NOR layer. Here, the pseudo-neuron Q is defined as Q = f(W·X − θ), where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal to the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neurons; and θ is the threshold of the pseudo-neuron. Furthermore, in the learning method in binary systems according to the invention, the system is comprised of an input layer that receives multiple binary data, an intermediate EXOR layer that has multiple EXOR gates, an output EXOR layer that has multiple EXOR gates receiving the output of the intermediate EXOR layer, an output layer that receives the output of the output EXOR layer, and a connection layer that has pseudo-neurons Q provided between the input layer and the intermediate EXOR layer, and the two layers are connected by a method selected from the following four connected states: (1) the input layer connects directly to the intermediate EXOR layer; (2) the input layer connects to the intermediate EXOR layer through inverters; (3) binary 1 is always input to the intermediate EXOR layer; (4) binary 0 is always input to the intermediate EXOR layer. Here, the pseudo-neuron Q is defined as Q = f(W·X − θ),
where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal to the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron. Likewise, these learning methods in binary systems according to this invention are characterized in that the modification of the weight factors and thresholds of the pseudo-neurons is carried out by the gradient-descent method. Also, these learning methods in binary systems according to this invention are characterized in that the pseudo-potential energies of each basic gate are calculated together with the expression of the connected states of the aforementioned connection layer, and the learning is done by modifying the pseudo-potential energies of the connected states.
EFFECTS OF THE INVENTION

By applying these learning methods in binary systems according to this invention, constructed as described above, it is possible to obtain specified learning effects within a very short learning period. Also, since all functions are performed with simple logic gates, it becomes possible to easily build and operate the portion that performs the logical operations of the learning algorithm and the modification circuit. In addition, since it is easy to implement these learning methods in conventional computers and other digital equipment, these learning methods are expected to be widely used in image processing, speech processing, natural-language processing and motion control.

Claims (12)

CLAIMS

1. Learning methods in binary systems, in which a first binary gate and a second binary gate are respectively defined as gates chosen from among the AND, OR, NOT, NAND, NOR and EXOR gates, and the learning is done by selecting one of the methods that connect the first gate to the second gate from among the following four connection conditions: (1) directly connected; (2) connected through an inverter; (3) connected so that the input of the second gate is always binary 1; (4) connected so that the input of the second gate is always binary 0.
2. Learning methods in binary systems, in which the learning is done by selecting a method of connecting an input to a binary gate chosen from among the AND, OR, NOT, NAND, NOR and EXOR gates, in one of the following four connection conditions: (1) directly connected; (2) connected through an inverter; (3) connected so that the input of the binary gate is always binary 1; (4) connected so that the input of the binary gate is always binary 0.
3. Learning methods in binary systems, in which the learning is done by selecting a method of connecting the internal states, which express the present and previous inputs, to a binary gate chosen from among the AND, OR, NOT, NAND, NOR and EXOR gates, in one of the following four connection conditions: (1) directly connected; (2) connected through an inverter; (3) connected so that the input of the binary gate is always binary 1; (4) connected so that the input of the binary gate is always binary 0.
4. Learning methods in binary systems, as claimed in claims 1, 2 and 3, in which the connection between the first binary gate or the input and the second binary gate is constructed by selecting one of the four connection conditions according to the result calculated from the input signal entering the first binary gate and a teacher signal for learning.
5. Learning methods in binary systems, as claimed in claims 1, 2, 3 and 4, in which a pseudo-neuron Q, defined as follows, is provided between the first binary gate or the input data and the second binary gate, and the connection condition between them is selected according to the value of the pseudo-neuron Q; this pseudo-neuron Q is defined as Q = f(W·X − θ), in which: f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal entering the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron Q; and θ is the threshold of the pseudo-neuron Q.
6. Learning methods in binary systems, in which the systems comprise an input layer that receives multiple binary data; an AND layer that has multiple AND gates; an OR layer that has multiple OR gates receiving the outputs of the AND layer; an output layer that receives the outputs of the OR layer; and a connection layer that has pseudo-neurons Q provided between the input layer and the AND layer; and the learning is done by selecting each connection state that connects the input layer to the AND layer from among the following connected states: (1) the input layer connects directly to the AND layer; (2) the input layer connects to the AND layer through inverters; (3) binary 1 is always input to the AND layer; (4) binary 0 is always input to the AND layer; in which this pseudo-neuron Q is defined as Q = f(W·X − θ), where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal entering the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron.
7. Learning methods in binary systems, in which the systems comprise an input layer that receives a multitude of binary data; an OR layer that has a multitude of OR gates; an AND layer that has a multitude of AND gates receiving the outputs of the OR layer; an output layer that receives the outputs of the AND layer; and a connection layer that has pseudo-neurons Q provided between the input layer and the OR layer; and the learning is performed by selecting each connection state that connects the input layer to the OR layer from among the following connected states: (1) the input layer connects directly to the OR layer; (2) the input layer connects to the OR layer through inverters; (3) binary 1 is always input to the OR layer; (4) binary 0 is always input to the OR layer; in which the pseudo-neuron Q is defined as Q = f(W·X − θ), in which: f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal entering the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron Q; and θ is the threshold of the pseudo-neuron Q.
8. Learning methods in binary systems, in which the systems comprise an input layer that receives a multitude of binary data; an intermediate NAND layer that has multiple NAND gates; an output NAND layer that has multiple NAND gates receiving the output of the intermediate NAND layer; an output layer that receives the output of the output NAND layer; and a connection layer that has pseudo-neurons Q provided between the input layer and the intermediate NAND layer; and the learning is done by selecting each connection state that connects the input layer to the intermediate NAND layer from among the following connected states: (1) the input layer connects directly to the intermediate NAND layer; (2) the input layer connects to the intermediate NAND layer through inverters; (3) binary 1 is always input to the intermediate NAND layer; (4) binary 0 is always input to the intermediate NAND layer; in which the pseudo-neuron Q is defined as Q = f(W·X − θ), where f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal entering the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron Q; and θ is the threshold of the pseudo-neuron Q.
9. Learning methods in binary systems, in which the systems comprise an input layer that receives a multitude of binary data; an intermediate NOR layer that has multiple NOR gates; an output NOR layer that has multiple NOR gates receiving the output of the intermediate NOR layer; an output layer that receives the output of the output NOR layer; and a connection layer that has pseudo-neurons Q provided between the input layer and the intermediate NOR layer; and the learning is done by selecting each connection state that connects the input layer to the intermediate NOR layer from among the following connected states: (1) the input layer connects directly to the intermediate NOR layer; (2) the input layer connects to the intermediate NOR layer through inverters; (3) binary 1 is always input to the intermediate NOR layer; (4) binary 0 is always input to the intermediate NOR layer; in which the pseudo-neuron Q is defined as Q = f(W·X − θ), in which f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal entering the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron.
10. Learning methods in binary systems, in which the systems comprise an input layer that receives a multitude of binary data; an intermediate EXOR layer that has multiple EXOR gates; an output EXOR layer that has multiple EXOR gates receiving the output of the intermediate EXOR layer; an output layer that produces the output of the output EXOR layer; and a connection layer that has pseudo-neurons Q provided between the input layer and the intermediate EXOR layer; and the learning is done by selecting each connection state that connects the input layer to the intermediate EXOR layer from among the following connected states: (1) the input layer connects directly to the intermediate EXOR layer; (2) the input layer connects to the intermediate EXOR layer through inverters; (3) binary 1 is always input to the intermediate EXOR layer; (4) binary 0 is always input to the intermediate EXOR layer; in which the pseudo-neuron Q is defined as Q = f(W·X − θ), in which f is a threshold function, a sigmoid function or a piecewise linear function; X is the input signal entering the pseudo-neuron Q; W is the weight factor between the input and the pseudo-neuron; and θ is the threshold of the pseudo-neuron.
11. Learning methods in binary systems, as claimed in claims 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10, characterized in that the changes in the weight factors W and the thresholds θ are made by use of the gradient-descent method.
12. Learning methods in binary systems, as claimed in claims 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10, characterized in that the pseudo-potential energies of each of the basic gates are calculated together with the expression of the connected states of the connection layer with the pseudo-potential energy, and the learning is done by modifying this pseudo-potential energy of the connected states.
MXPA/A/1997/008565A 1996-11-06 1997-11-06 Learning method in binary systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/744,299 US6061673A (en) 1996-11-06 1996-11-06 Learning methods in binary systems
US08744299 1996-11-06

Publications (2)

Publication Number Publication Date
MX9708565A MX9708565A (en) 1998-06-30
MXPA97008565A (en) 1998-10-30

