US20180357527A1 - Data-processing device with representation of values by time intervals between events - Google Patents
- Publication number
- US20180357527A1 (application US 15/743,642)
- Authority
- US
- United States
- Prior art keywords
- node
- connection
- connections
- neuron
- weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
- G06F17/13—Differential equations
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
- G06N3/0481—
Definitions
- the present invention relates to data processing techniques.
- Embodiments implement a new way of carrying out calculations in machines, in particular in programmable machines.
- NEF (Neural Engineering Framework)
- An object of the present disclosure is to propose a novel approach for the representation of the data and the execution of calculations. It is desirable for this approach to be suitable for an implementation having reduced energy consumption and massive parallelism.
- a data processing device comprising a set of processing nodes and connections between the nodes.
- Each connection has an emitter node and a receiver node out of the set of processing nodes and is configured to transmit, to the receiver node, events delivered by the emitter node.
- Each node is arranged to vary a respective potential value according to events that it receives and to deliver an event when the potential value reaches a predefined threshold.
- At least one input value of the data processing device is represented by a time interval between two events received by at least one node, and at least one output value of the data processing device is represented by a time interval between two events delivered by at least one node.
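The event/threshold behaviour described above can be illustrated with a minimal event-driven node model. This is a sketch, not the patent's implementation: the concrete threshold value and the timing are illustrative, and the reset-to-zero rule follows the embodiments described further below.

```python
# Minimal sketch of a processing node: its potential value V varies
# according to received events, and it delivers an event (spike) when V
# reaches the predefined threshold, then resets to zero.
# The concrete threshold value is an arbitrary illustrative choice.

V_T = 1.0  # predefined threshold for the potential value

class Node:
    def __init__(self):
        self.v = 0.0       # potential value
        self.spikes = []   # times at which this node delivered an event

    def receive(self, t, weight):
        """An event received at time t on a connection of the given weight."""
        self.v += weight
        if self.v >= V_T:
            self.spikes.append(t)   # deliver an event downstream
            self.v = 0.0            # reset the potential on delivery

n = Node()
n.receive(1.0, 0.5)   # first event: below threshold, no spike
n.receive(2.5, 0.5)   # second event reaches V_T: the node fires at t = 2.5
```

A value presented to such a node is then simply the time interval between two of these events.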
- the processing nodes form neuron-type calculation units. However, it is not especially desired here to imitate the operation of the brain.
- the term “neuron” is used in the present disclosure for linguistic convenience but does not necessarily mean strong resemblance to the operating mode of the neurons of the cortex.
- the proposed methodology is consistent with the neuromorphic architectures that do not make any distinction between memory and calculation.
- Each connection of each processing node stores information and simultaneously uses this information for the calculation. This is very different from the prevailing organisation in conventional computers that distinguishes between memory and processing and causes the Von Neumann bottleneck, in which the majority of the calculation time is dedicated to moving information between the memory and the central processing unit (John Backus: “Can Programming Be Liberated from the von Neumann Style?: A Functional Style and Its Algebra of Programs”, Communications of the ACM , Vol. 21, No. 8, pages 613-641, August 1978).
- the operation is based on communication governed by events (“event-driven”), like in biological neurons, thus allowing execution with massive parallelism.
- each processing node is arranged to reset its potential value when it delivers an event.
- the reset can in particular be to a zero potential value.
- Numerous embodiments of the device for processing data include, among the connections between the nodes, one or more potential variation connections, each having a respective weight.
- the receiver node of such a connection is arranged to respond to an event received on this connection by adding the weight of the connection to its potential value.
- the potential variation connections can include excitation connections, which have a positive weight, and inhibition connections, which have a negative weight.
- the set of processing nodes can comprise at least one first node forming the receiver node of a first potential variation connection having a first positive weight at least equal to the predefined threshold for the potential value, and at least one second node forming the receiver node of a second potential variation connection having a weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value.
- the aforementioned first node further forms the emitter node and the receiver node of a third potential variation connection having a weight equal to the opposite of the first weight, as well as the emitter node of a fourth connection, while the second node further forms the emitter node of a fifth connection.
- the first and second potential variation connections are thus configured to each receive two events separated by a first time interval representing an input value whereby the fourth and fifth connections transport respective events having between them a second time interval related to the first time interval.
- an example of a device for processing data comprises at least one minimum calculation circuit, which itself comprises:
- first, second, third, fourth, fifth and sixth potential variation connections each having a first positive weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value;
- ninth and tenth potential variation connections each having a third weight equal to double the second weight.
- the first input node forms the emitter node of the first and third connections and the receiver node of the tenth connection
- the second input node forms the emitter node of the second and fourth connections and the receiver node of the ninth connection
- the first selection node forms the emitter node of the fifth, seventh and ninth connections and the receiver node of the first and eighth connections
- the second selection node forms the emitter node of the sixth, eighth and tenth connections and the receiver node of the second and seventh connections
- the output node forms the receiver node of the third, fourth, fifth and sixth connections.
- Another example of a device for processing data comprises at least one maximum calculation circuit, which itself comprises:
- first, second, third and fourth potential variation connections each having a first positive weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value;
- fifth and sixth potential variation connections each having a second weight equal to double the opposite of the first weight.
- the first input node forms the emitter node of the first and third connections
- the second input node forms the emitter node of the second and fourth connections
- the first selection node forms the emitter node of the fifth connection and the receiver node of the first and sixth connections
- the second selection node forms the emitter node of the sixth connection and the receiver node of the second and fifth connections
- the output node forms the receiver node of the third and fourth connections.
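At the interval level, the behaviour these minimum and maximum circuits implement can be summarised by a short sketch. This models only the input/output relation (each operand being the interval between a pair of events), not the node-level race between the selection nodes.

```python
# Behavioural sketch of the minimum and maximum circuits: each operand is
# the time interval between a pair of events, and the output pair carries
# the smaller (respectively larger) of the two input intervals.

def interval(pair):
    t_first, t_second = pair
    return t_second - t_first

def minimum(pair_a, pair_b):
    return min(interval(pair_a), interval(pair_b))

def maximum(pair_a, pair_b):
    return max(interval(pair_a), interval(pair_b))

a = (0.0, 30.0)    # operand encoded by an interval of 30 time units
b = (10.0, 50.0)   # operand encoded by an interval of 40 time units
```

In the node-level circuits above, the mutual connections between the two selection nodes suggest that this choice is resolved as a race between the two input event pairs, with the winner inhibiting the other path.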
- Another example of a device for processing data comprises at least one subtractor circuit, which itself comprises:
- first, second, third, fourth, fifth and sixth potential variation connections each having a first positive weight at least equal to the predefined threshold for the potential value
- the first synchronisation node forms the emitter node of the first, second, third and ninth connections
- the second synchronisation node forms the emitter node of the fourth, fifth, sixth and tenth connections
- the first inhibition node forms the emitter node of the eleventh connection and the receiver node of the third, eighth and tenth connections
- the second inhibition node forms the emitter node of the twelfth connection and the receiver node of the sixth, seventh and ninth connections
- the first output node forms the emitter node of the seventh connection and the receiver node of the first, fifth and eleventh connections
- the second output node forms the emitter node of the eighth connection and the receiver node of the second, fourth and twelfth connections.
- the first synchronisation node is configured to receive, on at least one potential variation connection having the second weight, a first pair of events having between them a first interval of time representing a first operand.
- the second synchronisation node is configured to receive, on at least one potential variation connection having the second weight, a second pair of events having between them a second interval of time representing a second operand, whereby a third pair of events having between them a third time interval is delivered by the first output node if the first time interval is longer than the second time interval and by the second output node if the first time interval is shorter than the second time interval, the third time interval representing the absolute value of the difference between the first and second operand.
- the subtractor circuit can further comprise zero detection logic including at least one detection node associated with detection and inhibition connections with the first and second synchronisation nodes, one of the first and second inhibition nodes and one of the first and second output nodes.
- the detection and inhibition connections are faster than the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh and twelfth connections, in order to inhibit the production of events by one of the first and second output nodes when the first and second time intervals are substantially equal.
- the set of processing nodes comprises at least one node arranged to vary a current value according to events received on at least one current adjustment connection, and to vary its potential value over time at a rate proportional to said current value.
- a processing node can in particular be arranged to reset its current value to zero when it delivers an event.
- the current value in at least some of the nodes has a component that is constant between two events received on at least one constant current component adjustment connection having a respective weight.
- the receiver node of a constant current component adjustment connection is arranged to react to an event received on this connection by adding the weight of the connection to the constant component of its current value.
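A node with a constant current component can be sketched as follows. The assumption, consistent with the read-out timings described in the circuits below, is that between events the potential grows linearly, dV/dt = g_e.

```python
# Sketch of a node whose potential varies over time at a rate proportional
# to a current value g_e; an event on a constant current component
# adjustment connection adds the connection weight to g_e.

class GeNode:
    def __init__(self):
        self.v = 0.0    # potential value
        self.ge = 0.0   # constant current component
        self.t = 0.0    # time of the last update

    def advance(self, t):
        """Integrate the potential up to time t at the current rate g_e."""
        self.v += self.ge * (t - self.t)
        self.t = t

    def adjust_ge(self, t, weight):
        """Event received on a constant current component adjustment connection."""
        self.advance(t)
        self.ge += weight

n = GeNode()
n.adjust_ge(0.0, 0.02)    # a first event starts the charge at rate 0.02
n.adjust_ge(30.0, -0.02)  # an opposite-weight event stops it 30 units later
n.advance(100.0)          # the potential now stores 0.02 * 30 = 0.6
```

This is the basic mechanism by which a time interval between two events is converted into a stored potential, and back.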
- Another example of a device for processing data comprises at least one inverter memory circuit, which itself comprises:
- first, second and third constant current component adjustment connections, the first and third connections having the same positive weight and the second connection having a weight opposite to the weight of the first and third connections;
- the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection, and the first and second connections are configured to respectively address, to the accumulator node, first and second events having between them a first time interval related to a time interval representing a value to be memorised, whereby the accumulator node then responds to a third event received on the third connection by increasing its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to the first time interval.
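The inverter memory timing can be sketched under two assumptions consistent with the rest of the text: dV/dt = g_e, and a standard weight w_acc = V_t / T_cod (the g_e value that triggers a node from reset after the coding time T_cod, as defined further below). V_T and T_COD are illustrative constants.

```python
# Timing sketch of the inverter memory: the first and second events gate a
# constant current +w_acc for the input interval dt_in, storing
# V = w_acc * dt_in; the third event re-applies +w_acc, and the node
# reaches threshold after T_cod - dt_in, i.e. the inverted interval.

V_T, T_COD = 1.0, 100.0
W_ACC = V_T / T_COD

def inverter_memory(dt_in):
    v_stored = W_ACC * dt_in          # store phase
    return (V_T - v_stored) / W_ACC   # recall phase: time to threshold

dt_out = inverter_memory(30.0)        # ~70.0, i.e. T_COD - 30
```

The second time interval is thus related to the first by inversion with respect to T_cod, which is why a second such stage (as in the memory circuit below) recovers the original value.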
- Another example of a device for processing data comprises at least one memory circuit, which itself comprises:
- first, second, third and fourth constant current component adjustment connections each having a first positive weight and the third connection having a second weight opposite to the first weight;
- the first accumulator node forms the receiver node of the first connection and the emitter node of the third connection
- the second accumulator node forms the receiver node of the second, third and fourth connections and the emitter node of the fifth connection
- the first and second connections are configured to respectively address, to the first and second accumulator nodes, first and second events having between them a first time interval related to a time interval representing a value to be memorised, whereby the second accumulator node then responds to a third event received on the fourth connection by increasing its potential value until delivery of a fourth event on the fifth connection, the third and fourth events having between them a second time interval related to the first time interval.
- the memory circuit can further comprise a sixth connection having the first accumulator node as an emitter node, the sixth connection delivering an event to signal the availability of the memory circuit for reading.
- Another example of a device for processing data comprises at least one synchronisation circuit, which includes a number N>1 of memory circuits, of the type mentioned just above, and a synchronisation node.
- the synchronisation node is sensitive to each event delivered on the sixth connection of one of the N memory circuits via a respective potential variation connection having a weight equal to the first weight divided by N.
- the synchronisation node is arranged to trigger simultaneous reception of the third events via the respective fourth connections of the N memory circuits.
- Another example of a device for processing data comprises at least one accumulation circuit, which itself comprises:
- N inputs each having a respective weighting coefficient, N being an integer greater than 1;
- the accumulator node forms the receiver node of the first, second and third connections
- the synchronisation node forms the emitter node of the third connection.
- the first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval representing a respective operand provided on said input.
- the synchronisation node is configured to deliver a third event once the first and second events have been addressed for each of the N inputs, whereby the accumulator node increases its potential value until delivery of a fourth event.
- the third and fourth events have between them a second time interval related to a time interval representing a weighted sum of the operands provided on the N inputs.
- the accumulation circuit is part of a weighted addition circuit further comprising:
- the synchronisation node of the accumulation circuit forms the emitter node of the fourth connection
- the accumulator node of the accumulation circuit forms the emitter node of the fifth connection
- the second accumulator node forms the receiver node of the fourth connection and the emitter node of the sixth connection.
- the accumulator node of the accumulation circuit increases its potential value until delivery of a fourth event on the fifth connection
- the second accumulator node increases its potential value until delivery of a fifth event on the sixth connection
- the fourth and fifth events having between them a third time interval related to a time interval representing a weighted sum of the operands provided on the N inputs of the accumulation circuit.
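Under the same assumptions as before (dV/dt = g_e and w_acc = V_t / T_cod), the two-stage timing of the accumulation circuit followed by the second accumulator of the weighted addition circuit can be sketched as follows. The re-inversion in the second stage is one plausible reading of how the third time interval comes to represent the weighted sum directly.

```python
# Timing sketch of weighted addition: each input i contributes its interval
# dt_i, scaled by its weighting coefficient a_i, to the accumulator
# potential; the first-stage read-out yields T_cod - sum(a_i * dt_i), and
# the second accumulator inverts this again, yielding the weighted sum.
# The coefficients must keep the stored potential below the threshold.

V_T, T_COD = 1.0, 100.0
W_ACC = V_T / T_COD

def accumulate(inputs):
    """First stage: inputs is a list of (coefficient, interval) pairs."""
    v = sum(a * W_ACC * dt for a, dt in inputs)
    return (V_T - v) / W_ACC                 # = T_COD - weighted sum

def weighted_sum(inputs):
    """Second stage: store the first-stage interval and invert it again."""
    dt_mid = accumulate(inputs)
    return (V_T - W_ACC * dt_mid) / W_ACC    # = weighted sum of the intervals

result = weighted_sum([(0.5, 40.0), (0.25, 80.0)])   # ~ 0.5*40 + 0.25*80 = 40
```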
- Another example of a device for processing data comprises at least one linear combination circuit including two accumulation circuits, which share their synchronisation node, and a subtractor circuit configured to respond to the third event delivered by the shared synchronisation node and to the fourth events respectively delivered by the accumulator nodes of the two accumulation circuits by delivering a pair of events having between them a third time interval representing the difference between the weighted sum for one of the two accumulation circuits and the weighted sum for the other of the two accumulation circuits.
- the set of processing nodes comprises at least one node, the current value of which has a component that decreases exponentially between two events received on at least one exponentially decreasing current component adjustment connection having a respective weight.
- the receiver node of an exponentially decreasing current component adjustment connection is arranged to react to an event received on this connection by adding the weight of the connection to the exponentially decreasing component of its current value.
- Another example of a device for processing data comprises at least one logarithm calculation circuit, which itself comprises:
- first and second constant current component adjustment connections, the first connection having a positive weight and the second connection having a weight opposite to the weight of the first connection;
- the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection.
- the first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval related to a time interval representing an input value of the logarithm calculation circuit.
- the third connection is configured to address, to the accumulator node, a third event simultaneous or posterior to the second event, whereby the accumulator node increases its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to a time interval representing a logarithm of the input value.
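The logarithmic read-out follows from the exponentially decreasing current component. The sketch below derives and numerically checks the time to threshold, under the assumption (consistent with the component defined above) that dV/dt = g_f with g_f(t) = g0·exp(−t/τ); the constants are illustrative.

```python
import math

# With dV/dt = g_f and g_f(t) = g0 * exp(-t / tau), the potential from
# reset is V(t) = g0 * tau * (1 - exp(-t / tau)); solving V(t*) = V_t
# gives a read-out time that is logarithmic in the stored current g0.

def time_to_threshold(g0, tau, v_t):
    return -tau * math.log(1.0 - v_t / (g0 * tau))

def simulate(g0, tau, v_t, step=1e-4):
    """Forward-Euler check of the closed form."""
    v, t = 0.0, 0.0
    while v < v_t:
        v += g0 * math.exp(-t / tau) * step
        t += step
    return t

t_closed = time_to_threshold(g0=0.05, tau=100.0, v_t=1.0)  # ~ 22.31
t_num = simulate(g0=0.05, tau=100.0, v_t=1.0)
```

Since the read-out time depends logarithmically on g0, encoding the input value into the amplitude of the decaying current turns a charge-to-threshold into a logarithm computation.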
- the processing device can further comprise at least one deactivation connection, the receiver node of which is a node capable of cancelling out its exponentially decreasing component of current in response to an event received on the deactivation connection.
- Another example of a device for processing data comprises at least one exponentiation circuit, which itself comprises:
- the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection.
- the first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval related to a time interval representing an input value of the exponentiation circuit.
- the third connection is configured to address, to the accumulator node, a third event simultaneous or posterior to the second event, whereby the accumulator node increases its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to a time interval representing an exponentiation of the input value.
- Another example of a device for processing data comprises at least one multiplier circuit, which itself comprises:
- first, second, third, fourth and fifth constant current component adjustment connections, the first, third and fifth connections having a positive weight, and the second and fourth connections having a weight opposite to the weight of the first, third and fifth connections;
- the first accumulator node forms the receiver node of the first, second and sixth connections and the emitter node of the seventh connection
- the second accumulator node forms the receiver node of the third, fourth and seventh connections and the emitter node of the fifth and ninth connections
- the third accumulator node forms the receiver node of the fifth, eighth and ninth connections and the emitter node of the tenth connection
- the synchronisation node forms the emitter node of the sixth and eighth connections.
- the first and second connections are configured to address, to the first accumulator node, respective first and second events having between them a first time interval related to a time interval representing a first operand of the multiplier circuit.
- the third and fourth connections are configured to address, to the second accumulator node, respective third and fourth events having between them a second time interval related to a time interval representing a second operand of the multiplier circuit.
- the synchronisation node is configured to deliver a fifth event on the sixth and eighth connections once the first, second, third and fourth events have been received.
- the first accumulator node increases its potential value until delivery of a sixth event on the seventh connection and then, in response to the sixth event, the second accumulator node increases its potential value until delivery of a seventh event on the fifth and ninth connections.
- the third accumulator node increases its potential value until delivery of an eighth event on the tenth connection, the seventh and eighth events having between them a third time interval related to a time interval representing the product of the first and second operands.
- Sign detection logic can be associated with the multiplier circuit in order to detect the respective signs of the first and second operands and cause two events having between them a time interval representing the product of the first and second operands to be delivered on one or the other of two outputs of the multiplier circuit according to the signs detected.
- each connection is associated with a delay parameter, in order to signal the receiver node of this connection to carry out a change of state with a delay, with respect to the reception of an event on the connection, indicated by said parameter.
- the values represented by time intervals have, for example, absolute values x between 0 and 1.
- a logarithmic scale rather than a linear one for Δt as a function of x can also be suitable for certain uses. Other scales can also be used.
- the processing device can have special arrangements in order to handle signed values. It can thus comprise, for an input value:
- a first input comprising one node or two nodes out of the set of processing nodes, the first input being arranged to receive two events having between them a time interval representing a positive value of the input value;
- a second input comprising one node or two nodes out of the set of processing nodes, the second input being arranged to receive two events having between them a time interval representing a negative value of the input value.
- the processing device can comprise:
- a first output comprising one node or two nodes out of the set of processing nodes, the first output being arranged to deliver two events having between them a time interval representing a positive value of said output value;
- a second output comprising one node or two nodes out of the set of processing nodes, the second output being arranged to deliver two events having between them a time interval representing a negative value of said output value.
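The signed-value convention above amounts to a two-rail code: the sign selects the input or output, and the magnitude is carried by the interval. The sketch below assumes the linear encoding f(x) = T_min + |x|·T_cod, consistent with the encoding function discussed later in the text; the constants are illustrative.

```python
# Sketch of the signed-value arrangement: the sign of a value selects a
# "positive" or "negative" rail, and the magnitude is carried by the time
# interval between the two events on that rail.

T_MIN, T_COD = 10.0, 100.0   # illustrative encoding constants

def encode_signed(x):
    """Return (rail, interval) for a value with |x| <= 1."""
    rail = "plus" if x >= 0 else "minus"
    return rail, T_MIN + abs(x) * T_COD

def decode_signed(rail, dt):
    magnitude = (dt - T_MIN) / T_COD
    return magnitude if rail == "plus" else -magnitude

assert encode_signed(-0.5) == ("minus", 60.0)
assert decode_signed("minus", 60.0) == -0.5
```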
- the set of processing nodes is in the form of at least one programmable array, the nodes of the array having a shared behaviour model according to the events received.
- This device further comprises a programming logic in order to adjust weights and delay parameters of the connections between the nodes of the array according to a calculation program, and a control unit in order to provide input values to the array and recover output values calculated according to the program.
- FIG. 1 is a diagram of a processing circuit producing the representation of a constant value on demand, according to an embodiment of the invention
- FIG. 2 is a diagram of an inverter memory device according to an embodiment of the invention.
- FIG. 3 is a diagram showing the change in potential values over time and the production of events in an inverter memory device according to FIG. 2 ;
- FIG. 4 is a diagram of a memory device according to an embodiment of the invention.
- FIG. 5 is a diagram showing the change in potential values over time and the production of events in a memory device according to FIG. 4 ;
- FIG. 6 is a diagram of a signed memory device according to an embodiment of the invention.
- FIGS. 7( a ) and 7( b ) are diagrams showing the change in potential values over time and the production of events in a signed memory device according to FIG. 6 when it is presented with various input values;
- FIG. 8 is a diagram of a synchronisation device according to an embodiment of the invention.
- FIG. 9 is a diagram showing the change in potential values over time and the production of events in a synchronisation device according to FIG. 8 ;
- FIG. 10 is a diagram of a synchronisation device according to another embodiment of the invention.
- FIG. 11 is a diagram of a device for calculating a minimum according to an embodiment of the invention.
- FIG. 12 is a diagram showing the change in potential values over time and the production of events in a device for calculating a minimum according to FIG. 11 ;
- FIG. 13 is a diagram of a device for calculating a maximum according to an embodiment of the invention.
- FIG. 14 is a diagram showing the change in potential values over time and the production of events in a device for calculating a maximum according to FIG. 13 ;
- FIG. 15 is a diagram of a subtractor device according to an embodiment of the invention.
- FIG. 16 is a diagram showing the change in potential values over time and the production of events in a subtractor device according to FIG. 15 ;
- FIG. 17 is a diagram of an alternative of the subtractor device in which a difference equal to zero is taken into account
- FIG. 18 is a diagram of an accumulation circuit according to an embodiment of the invention.
- FIG. 19 is a diagram of a weighted addition device according to an embodiment of the invention.
- FIG. 20 is a diagram of a linear combination calculation device according to an embodiment of the invention.
- FIG. 21 is a diagram of a logarithm calculation device according to an embodiment of the invention.
- FIG. 22 is a diagram showing the change in potential values over time and the production of events in a logarithm calculation device according to FIG. 21 ;
- FIG. 23 is a diagram of an exponentiation device according to an embodiment of the invention.
- FIG. 24 is a diagram showing the change in potential values over time and the production of events in an exponentiation device according to FIG. 23 ;
- FIG. 25 is a diagram of a multiplier device according to an embodiment of the invention.
- FIG. 26 is a diagram showing the change in potential values over time and the production of events in a multiplier device according to FIG. 25 ;
- FIG. 27 is a diagram of a signed multiplier device according to an embodiment of the invention.
- FIG. 28 is a diagram of an integrator device according to an embodiment of the invention.
- FIG. 29 is a diagram of a device suitable for solving a first-order differential equation in an example of an embodiment of the invention.
- FIGS. 30A and 30B are graphs showing results of simulation of the device of FIG. 29 ;
- FIG. 31 is a diagram of a device suitable for solving a second-order differential equation in an example of an embodiment of the invention.
- FIGS. 32A and 32B are graphs showing results of simulation of the device of FIG. 31 ;
- FIG. 33 is a diagram of a device suitable for solving a system of three-variable nonlinear differential equations in an example of an embodiment of the invention.
- FIG. 34 is a graph showing results of simulation of the device of FIG. 33 ;
- FIG. 35 is a diagram of a programmable processing device according to an embodiment of the invention.
- a data processing device as proposed here works by representing the processed values not as amplitudes of electric signals or as binary-encoded numbers processed by logic circuits, but as time intervals between events occurring within a set of processing nodes having connections between them.
- although the data processing device does not necessarily have an architecture strictly corresponding to what is conventionally called “neural networks”, the following description uses the terms “node” and “neuron” interchangeably, just as it uses the term “synapse” to designate a connection between two nodes or neurons in the device.
- the synapses are oriented, i.e. each connection has an emitter node and a receiver node and transmits, to the receiver node, events generated by the emitter node.
- An event typically manifests itself as a spike in a voltage signal or current signal delivered by the emitter node and influencing the receiver node.
- each connection or synapse has a weight parameter w that measures the influence that the emitter node exerts on the receiver node during an event.
- a description of the behaviour of each node can be given by referring to a potential value V corresponding to the membrane potential V in the paradigm of artificial neural networks.
- the potential value V of a node varies over time according to the events that the node receives on its incoming connections. When this potential value V reaches or exceeds a threshold V t , the node emits an event (“spike”) that is transmitted to the node(s) located downstream.
- a current value g having a component g e and optionally a component g f .
- the component g e is a component that remains constant, or substantially constant, between two events that the node receives on a particular synapse that is called here constant current component adjustment connection.
- the component g f is an exponentially changing component, i.e. it varies exponentially between two events that the node receives on a particular synapse that is called here exponentially decreasing current component adjustment connection.
- a node that takes into account an exponentially decreasing current component g f can further receive events for activation and deactivation of the component g f on a particular synapse that is called here activation connection.
- each synapse being associated with a weight parameter indicating a synaptic weight w, positive or negative:
- Each synaptic connection is further associated with a delay parameter that gives the delay in propagation between the emitter neuron and the receiver neuron.
- a neuron triggers an event when its potential value V reaches the threshold V t , i.e.:
- the triggering of the event gives rise to a spike delivered on each synapse of which the neuron forms the emitter node and to resetting its state variables to:
- the notation T syn designates the delay in propagation along a standard synapse
- the notation T neu designates the time that a neuron takes to transmit the event when producing its spike after having been triggered by an input synaptic event.
- T neu can for example represent the time step of a neural simulator.
- a standard weight w e is defined as the minimum excitation weight that must be applied to a V-synapse in order to trigger a neuron from the reset state, and another standard weight w i is defined as the inhibition weight having the contrary effect:
- the values processed by the device are represented by time intervals between events. Two events of a pair of events are separated by a time interval Δt that is a function of the value x encoded by this pair:
- ⁇ is an encoding function chosen for the representation of the data in the device.
- the two events of the pair encoding this value x can be delivered by the same neuron n or by two distinct neurons.
- this neuron n encodes a time-varying signal u(t), the discrete values of which are given by:
- ⁇ ⁇ 1 is the encoding function inverse chosen and i is an even number.
- the function ⁇ calculates the interval between spikes associated with a particular value.
- T min can be zero. However, it is advantageous for it to be non-zero. Indeed, if two events representing a value come from the same neuron or are received by the same neuron, the minimum interval T min >0 gives this neuron time to reset. Moreover, a choice of T min >0 allows certain arrangements of neurons to respond to the first input event and propagate a change of state before receiving a second event.
- the form (11) for the encoding function ⁇ is not the only one possible. Another suitable choice is to take a logarithmic function, which allows a wide range of values to be encoded with dynamics that are suitable for certain uses, in this case with less precision for large values.
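The interval code just described can be sketched as follows. The linear form and the logarithmic alternative are both shown; the constants and the use of log1p are illustrative choices, not values from the document.

```python
import math

# Sketch of the interval code: linear form f(x) = T_min + x * T_cod,
# plus a logarithmic variant of the kind mentioned above.
T_MIN = 10.0    # minimum interval, leaves neurons time to reset
T_COD = 100.0   # coding span; T_max = T_MIN + T_COD
T_MAX = T_MIN + T_COD

def encode(x):
    """Interval representing a value x in [0, 1] (linear form)."""
    return T_MIN + x * T_COD

def decode(dt):
    """Inverse of encode."""
    return (dt - T_MIN) / T_COD

def encode_log(x):
    """Illustrative logarithmic variant: wider range of values,
    less precision for large x."""
    return T_MIN + T_COD * math.log1p(x)
```

A pair of events at times t and t + encode(x) then represents the value x.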
- the choice of (9) or (11) for the encoding function leads to the definition of two standard weights for the g e -synapses.
- the weight w acc is defined as being the value of g e necessary to trigger a neuron, from its reset state, after the time T cod , i.e.:
- For the g e -synapses, another standard weight g mult can be given as:
- connections between nodes of the device can further be each associated with a respective delay parameter.
- This parameter indicates a delay with which the receiver node of the connection carries out a change of state, with respect to the emission of an event on the connection.
- the indication of delay values by these delay parameters associated with the synapses allows suitable sequencing of the operations in the processing device to be ensured.
- Each node can, for example, be created using analogue technology, with resistive and capacitive elements in order to preserve and vary a voltage level and transistor elements in order to deliver events when the voltage level exceeds the threshold V t .
- Alternatively, the nodes can be produced digitally, for example using field-programmable gate arrays (FPGAs).
- FIGS. 1, 2, 4, 6, 8, 10, 11, 13, 15, 17, 18, 19, 20, 21, 23, 25, 27, 28, 29, 31 and 33 show examples of processing circuits.
- Some of the nodes or neurons shown in these drawings are named in such a way as to evoke the functions resulting from their arrangement in the circuit: 'input' for an input neuron, 'input+' for the input of a positive value, 'input−' for the input of a negative value, 'output' for an output neuron, 'output+' for the output of a positive value, 'output−' for the output of a negative value, 'recall' for a neuron used to recover a value, 'acc' for an accumulator neuron, 'ready' for a neuron indicating the availability of a result or of a value, etc.
- FIG. 1 shows a very simple circuit 10 that can be used to produce the representation of a constant value x on demand.
- the synapse 11 is configured with a delay parameter T syn
- the synapse 12 is configured with a delay parameter T syn + ⁇ (x).
- the activation of the recall neuron 15 triggers the output neuron 16 at times T syn and T syn + ⁇ (x), and thus the circuit 10 delivers two events separated in time by the value ⁇ (x) representing the constant x.
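The timing of circuit 10 can be sketched directly, since the two output events are fixed by the two synaptic delays. The delay and encoding constants below are illustrative assumptions.

```python
# Functional sketch of the constant-value circuit 10 of FIG. 1: a single
# recall spike reaches the output neuron over two synapses, one delayed
# by T_syn and one by T_syn + f(x), so the output fires twice with the
# interval f(x) in between. Constants are illustrative.
T_SYN = 1.0
T_MIN, T_COD = 10.0, 100.0

def constant_circuit(x, t_recall):
    f_x = T_MIN + x * T_COD           # encoding function f(x)
    return (t_recall + T_SYN,         # via synapse 11
            t_recall + T_SYN + f_x)   # via synapse 12

t1, t2 = constant_circuit(0.5, 0.0)   # interval t2 - t1 == f(0.5) == 60.0
```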
- FIG. 2 shows a processing circuit 18 forming an inverting memory.
- This group comprises a ‘first’ neuron 23 and a ‘last’ neuron 25 .
- Two excitation V-synapses 22 , 24 having a delay T syn go from the input neuron 21 to the first neuron 23 and to the last neuron 25 , respectively.
- the V-synapse 22 has a weight w e
- the V-synapse 24 has a weight equal to w e /2.
- the first neuron 23 inhibits itself via a V-synapse 28 having a weight w i and a delay T syn .
- the excitation g e -synapse 26 goes from the first neuron 23 to the acc neuron 30 , and has the weight w acc and a delay of T syn +T min .
- the inhibiting g e -synapse 27 goes from the last neuron 25 to the acc neuron 30 , and has the weight −w acc and a delay T syn .
- An excitation V-synapse 32 goes from the recall neuron 31 to the output neuron 33 , and has the weight w e and a delay of 2T syn +T neu .
- An excitation V-synapse 34 goes from the recall neuron 31 to the acc neuron 30 , and has the weight w acc and a delay T syn .
- an excitation V-synapse 35 goes from the acc neuron 30 to the output neuron 33 , and has the weight w e and a delay T syn .
- The operation of the inverting-memory device 18 is illustrated by FIG. 3 .
- Emission of a first event (spike) at time t in 1 at the input neuron 21 triggers an event at the output of the first neuron 23 after the time T syn +T neu , i.e. at time t first 1 in FIG. 3 , and raises the potential value of the last neuron 25 to V t /2.
- the first neuron 23 then inhibits itself via the synapse 28 by giving the value −V t to its membrane potential, and it starts the accumulation by the acc neuron 30 after T syn +T min , i.e. at time t st 1 , via the g e -synapse 26 .
- the emission of the second spike at time t in 2 = t in 1 +T min +x·T cod at the input neuron 21 brings the last neuron 25 to the threshold potential V t .
- the second spike also triggers the resetting of the potential of the first neuron 23 to zero via the synapse 22 .
- the value x is stored in the acc neuron 30 upon reception of the two input spikes and immediately available to be read by activating the recall neuron 31 .
- the processing circuit 18 of FIG. 2 functions similarly if certain weights are chosen in the following manner: the V-synapse 22 has a weight w greater than or equal to w e , the V-synapse 24 has a weight at least equal to w e /2 and less than V t , the first neuron 23 inhibits itself via the V-synapse 28 having a weight −w, the excitation V-synapse 32 has a weight greater than or equal to w e and the excitation V-synapse 35 has a weight greater than or equal to w e . This observation extends to the following processing circuits.
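The inversion performed by circuit 18 can be summarised numerically. The sketch assumes the acc neuron charges at the rate V_t/T_max during the input interval minus T_min, and keeps charging at that rate on recall until threshold; constants are illustrative.

```python
# Functional sketch of the inverting memory 18: the read-out delay
# encodes 1 - x, since T_max - x * T_cod = T_min + (1 - x) * T_cod.
# Rate V_t / T_max and the constants are assumptions.
V_T = 1.0
T_MIN, T_COD = 10.0, 100.0
T_MAX = T_MIN + T_COD

def inverting_memory(x):
    stored = V_T * x * T_COD / T_MAX        # potential loaded by the input pair
    readout = T_MAX * (V_T - stored) / V_T  # time-to-threshold on recall
    return readout                          # equals T_MIN + (1 - x) * T_COD

assert abs(inverting_memory(0.3) - (T_MIN + 0.7 * T_COD)) < 1e-9
```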
- FIG. 4 shows a processing circuit 40 forming a memory.
- the memory circuit 40 has an input neuron 21 in order to receive the value to be stored, a read-command input formed by a recall neuron 48 , a ready neuron 47 indicating the time from which a reading command can be presented to the recall neuron 48 , and an output neuron 50 in order to return the stored value. All the synapses of this memory circuit have the delay T syn .
- a g e -synapse 41 goes from the first neuron 23 to the first acc neuron 42 , and has the weight w acc
- the acc neuron 42 thus starts accumulation at time t st 1 = t in 1 +2·T syn +T neu ( FIG. 5 ).
- a g e -synapse 43 goes from the last neuron 25 to the second acc neuron 44 , and has the weight w acc .
- (V t /T max )·(t end2 1 − t st2 1 ) = V t ·(1 − (f(x) − T syn − T neu )/T max ),
- the reading can then take place by activating the recall neuron 48 , which takes place at time t recall 1 in FIG. 5 .
- the acc neuron 42 in FIG. 4 could be eliminated by configuring delays of T syn +T max on certain synapses. This could be of interest for reducing the number of neurons, but can pose a problem in an installation using application-specific integrated circuits (ASICs) because of the extension of the delays between neighbouring neurons.
- the memory circuit 40 functions for any encoding of the value x by a time interval between T min and T max , without being limited to the form (11) above.
- the signed-memory circuit 60 is based on a memory circuit 40 of the type shown in FIGS. 4A-B .
- the input+ and input ⁇ neurons 61 , 62 are connected, respectively, to the input neuron 21 of the circuit 40 by excitation V-synapses 63 , 64 having the weight w e .
- activates the input neuron 21 of the circuit 40 twice, such that the time interval ƒ(|x|) separates the two activations.
- the neurons 61 , 62 are connected, respectively, to ready+ and ready− neurons 65 , 66 by excitation V-synapses 67 , 68 having a weight of w e /4.
- the signed memory circuit has a recall neuron 70 connected to the ready+ and ready− neurons 65 , 66 by respective excitation V-synapses 71 , 72 having the weight w e /2.
- Each of the ready+ and ready− neurons 65 , 66 is connected to the recall neuron 48 of the circuit 40 by respective excitation V-synapses 73 , 74 having the weight w e .
- An inhibiting V-synapse 75 having a weight of w i /2 goes from the ready+ neuron 65 to the ready− neuron 66 , and reciprocally, an inhibiting V-synapse 76 having a weight of w i /2 goes from the ready− neuron 66 to the ready+ neuron 65 .
- the ready+ neuron 65 is connected to the output− neuron 82 of the signed memory circuit by an inhibiting V-synapse 77 having a weight of 2w i .
- the ready− neuron 66 is connected to the output+ neuron 81 of the signed memory circuit by an inhibiting V-synapse 78 having a weight of 2w i .
- the output neuron 50 of the circuit 40 is connected to the output+ and output− neurons 81 , 82 by respective excitation V-synapses 79 , 80 having the weight w e .
- the output of the signed memory circuit 60 comprises a ready neuron 84 that is the receiver node of an excitation V-synapse 85 having the weight w e coming from the ready neuron 47 of the memory circuit 40 .
- FIG. 7 shows the behaviour of the neurons of the signed-memory circuit 60 ( a ) in the case of a positive input and (b) in the case of a negative input.
- the acc neuron 44 of the memory circuit 40 is charged to the value
- the recall neuron 70 can be activated in order to read the signed piece of data, which takes place at time t recall 1 in FIG. 7 .
- Activation of the recall neuron 70 triggers the ready+ or ready− neuron 65 , 66 via the V-synapse 71 or 72 , and this triggering resets the other ready− or ready+ neuron 65 , 66 to zero via the V-synapse 75 or 76 .
- the event delivered by the ready+ or ready− neuron 65 , 66 inhibits the output− or output+ neuron 82 , 81 via the V-synapse 77 or 78 by bringing its potential to −2V t .
- the event delivered by the ready+ or ready− neuron 65 , 66 at time t sign 1 is communicated to the recall neuron 48 of the circuit 40 via the V-synapse 73 or 74 .
- This pair of spikes communicated to the output+ and output− neurons 81 , 82 via the V-synapses 79 , 80 twice triggers, at times t out 1 and t out 2 = t out 1 + ƒ(|x|), the one of these neurons that has not been inhibited.
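The behaviour of the signed memory can be summarised functionally: the magnitude is stored by the inner circuit, the sign selects the output port. The function names are illustrative, not from the document.

```python
# Functional sketch of the signed memory 60: the magnitude |x| is parked
# in the inner memory circuit 40, while the sign is latched by the
# ready+/ready- pair, which steers the read-out to output+ or output-.
def signed_store(x):
    return abs(x), ("+" if x >= 0 else "-")

def signed_read(magnitude, sign):
    port = "output+" if sign == "+" else "output-"
    return port, magnitude
```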
- the signed-memory circuit 60 shown in FIG. 6 is not optimised in terms of the number of neurons; various simplifications are possible.
- FIG. 8 shows a processing circuit 90 used to synchronise signals received on a number N of inputs (N ≥ 2). All the synapses of this synchronisation circuit have the delay T syn .
- the circuit 90 shown in FIG. 8 comprises N input neurons 91 0 , . . . , 91 N−1 and N output neurons 92 0 , . . . , 92 N−1 .
- Each input neuron 91 k is the emitter node of a V-synapse 93 k having the weight w e , the receiver node of which is the input neuron 21 k of a respective memory circuit 40 k .
- the output neuron 50 k of each memory circuit 40 k is the emitter node of a V-synapse 94 k having the weight w e , the receiver node of which is the output neuron 92 k of the synchronisation circuit 90 .
- the synchronisation circuit 90 comprises a sync neuron 95 that is the receiver node of N excitation V-synapses 96 0 , . . . , 96 N−1 having a weight of w e /N, the emitter nodes of which are, respectively, the ready neurons 47 0 , . . . , 47 N−1 of the memory circuits 40 0 , . . . , 40 N−1 .
- the circuit 90 also comprises excitation V-synapses 97 0 , . . . , 97 N−1 having the weight w e , the sync neuron 95 as an emitter node, and, respectively, the recall neurons 48 0 , . . . , 48 N−1 of the memory circuits 40 0 , . . . , 40 N−1 as receiver nodes.
- the sync neuron 95 receives the events produced by the ready neurons 47 0 , . . . , 47 N−1 as the N input signals are loaded into the memory circuits 40 0 , . . . , 40 N−1 , i.e. at times t rdy0 1 and t rdy1 1 in FIG. 9 .
- the sync neuron 95 delivers an event T syn later, i.e. at time t sync 1 in FIG. 9 .
- each memory circuit 40 k produces its second respective spike at time t outk 2 .
- the input neurons 91 0 , . . . , 91 N−1 and the output neurons 92 0 , . . . , 92 N−1 are optional, since the inputs can be provided directly by the input neurons 21 0 , . . . , 21 N−1 of the memory circuits 40 0 , . . . , 40 N−1 and the outputs directly by the output neurons 50 0 , . . . , 50 N−1 of the memory circuits 40 0 , . . . , 40 N−1 .
- the V-synapses 46 of the memory circuits 40 0 , . . . , 40 N−1 can go directly to the sync neuron 95 , without passing through a ready neuron 47 0 , . . . , 47 N−1 .
- the synapses 97 0 , . . . , 97 N−1 can be directly fed to the output neurons 50 0 , . . . , 50 N−1 of the memory circuits (thus replacing their synapses 49 ), and the sync neuron 95 can also form the emitter node of the g e -synapses 51 of the memory circuits 40 0 , . . . , 40 N−1 in order to control the restart of accumulation in the acc neurons 44 ( FIGS. 4 and 5 ).
- the sync neuron 95 thus directly controls the emission of the first spike on a particular output of the circuit (which can be one of the output neurons 92 0 , . . . , 92 N−1 or a specific neuron), and then the second spike of each pair by reactivating the acc neurons 44 of the memory circuits 40 0 , . . . , 40 N−1 via a g e -synapse.
- the sync neuron 95 acts as the recall neurons 48 of the various memory circuits.
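At the functional level, the synchroniser parks each value in a memory circuit until the last ready event, then recalls them all at once. The sketch below collapses the detailed delays to a single illustrative t_syn; names and constants are assumptions.

```python
# Functional model of the synchroniser 90: once every ready event has
# arrived, the sync neuron recalls all memories together, so the N
# output pairs share a common first-spike time while each keeps its
# own interval. Delays are simplified to one illustrative t_syn.
def synchronise(ready_times, intervals, t_syn=1.0):
    t_sync = max(ready_times) + t_syn   # sync neuron fires after last ready
    first = t_sync + t_syn              # common recall / first-spike time
    return [(first, first + dt) for dt in intervals]
```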
- the sync neuron 95 is excited by two V-synapses 46 having a weight of w e /2 coming directly from the acc neurons 42 of the two memory circuits, and it is the emitter node of the g e -synapses 51 in order to restart the accumulation in the acc neurons 44 .
- the role of this output ref neuron 99 could, alternatively, be played by one of the two output neurons 92 0 , 92 1 .
- the two events encoding the value of an output value of the circuit 98 are produced by two different neurons (for example the neurons 99 and 92 1 for the value x 1 ).
- it is not necessary for the two events of a pair representing a value to come from a single node (in the case of an output value) or to be received by a single node (in the case of an input value).
- FIG. 11 shows a processing circuit 100 that calculates the minimum between two values received in a synchronised manner on two input nodes 101 , 102 and delivers this minimum on an output node 103 .
- this circuit 100 comprises two ‘smaller’ neurons 104 , 105 .
- An excitation V-synapse 106 having a weight of w e /2, goes from the input neuron 101 to the smaller neuron 104 .
- An excitation V-synapse 107 having a weight of w e /2, goes from the input neuron 102 to the smaller neuron 105 .
- An excitation V-synapse 108 having a weight of w e /2, goes from the input neuron 101 to the output neuron 103 .
- An excitation V-synapse 109 having a weight of w e /2, goes from the input neuron 102 to the output neuron 103 .
- An excitation V-synapse 110 having a weight of w e /2, goes from the smaller neuron 104 to the output neuron 103 .
- An excitation V-synapse 111 having a weight of w e /2, goes from the smaller neuron 105 to the output neuron 103 .
- An inhibiting V-synapse 112 having a weight of w i /2, goes from the smaller neuron 104 to the smaller neuron 105 .
- An inhibiting V-synapse 113 having a weight of w i /2, goes from the smaller neuron 105 to the smaller neuron 104 .
- An inhibiting V-synapse 114 having the weight w i , goes from the smaller neuron 104 to the input neuron 102 .
- An inhibiting V-synapse 115 having the weight w i , goes from the smaller neuron 105 to the input neuron 101 . All the synapses 106 - 115 shown in FIG. 11 are associated with a delay T syn , except the synapses 108 , 109 for which the delay is 2·T syn +T neu .
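The output timing of circuit 100 can be sketched for synchronised inputs. The delay constants are illustrative, and the model only captures the end result (the later path being inhibited), not every intermediate spike.

```python
# Functional sketch of the minimum circuit 100 for synchronised inputs:
# both first spikes raise the output by w_e / 2 each, and the earliest
# second spike (with its 'smaller' neuron) fires it again while the
# other path is inhibited, so the output interval is min(dt1, dt2).
T_SYN, T_NEU = 1.0, 0.5   # illustrative delays

def min_circuit(t0, dt1, dt2):
    first = t0 + 2 * T_SYN + T_NEU                   # via synapses 108, 109
    second = t0 + min(dt1, dt2) + 2 * T_SYN + T_NEU  # winning path
    return first, second
```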
- the emission of the second spike on the input neuron having the smallest value, namely the neuron 101 at time t in1 2 = t in1 1 +Δt 1 in the example of FIG. 12 , triggers the smaller neuron 104 .
- FIG. 13 shows a processing circuit 120 that calculates the maximum between two values received in a synchronised manner on two input nodes 121 , 122 and delivers this maximum on an output node 123 .
- this circuit 120 comprises two ‘larger’ neurons 124 , 125 .
- An excitation V-synapse 126 having a weight of w e /2, goes from the input neuron 121 to the larger neuron 124 .
- An excitation V-synapse 127 having a weight of w e /2, goes from the input neuron 122 to the larger neuron 125 .
- An excitation V-synapse 128 having a weight of w e /2, goes from the input neuron 121 to the output neuron 123 .
- An excitation V-synapse 129 having a weight of w e /2, goes from the input neuron 122 to the output neuron 123 .
- An inhibiting V-synapse 132 having the weight w i /2 goes from the larger neuron 124 to the larger neuron 125 .
- An inhibiting V-synapse 133 having the weight w i /2 goes from the larger neuron 125 to the larger neuron 124 . All the synapses shown in FIG. 13 are associated with the delay T syn .
- the emission of the second spike on the input neuron having the smallest value, namely the neuron 121 at time t in1 2 = t in1 1 +Δt 1 in the example of FIG. 14 , triggers the larger neuron 124 .
- the synapse 132 inhibits the other larger neuron 125 , the potential of which is set to the value V t /2.
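The correctness of the maximum circuit rests on the monotonicity of the encoding function: taking the longer of two encoded intervals amounts to encoding the larger of the two values. A short check, with illustrative constants:

```python
# Because f(x) = T_min + x * T_cod is increasing, the max of two encoded
# intervals decodes to the max of the two values, which is the property
# the circuit 120 exploits. Constants are illustrative.
T_MIN, T_COD = 10.0, 100.0

def encode(x):
    return T_MIN + x * T_COD

def decode(dt):
    return (dt - T_MIN) / T_COD

x1, x2 = 0.2, 0.7
dt_out = max(encode(x1), encode(x2))   # what the circuit produces
assert abs(decode(dt_out) - max(x1, x2)) < 1e-9
```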
- FIG. 15 shows a subtraction circuit 140 that calculates the difference between two values x 1 , x 2 received in a synchronised manner on two input nodes 141 , 142 and delivers the result x 1 − x 2 on an output node 143 if it is positive and on another output node 144 if it is negative.
- the subtraction circuit 140 comprises two sync neurons 145 , 146 and two ‘inb’ neurons 147 , 148 .
- An excitation V-synapse 150 having a weight of w e /2, goes from the input neuron 141 to the sync neuron 145 .
- An excitation V-synapse 151 having a weight of w e /2, goes from the input neuron 142 to the sync neuron 146 .
- Three excitation V-synapses 152 , 153 , 154 , each having the weight w e , go from the sync neuron 145 to the output+ neuron 143 , to the output− neuron 144 and to the inb neuron 147 , respectively.
- Three excitation V-synapses 155 , 156 , 157 , each having the weight w e , go from the sync neuron 146 to the output− neuron 144 , to the output+ neuron 143 and to the inb neuron 148 , respectively.
- An inhibiting V-synapse 158 having the weight w i goes from the sync neuron 145 to the inb neuron 148 .
- An inhibiting V-synapse 159 having the weight w i goes from the sync neuron 146 to the inb neuron 147 .
- An excitation V-synapse 160 having a weight of w e /2, goes from the output+ neuron 143 to the inb neuron 148 .
- An excitation V-synapse 161 having a weight of w e /2, goes from the output− neuron 144 to the inb neuron 147 .
- An inhibiting V-synapse 162 having a weight of 2w i , goes from the inb neuron 147 to the output+ neuron 143 .
- An inhibiting V-synapse 163 having a weight of 2w i , goes from the inb neuron 148 to the output− neuron 144 .
- the synapses 150 , 151 , 154 and 157 - 163 are associated with a delay of T syn .
- the synapses 152 and 155 are associated with a delay of T min +3·T syn +2·T neu .
- the synapses 153 and 156 are associated with a delay of 3·T syn +2·T neu .
- The operation of the subtraction circuit 140 according to FIG. 15 is illustrated by FIG. 16 for the case in which the result x 1 − x 2 is positive. Everything happens symmetrically if the result is negative.
- the two excitation events received by the output− neuron 144 at times t in2 2 +T min +4·T syn +3·T neu and t in1 2 +4·T syn +3·T neu arrive after the inhibiting event received at time t in2 2 +3·T syn +2·T neu .
- this neuron 144 thus does not emit any event when Δt 2 < Δt 1 , and the sign of the result is suitably signalled.
- the output+ neuron 143 delivers two events separated by a time interval Δt out that depends on the time intervals Δt 1 and Δt 2 of the two pairs produced by the input neurons 141 , 142 , with Δt out = Δt 1 − Δt 2 +T min = ƒ(x 1 − x 2 ).
- when the two input values are equal, the subtractor circuit 140 shown in FIG. 15 activates the two parallel paths and the result is delivered on both the output+ neuron 143 and the output− neuron 144 , the inb neurons 147 , 148 not having the time to select a winning path.
- the zero neuron 171 is the receiver node of two excitation V-synapses 172 , 173 having a weight of w e /2 and the delay T neu , one coming from the sync neuron 145 and the other from the sync neuron 146 . It is also the receiver node of two inhibiting V-synapses 174 , 175 having a weight of w i /2 and a delay of 2·T neu , one coming from the sync neuron 145 and the other from the sync neuron 146 .
- the zero neuron 171 excites itself via a V-synapse 176 having the weight w e and the delay T neu . It is also the emitter node of two inhibiting V-synapses having the delay T neu , one 177 having the weight w i directed towards the inb neuron 148 and the other 178 having a weight of 2w i directed towards the output− neuron 144 .
- the zero neuron 171 acts as a detector of coincidence between the events delivered by the sync neurons 145 , 146 . Given that these two neurons only deliver events at the time of the second encoding spike of their associated input, detecting this temporal coincidence is equivalent to detecting the equality of the two input values, if the latter are correctly synchronised.
- the zero neuron 171 only produces an event if it receives two events separated by a time interval less than T neu from the sync neurons 145 , 146 . In this case, it directly inhibits the output− neuron 144 via the synapse 178 , and deactivates the inb neuron 148 via the synapse 177 .
- two equal input values provided to the subtractor circuit of FIG. 17 lead to two events separated by a time interval equal to T min , i.e. encoding a difference of zero, at the output of the output+ neuron 143 , and to no event on the output− neuron 144 . If the input values are not equal, the zero neuron 171 is not activated and the subtractor functions in the same manner as that of FIG. 15 .
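At the level of the encoded values, the subtractor behaves as follows. The constants and the function name are illustrative; the zero case maps to T_min on output+, as in the FIG. 17 variant.

```python
# Interval-level sketch of the subtractor 140/170: the result is the
# interval f(|x1 - x2|), delivered on output+ or output- according to
# the sign; equal inputs give T_min on output+. Constants illustrative.
T_MIN, T_COD = 10.0, 100.0

def subtract(x1, x2):
    diff = x1 - x2
    interval = T_MIN + abs(diff) * T_COD
    port = "output+" if diff >= 0 else "output-"
    return port, interval

port, dt_out = subtract(0.5, 0.2)   # -> ("output+", ~40.0)
```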
- FIG. 18 shows a circuit 180 for accumulating positive input values with weighting. Its goal is to load, into an acc neuron 184 , a potential value related to a weighted sum:
- ⁇ 0 , ⁇ 1 , . . . , ⁇ N ⁇ 1 are positive or zero weighting coefficients and the input values x 0 , x 1 , . . . , x N ⁇ 1 are positive or zero.
- the circuit 180 For each input value x k (0 ⁇ k ⁇ N), the circuit 180 comprises a input neuron 181 k and an input ⁇ neuron 182 k each part of a respective group 20 of neurons arranged in the same way as in the group 20 described above in reference to FIG. 2 .
- the outgoing connections of the first and last neurons of these N groups of neurons 20 are configured as a function of the coefficients ⁇ k of the weighted sum to be calculated.
- the first neuron connected to the input neuron 181 k (0 ≤ k < N) is the emitter node of an excitation g e -synapse 182 k having a weight of α k ·w acc and a delay of T min +T syn .
- the last neuron connected to the input neuron 181 k is the emitter node of an inhibiting g e -synapse 183 k having a weight of −α k ·w acc and the delay T syn .
- the acc neuron 184 accumulates the terms α k ·x k .
- the acc neuron 184 is the receiver node of the excitation g e -synapse 182 k and of the inhibiting g e -synapse 183 k .
- the circuit 180 further comprises a sync neuron 185 that is the receiver node of N V-synapses, each having a weight of w e /N and the delay T syn , respectively coming from the last neurons connected to the N input neurons 181 k (0 ≤ k < N).
- the sync neuron 185 is the emitter node of an excitation g e -synapse 186 having the weight w acc and the delay T syn , the receiver node of which is the acc neuron 184 .
- the acc neuron 184 integrates the quantity α k ·V t /T max over a duration Δt k − T min = x k ·T cod .
- the sync neuron 185 is triggered and excites the acc neuron 184 via the g e -synapse 186 .
- the threshold V t is reached by the acc neuron 184 that triggers an event.
- the delay of this event with respect to that delivered by the sync neuron 185 is T max − s·T cod , where s designates the accumulated weighted sum.
- the weighted sum s is only made accessible by the circuit 180 in its inverted form (1 − s).
- the coefficients ⁇ k can be normalised in order for this condition to be met for all the possible values of the xk, i.e. such that
- a weighted addition circuit 190 can have the structure shown in FIG. 19 .
- a circuit 180 for weighted accumulation of the type of that described in reference to FIG. 18 is associated with another acc neuron 188 and with an output neuron 189 .
- the acc neuron 188 is the receiver node of an excitation g e -synapse 191 having the weight w acc and the delay T syn , and the emitter node of an excitation V-synapse 192 having the weight w e and a delay of T min +T syn .
- the output neuron 189 is also the receiver node of an excitation V-synapse 193 having the weight w e and the delay T syn .
- the linearly changing accumulation starts in the acc neuron 188 at the same time as it restarts in the acc neuron 184 of the circuit 180 , the two acc neurons 184 , 188 being excited on the g e -synapses 186 , 191 by the same event coming from the sync neuron 185 .
- the expected weighted sum is represented at the output of the circuit 190 .
- this circuit 190 becomes a simple adder circuit, with a scale factor of 1 ⁇ 2 in order to avoid overflows in the acc neuron 184 .
- the circuit 200 for calculating a linear combination shown in FIG. 20 comprises two accumulation circuits 180 A, 180 B of the type of that described in reference to FIG. 18 .
- the input neurons 181 k of the accumulation circuit 180 A are respectively associated with the coefficients α k for 0 ≤ k < M and with the inverted coefficients −α k for M ≤ k < N. These input neurons 181 k for 0 ≤ k < M receive a pair of spikes representing x k when x k ≥ 0 and thus form neurons of the input+ type for these values x 0 , . . . , x M−1 . The input neurons 181 k of the circuit 180 A for M ≤ k < N receive a pair of spikes representing x k when x k < 0 and thus form neurons of the input− type for these values x M , . . . , x N−1 .
- the input neurons 181 k of the circuit 180 B for weighted accumulation are respectively associated with the inverted coefficients −α k for 0 ≤ k < M and with the coefficients α k for M ≤ k < N. These input neurons 181 k for 0 ≤ k < M receive a pair of spikes representing x k when x k < 0 and thus form neurons of the input− type for these values x 0 , . . . , x M−1 .
- the input neurons 181 k of the circuit 180 B for M ≤ k < N receive a pair of spikes representing x k when x k ≥ 0 and thus form neurons of the input+ type for these values x M , . . . , x N−1 .
- the two accumulation circuits 180 A, 180 B share their sync neuron 185 that is thus the receiver node of 2N V-synapses, each having a weight of w e /N and the delay T syn , coming from last neurons coupled with the 2N input neurons 181 k .
- the sync neuron 185 of the linear combination calculation circuit 200 is therefore triggered once the N input values x 0 , . . . , x N−1 , positive or negative, have been received on the neurons 181 k .
- once all the input values have been received, the acc neuron 184 of the circuit 180 A delivers its event after a time ΔT A = T max − T cod ·(1 − s + ), where s + designates the sum of the terms α k ·x k that are positive or zero.
- the acc neuron 184 of the circuit 180 B similarly delivers its event after a time ΔT B = T max − T cod ·(1 − s − ), where s − designates the sum of the absolute values of the terms α k ·x k that are negative.
- a subtractor circuit 170 that can be of the type of that shown in FIG. 17 then combines the time intervals ΔT A and ΔT B in order to produce the representation of the result s + − s − = Σ k α k ·x k .
- the linear combination calculation circuit 200 of FIG. 20 comprises two excitation V-synapses 198 , 199 , having the weight w e and a delay of T min +T syn , directed towards the input neurons 141 , 142 of the subtractor circuit 170 .
- an excitation V-synapse 201 having the weight w e and the delay T syn goes from the acc neuron 184 of the circuit 180 A to the input neuron 141 of the subtractor circuit 170 .
- An excitation V-synapse 202 having the weight w e and the delay T syn goes from the acc neuron 184 of the circuit 180 B to the other input neuron 142 of the subtractor circuit 170 .
- the output− neuron 144 and the output+ neuron 143 of the subtractor circuit 170 are respectively connected, via excitation V-synapses 205 , 206 having the weight w e and the delay T syn , to two other output+ and output− neurons 203 , 204 that form the outputs of the circuit 200 for calculating a linear combination.
- ⁇ k ⁇ x k ⁇ ) ⁇ (
- a ‘start’ neuron 207 receiving two excitation V-synapses 208 , 209 , having the weight w e and the delay T syn , coming from the output+ neuron 143 and the output ⁇ neuron 144 of the subtractor circuit 170 .
- the start neuron 207 inhibits itself via a V-synapse 210 , having the weight w i and the delay T syn .
- the start neuron 207 delivers a spike simultaneously with the first spike of the output+ or output− neuron 203 , 204 that is activated.
- the coefficients ⁇ k can be normalised in order for the conditions ⁇ ⁇ k ⁇ x k ⁇ 0
- ⁇ k 0 N - 1 ⁇ ⁇ ⁇ k ⁇ ⁇ T max T cod ,
- the input neuron 211 belongs to a group of nodes 20 similar to that described in reference to FIG. 2 .
- the first neuron 213 of this group 20 is the emitter node of an excitation g e -synapse 212 having the weight w acc and a delay of T min +T syn , while the last neuron 215 is the emitter node of an inhibiting g e -synapse 214 having a weight of −w acc and the delay T syn .
- the two g e -synapses 212 , 214 have the same acc neuron 216 as a receiver node. From the last neuron 215 to the acc neuron 216 , there is also a g f -synapse 217 having the weight g mult and a gate-synapse 218 having a weight of 1.
- the circuit 210 further comprises an output neuron 220 that is the receiver node of an excitation V-synapse 221 having the weight w e and a delay of 2·T syn coming from the last neuron 215 , and of an excitation V-synapse 222 having the weight w e and a delay of T min +T syn coming from the acc neuron 216 .
- The operation of the logarithm calculation circuit 210 according to FIG. 21 is illustrated by FIG. 22 .
- the potential value V t ·x is stored in the acc neuron 216 .
- the last neuron 215 further activates the exponential change on the acc neuron 216 at the same time t end 1 via the g f -synapse 217 and the gate-synapse 218 .
- the event transported by the g f -synapse 217 could also arrive later at the acc neuron 216 if it is desired to store, in the latter, the potential value V t ·x while other operations are carried out in the device.
- the component g f of the acc neuron 216 changes according to:
- g f ⁇ ( t ) V t ⁇ ⁇ m ⁇ f ⁇ e - t - t end 1 ⁇ f ( 17 )
- V ⁇ ( t ) V t ⁇ ⁇ ( 1 + x - e - t - t end 1 ⁇ f ) ( 18 )
- the circuit 210 of FIG. 21 delivers the representation of log A (x) when it receives the representation of a real number x such that A ≤ x ≤ 1, where log A (·) designates the base-A logarithm operation. If we consider that in the form (11), the time interval between the two events delivered by the output neuron 220 can exceed T max , the circuit 210 delivers the representation of log A (x) for any number x such that 0 < x ≤ 1.
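The logarithm computed by circuit 210 can be checked numerically: once the gate opens, V(t) follows (18), and the threshold V_t is crossed after a delay τ_f·ln(1/x), i.e. T_cod·log_A(x) with A = e^(−T_cod/τ_f). The constants below are illustrative.

```python
import math

# Numeric sketch of the logarithm circuit 210 (constants illustrative):
# solving V_t * (1 + x - exp(-d / tau_f)) = V_t for the delay d gives
# d = tau_f * ln(1 / x) = T_cod * log_A(x), with A = exp(-T_cod / tau_f).
T_COD = 100.0
TAU_F = 20.0
A = math.exp(-T_COD / TAU_F)

def threshold_delay(x):
    """Delay until threshold, from equation (18)."""
    return TAU_F * math.log(1.0 / x)

x = 0.3
assert abs(threshold_delay(x) - T_COD * math.log(x, A)) < 1e-9
```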
- the input neuron 231 belongs to a group of nodes 20 similar to that described in reference to FIG. 2 .
- the first neuron 233 of this group 20 is the emitter node of a g f -synapse 232 having the weight g mult and a delay of T min +T syn , as well as of an excitation gate-synapse 234 having a weight of 1 and a delay of T min +T syn .
- the last neuron 235 of the group 20 is the emitter node of an inhibiting gate-synapse 236 having a weight of −1 and the delay T syn , as well as of an excitation g e -synapse 237 having the weight w acc and the delay T syn .
- the synapses 232 , 234 , 236 , 237 have the same acc neuron 238 as a receiver node.
- the circuit 230 further comprises an output neuron 240 that is the receiver node of an excitation V-synapse 241 having the weight w e and a delay of 2·T syn coming from the last neuron 235 , and of an excitation V-synapse 242 having the weight w e and a delay of T min +T syn coming from the acc neuron 238 .
- The operation of the exponentiation circuit 230 according to FIG. 23 is illustrated by FIG. 24 .
- the component g f of the acc neuron 238 changes exponentially, and its membrane potential follows:
- V(t) = V t ·(1 − e^(−(t − t st 1 )/τ f ))   (20)
- the potential value V t ·(1 − A x ) is stored in the acc neuron 238 , where, as above, A = e^(−T cod /τ f ).
- the last neuron 235 further activates the linear dynamics having the weight w acc on the acc neuron 238 at the same time t end 1 .
- the membrane potential of the neuron 238 thus changes according to:
- V(t) = V t ·(1 − A x + (t − t end 1 )/T cod )   (21)
- the circuit 230 of FIG. 23 thus delivers the representation of A x when it receives the representation of a number x between 0 and 1.
- This circuit can accept input values x greater than 1 (Δt > T max ) and also deliver the representation of A x on its output neuron 240 .
- the circuit 230 of FIG. 23 carries out the inversion of the operation carried out by the circuit 210 of FIG. 21 .
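- As a numerical illustration, equations (20) and (21) imply that after the charging phase of duration x·T cod the stored potential is V t ·(1 − A x ), and the linear ramp of slope V t /T cod then needs a time of A x ·T cod to reach the threshold; the output interval therefore encodes A x , inverting the operation of the circuit 210. The sketch below uses arbitrary constants.

```python
import math

# Arbitrary illustrative constants (not specified in the description)
T_cod = 100e-3
tau_f = 20e-3
A = math.exp(-T_cod / tau_f)

def exp_delay(x: float) -> float:
    """Delay encoding A**x on the output of circuit 230 (equations (20)-(21))."""
    # After charging for x*T_cod the stored potential is V_t*(1 - A**x);
    # a linear ramp of slope V_t/T_cod then reaches threshold after A**x * T_cod.
    stored = 1.0 - math.exp(-(x * T_cod) / tau_f)   # V/V_t after the decay phase
    return (1.0 - stored) * T_cod                   # ramp time = A**x * T_cod

x = 0.4
assert abs(exp_delay(x) / T_cod - A ** x) < 1e-9

# Round trip: the exponentiation circuit inverts the logarithm circuit.
y = math.log(x, A)            # what circuit 210 delivers for input x
assert abs(A ** y - x) < 1e-9
```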
- the first neuron 253 k of this group 20 k is the emitter node of an excitation g e -synapse 252 k having the weight w acc and a delay of T min +T syn
- the last neuron 255 k is the emitter node of an inhibiting g e -synapse 254 k having a weight of ⁇ w acc and the delay T syn .
- the two g e -synapses 252 k , 254 k from the group of nodes 20 k have, as a receiver node, the same acc neuron 256 k , which plays a role similar to the acc neuron 216 in FIG. 21 .
- the circuit 250 further comprises a sync neuron 260 that is the receiver node of two excitation V-synapses 261 1 , 261 2 having a weight of w e /2 and the delay T syn coming, respectively, from the last neurons 255 1 , 255 2 .
- a g f -synapse 262 having the weight g mult and the delay T syn and an excitation gate-synapse 264 having a weight of 1 and the delay T syn go from the sync neuron 260 to the acc neuron 256 1 .
- a g f -synapse 265 having the weight g mult and the delay T syn and an excitation gate-synapse 266 having a weight of 1 and the delay T syn go from the acc neuron 256 1 to the acc neuron 256 2 .
- the circuit 250 comprises another acc neuron 268 that plays a role similar to the acc neuron 238 in FIG. 23 .
- the acc neuron 268 is the receiver node of a g f -synapse 269 , having the weight g mult and a delay of 3T syn , and of an excitation gate-synapse 270 , having a weight of 1 and a delay of 3T syn , both coming from the sync neuron 260 .
- the acc neuron 268 is the receiver node of an inhibiting gate-synapse 271 , having a weight of ⁇ 1 and the delay T syn , and of an excitation g e -synapse 272 , having the weight w acc and the delay T syn , both coming from the acc neuron 256 2 .
- the circuit 250 has an output neuron 274 that is the receiver node of an excitation V-synapse 275 , having the weight w e and a delay of 2T syn , coming from the acc neuron 256 2 and of an excitation V-synapse 276 , having the weight w e and a delay of T min +T syn , coming from the acc neuron 268 .
- The operation of the multiplier circuit 250 according to FIG. 25 is illustrated by FIG. 26 .
- Each of the two acc neurons 256 1 , 256 2 initially behaves like the acc neuron 216 of FIG. 21 , with a linear progression 278 1 , 278 2 having the weight w acc on a first period having a respective duration of x 1 ·T cod , x 2 ·T cod , leading to storage of the potential values V t ·x 1 and V t ·x 2 in the acc neurons 256 1 , 256 2 .
- the membrane potential of this acc neuron 256 2 thus has a plateau 279 that lasts until its reactivation via the synapses 265 , 266 .
- V t ·(1 − e^(−(t end3 1 − t st3 1 )/τ f )) = V t ·(1 − x 1 ·x 2 )   (22)
- the circuit 250 of FIG. 25 thus delivers, on its output neuron 274 , the representation of the product x 1 ·x 2 of two numbers x 1 , x 2 between A and 1, the respective representations of which it receives on its input neurons 251 1 , 251 2 .
- the pairs of events did not have to be received in a synchronised manner on the input neurons 251 1 , 251 2 since the sync neuron 260 handles the synchronisation.
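- In ordinary arithmetic, the principle of the circuit 250 amounts to chaining two exponential-decay phases whose durations encode log A (x 1 ) and log A (x 2 ), so that the remaining fraction is x 1 ·x 2 , as in equation (22). A sketch with arbitrary constants:

```python
import math

# Arbitrary illustrative constants (not specified in the description)
T_cod, tau_f = 100e-3, 20e-3
A = math.exp(-T_cod / tau_f)

def mult_by_decay(x1: float, x2: float) -> float:
    """Chain the two decay phases of circuit 250: decay for log_A(x1)*T_cod,
    then for log_A(x2)*T_cod, as in equation (22)."""
    d1 = -tau_f * math.log(x1)   # first decay duration (encodes log_A(x1))
    d2 = -tau_f * math.log(x2)   # second decay duration
    return math.exp(-(d1 + d2) / tau_f)   # remaining fraction = x1*x2

assert abs(mult_by_decay(0.5, 0.6) - 0.30) < 1e-9
```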
- FIG. 27 shows a multiplier circuit 290 that calculates the product of two signed values x 1 , x 2 . All the synapses shown in FIG. 27 have the delay T syn .
- For each input value x k (1 ≤ k ≤ 2), the multiplier circuit 290 comprises an input+ neuron 291 k and an input− neuron 292 k that are the emitter nodes of two respective V-synapses 293 k and 294 k having the weight w e .
- the V-synapses 293 1 and 294 1 are directed towards an input neuron 251 1 of a multiplier circuit 250 of the type shown in FIG. 25 , while the V-synapses 293 2 and 294 2 are directed towards the other input neuron 251 2 of the circuit 250 .
- the multiplier circuit 290 has an output+ neuron 295 and an output− neuron 296 that are the receiver nodes of two respective excitation V-synapses 297 and 298 having the weight w e coming from the output neuron 274 of the circuit 250 .
- the multiplier circuit 290 also comprises four sign neurons 300 - 303 connected to form logic for selecting the sign of the result of the multiplication.
- Each sign neuron 300 - 303 is the receiver node of two respective excitation V-synapses having a weight of w e /4 coming from two of the four input neurons 291 k , 292 k .
- the sign neuron 300 connected to the input+ neurons 291 1 , 291 2 detects the reception of two positive inputs x 1 , x 2 . It forms the emitter node of an inhibiting V-synapse 305 having a weight of 2w i going to the output− neuron 296 .
- the sign neuron 303 connected to the input ⁇ neurons 292 1 , 292 2 detects the reception of two negative inputs x 1 , x 2 . It forms the emitter node of an inhibiting V-synapse 308 having a weight of 2w i going to the output ⁇ neuron 296 .
- the sign neuron 301 connected to the input− neuron 292 1 and the input+ neuron 291 2 detects the reception of a negative input x 1 and of a positive input x 2 . It forms the emitter node of an inhibiting V-synapse 306 having a weight of 2w i going to the output+ neuron 295 .
- the sign neuron 302 connected to the input+ neuron 291 1 and the input ⁇ neuron 292 2 detects the reception of a positive input x 1 and of a negative input x 2 . It forms the emitter node of an inhibiting V-synapse 307 having a weight of 2w i going to the output+ neuron 295 .
- Inhibiting V-synapses are arranged between the sign neurons 300 - 303 in order to ensure that only one of them acts in order to inhibit one of the output+ neuron 295 and the output ⁇ neuron 296 .
- Each sign neuron 300 - 303 corresponding to a sign (+ or −) of the product is thus the emitter node of two inhibiting V-synapses having a weight of w e /2 going, respectively, to the two sign neurons corresponding to the opposite sign.
- the circuit 290 of FIG. 27 delivers two events separated by a time interval representing the absolute value of the product x 1 ·x 2 , on its output+ neuron 295 or its output− neuron 296 according to the sign of the product.
- Logic for detecting a zero on one of the inputs can be added thereto, like in the case of FIG. 17 , in order to make sure that an input of zero will produce the time interval T min between two events produced on the output+ neuron 295 and not the output ⁇ neuron 296 .
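- The sign-selection logic of the circuit 290 amounts to the rule of signs: exactly one sign neuron fires and inhibits the output neuron of the opposite sign, and a zero input is routed to the output+ neuron. A plain truth-table sketch, with an illustrative function name of our own:

```python
def product_output(sign_x1: int, sign_x2: int) -> str:
    """Mimic the sign-selection logic of circuit 290: each input arrives on
    its input+ or input- neuron; exactly one sign neuron fires and inhibits
    the output neuron of the opposite sign."""
    if sign_x1 == 0 or sign_x2 == 0:
        return "output+"   # a zero input is delivered on output+ (cf. FIG. 17 logic)
    return "output+" if sign_x1 * sign_x2 > 0 else "output-"

assert product_output(+1, +1) == "output+"
assert product_output(-1, -1) == "output+"
assert product_output(+1, -1) == "output-"
assert product_output(0, -1) == "output+"
```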
- FIG. 28 shows a circuit 310 that reconstructs a signal from its derivatives provided in signed form on a neuron of a pair of input+ and input ⁇ neurons 311 , 312 .
- the integrated signal is presented, according to its sign, by a neuron of a pair of output+ and output ⁇ neurons 313 , 314 .
- the synapses 321 - 332 shown in FIG. 28 are all excitation V-synapses having the weight w e . They all have the delay T syn except the synapse 329 , the delay of which is T min +T syn .
- the circuit 317 substantially consists of a pair of output+ and output ⁇ neurons 315 , 316 connected to the same recall neuron 15 in the manner shown in FIG. 1 .
- Another init neuron 318 of the integration circuit 310 is the emitter node of a synapse 325 , the receiver node of which is the recall neuron 15 of the circuit 317 .
- the init neuron 318 loads the integrator with its initial value x 0 stored in the circuit 317 .
- Synapses 326 , 327 are arranged to provide feedback from the output+ neuron 143 of the linear combination circuit 200 to its input+ neuron 181 0 and from the output− neuron 144 of the linear combination circuit 200 to its input− neuron 181 0 .
- a start neuron 319 is the emitter node of two synapses 328 , 329 that feed a zero value in the form of two events separated by the time interval T min on the input+ neuron 181 1 of the integration circuit 180 .
- the output+ neuron 143 and the output ⁇ neuron 144 of the linear combination circuit 200 are the respective emitter nodes of two synapses 330 , 331 , the receiver nodes of which are, respectively, the output+ neuron 313 and the output ⁇ neuron 314 of the integration circuit 310 .
- the integration circuit 310 has a new input neuron 320 that is the receiver node of a synapse 332 coming from the start neuron 207 of the linear combination circuit 200 .
- the initial value x 0 is, according to its sign, delivered on the output+ neuron 313 or the output ⁇ neuron 314 once the init neuron 318 and then the start neuron 319 have been activated.
- an event is delivered by the new input neuron 320 .
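- Functionally, the integration circuit 310 accumulates successive signed derivative samples into a running value initialised to x 0 . In conventional arithmetic this corresponds to an Euler-style accumulation, sketched here under the assumption of a fixed time step:

```python
def reconstruct(x0: float, derivatives, dt: float):
    """Accumulate signed derivative samples into the signal, starting from
    the initial value x0 loaded by the init neuron (Euler-style sketch)."""
    x, out = x0, []
    for d in derivatives:
        x += d * dt          # feedback of the running value plus the new derivative
        out.append(x)
    return out

samples = reconstruct(0.0, [1.0, 1.0, -2.0], 0.5)
assert samples == [0.5, 1.0, 0.0]
```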
- The circuits described above in reference to FIGS. 1-28 can be assembled and configured to execute numerous types of calculations in which the values manipulated, at the input and/or the output, are represented by time intervals between events received or delivered by neurons.
- FIG. 29 shows a processing device that implements the resolution of a first-order differential equation in which τ and X ∞ are parameters that can take on various values.
- the synapses shown in FIG. 29 are all excitation V-synapses having the weight w e and the delay T syn .
- the device of FIG. 29 uses:
- Two synapses 341 , 342 provide feedback from the output node output+ 313 of the integrator circuit 310 to the other input node input+ 181 0 of the linear combination circuit 200 , and from the output node output ⁇ 314 of the circuit 310 to the other input node input ⁇ 181 0 of the circuit 200 .
- Two synapses 343 , 344 go from the output node output+ 203 of the linear combination circuit 200 to the input node input+ 311 of the integrator circuit 310 and, respectively, from the output node output− 204 of the circuit 200 to the input node input− 312 of the circuit 310 .
- the device of FIG. 29 has a pair of output+ and output ⁇ neurons 346 , 347 that are the receiver nodes of two synapses coming from the output+ neuron 313 and the output ⁇ neuron 314 of the integrator circuit 310 .
- the init and start neurons 348 , 349 allow the process of integration to be initialised and started.
- the init neuron 348 must be triggered before the integration process in order to load the initial value into the integrator circuit 310 .
- the start neuron 349 is triggered in order to deliver the first value from the circuit 310 .
- the device of FIG. 29 is made using 118 neurons if the components as described in reference to the preceding figures are used. This number of neurons can be reduced via optimisation.
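- The differential equation itself is not reproduced in this excerpt; assuming, purely for illustration, the first-order relaxation τ·dX/dt = X ∞ − X suggested by the parameters τ and X ∞ , the behaviour that the device of FIG. 29 computes can be sketched with a conventional Euler scheme:

```python
def relax(x0: float, x_inf: float, tau: float, dt: float, steps: int) -> float:
    """Euler integration of tau*dX/dt = X_inf - X (assumed equation form)."""
    x = x0
    for _ in range(steps):
        x += dt * (x_inf - x) / tau
    return x

# X(t) converges towards X_inf regardless of the initial value.
x = relax(x0=0.0, x_inf=1.0, tau=0.1, dt=1e-3, steps=2000)
assert abs(x - 1.0) < 1e-6
```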
- FIG. 31 shows a processing device that implements the resolution of a second-order differential equation in which ζ and ω 0 are parameters that can take on various values.
- the synapses shown in FIG. 31 are all excitation V-synapses having the weight w e and the delay T syn . Since the values manipulated in this example are all positive, it is not necessary to provide two distinct paths for the positive values and for the negative values. Only the path relating to the positive values is therefore included.
- the device of FIG. 31 uses:
- a synapse 353 goes from the output node output 203 of the linear combination circuit 200 to the input node input 311 of the first integrator circuit 310 A.
- a synapse 354 goes from the output node output 313 of the first integrator circuit 310 A to the input node input 311 of the second integrator circuit 310 B.
- the device of FIG. 31 has an output neuron 356 that is the receiver node of a synapse coming from the output neuron 313 of the second integrator circuit 310 B.
- the init and start neurons 358 , 359 allow the process of integration to be initialised and started.
- the init neuron 358 must be triggered before the integration process in order to load the initial value into the integrator circuits 310 A, 310 B.
- the start neuron 359 is triggered in order to deliver the first value from the second integrator circuit 310 B.
- the device of FIG. 31 is made using 187 neurons if the components as described in reference to the preceding figures are used. This number of neurons can be reduced via optimisation.
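- Here too the equation is not reproduced in this excerpt; assuming, for illustration, the canonical damped second-order form d²X/dt² + 2ζω 0 ·dX/dt + ω 0 ²·X = 0 for the parameters ζ and ω 0 , a conventional resolution can be sketched as:

```python
import math

def oscillator(x0: float, v0: float, zeta: float, omega0: float,
               dt: float, steps: int) -> float:
    """Semi-implicit Euler for x'' + 2*zeta*omega0*x' + omega0**2*x = 0
    (assumed canonical form for the parameters zeta and omega0)."""
    x, v = x0, v0
    for _ in range(steps):
        v += dt * (-2.0 * zeta * omega0 * v - omega0 ** 2 * x)
        x += dt * v
    return x

# With positive damping the amplitude decays towards zero.
x_end = oscillator(x0=1.0, v0=0.0, zeta=0.2, omega0=2 * math.pi, dt=1e-4, steps=50000)
assert abs(x_end) < 0.05
```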
- FIG. 33 shows a processing device that implements the resolution of the system of non-linear differential equations proposed by E. Lorenz for the modelling of a deterministic non-periodic flow (“Deterministic Nonperiodic Flow”, Journal of the Atmospheric Sciences , Vol. 20, No. 2, pages 130-141, March 1963), namely, in its classical form: dX/dt = σ·(Y − X), dY/dt = X·(ρ − Z) − Y, dZ/dt = X·Y − β·Z (26)
- the variables were scaled in order to obtain state variables X, Y and Z each changing within the interval [0, 1] in such a way that they could be represented in the form (11) above.
- the synapses shown in FIG. 33 are all excitation V-synapses having the weight w e and the delay T syn . In order to simplify the drawing, only one path is shown, but it should be understood that each time, there is a path for the positive values of the variables and, in parallel, a path for their negative values.
- the device of FIG. 33 uses:
- Three synapses go, respectively, from the output neuron 92 0 of the synchroniser circuit 90 to the input neuron 311 A of the integrator circuit 310 A, from the output neuron 92 1 of the circuit 90 to the input neuron 311 B of the integrator circuit 310 B, and from the output neuron 92 2 of the circuit 90 to the input neuron 311 C of the integrator circuit 310 C.
- the input neuron 291 A 1 of the multiplier circuit 290 A is excited from the output neuron 313 A of the integrator circuit 310 A, and its input neuron 291 A 2 is excited from the output neuron 313 C of the integrator circuit 310 C.
- the input neuron 291 B 1 of the multiplier circuit 290 B is excited from the output neuron 313 A of the integrator circuit 310 A, and its input neuron 291 B 2 is excited from the output neuron 313 B of the integrator circuit 310 B.
- the device of FIG. 33 has three output neurons 361 , 362 and 363 that are the receiver nodes of three respective excitation V-synapses coming from the output neurons 313 A, 313 B and 313 C of the integrator circuits 310 A, 310 B, 310 C. These three output neurons 361 - 363 deliver pairs of events, the intervals of which represent values of the solution {X(t), Y(t), Z(t)} calculated for the system (26).
- the device of FIG. 33 is made using 549 neurons if the components as described in reference to the preceding figures are used. This number of neurons can be significantly reduced via optimisation.
- the points in FIG. 34 each correspond to a triplet {X(t), Y(t), Z(t)} of output values encoded by three pairs of spikes delivered by the three output neurons 361 - 363 , respectively, in a three-dimensional graph illustrating a simulation of the device shown in FIG. 33 .
- the point P represents the initialisation values X(0), Y(0), Z(0) of the simulation.
- the other points represent triplets calculated by the device of FIG. 33 .
- the system behaves in the expected manner, in accordance with the strange attractor described by Lorenz.
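- The system (26) is the classical Lorenz system; taking the parameters of the 1963 paper (σ = 10, ρ = 28, β = 8/3, an assumption here, since the scaled variant used by the device is not reproduced in this excerpt), the bounded trajectory of FIG. 34 can be reproduced in conventional arithmetic:

```python
def lorenz_step(x, y, z, sigma, rho, beta, dt):
    """One Euler step of the Lorenz system (26) in its classical form."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 1.0, 1.0, 1.0
for _ in range(10000):
    x, y, z = lorenz_step(x, y, z, 10.0, 28.0, 8.0 / 3.0, 1e-3)
# The trajectory stays on the bounded strange attractor.
assert max(abs(x), abs(y), abs(z)) < 100.0
```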
- These circuits can then be assembled to carry out more sophisticated calculations. They form a sort of building block from which powerful calculation structures can be built. Examples of this have been shown with respect to the resolution of differential equations.
- the processing nodes are typically organised as a matrix. This lends itself well in particular to an implementation using FPGA.
- a programmable array 400 forming the set of processing nodes, or a portion of this set, in an exemplary implementation of the processing device is illustrated schematically in FIG. 35 .
- the array 400 consists of multiple neurons all having the same model of behaviour according to the events received on their connections.
- the behaviour can be modelled by the equations (1) indicated above, with identical parameters ⁇ m and ⁇ ⁇ for the various nodes of the array.
- Programming or configuration logic 420 is associated with the array 400 in order to adjust the synaptic weights and the delay parameters of the connections between the nodes of the array 400 .
- This configuration is carried out in a manner analogous to that which is routinely practised in the field of artificial neural networks.
- the configuration of the parameters of the connections is carried out according to the calculation program that will be executed and while taking into account the relationship used between the time intervals and the values that they represent, for example the relationship (11). If the program is broken up into elementary operations, the configuration can result from an assembly of circuits of the type of those that were described above. This configuration is produced under the control of a control unit 410 provided with a man-machine interface.
- Another role of the control unit 410 is to provide the input values to the programmable array 400 , in the form of events separated by suitable time intervals, in order for the processing nodes of the array 400 to execute the calculation and deliver the results. These results are then recovered by the control unit 410 in order to be presented to a user or to an application that uses them.
- This calculation architecture is well suited for rapidly carrying out massively parallel calculations.
Description
- The present invention relates to data processing techniques. Embodiments implement a new way of carrying out calculations in machines, in particular in programmable machines.
- For the most part, current computers are based on the Von Neumann architecture. The data and the program instructions are stored in a memory that is accessed sequentially by an arithmetic logic unit in order to execute the program on the data. This sequential architecture is relatively inefficient, in particular because of the need for numerous memory accesses, both for reading and for writing.
- The search for more energy-efficient alternatives has led to the proposal of clockless processing architectures that attempt to imitate the operation of the brain. Recent projects, such as the DARPA SyNAPSE program, have led to the development of silicon-based neuromorphic chip technologies, which allow a new type of computer, inspired by the shape, the operation and the architecture of the brain, to be built. The main advantages of these clockless systems are their energy efficiency and the fact that performance is proportional to the quantity of neurons and synapses used. Several platforms have been developed in this context, in particular:
- IBM TrueNorth (Paul A. Merolla, et al.: “A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface”, Science, Vol. 345, No. 6197, pages 668-673, August 2014);
- Neurogrid (Ben V. Benjamin, et al.: “Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations”, Proceedings of the IEEE, Vol. 102, No. 5, pages 699-716, May 2014);
- SpiNNaker (Steve B. Furber, et al.: “The SpiNNaker Project”, Proceedings of the IEEE, Vol. 102, No. 5, pages 652-665, May 2014).
- These machines substantially aim to simulate biology. Their main uses are in the field of learning, notably in order to execute deep learning architectures such as neural networks or deep belief networks. They are effective in several fields, such as artificial vision, speech recognition and language processing.
- There are other options such as the NEF (“Neural Engineering Framework”) capable of simulating certain functionalities of the brain and in particular carrying out visual, cognitive and motor tasks (Chris Eliasmith, et al.: “A Large-Scale Model of the Functioning Brain”, Science, Vol. 338, No. 6111, pages 1202-1205, November 2012).
- These various approaches do not propose a general methodology for executing calculations in a programmable machine.
- An object of the present disclosure is to propose a novel approach for the representation of the data and the execution of calculations. It is desirable for this approach to be suitable for an implementation having reduced energy consumption and massive parallelism.
- A data processing device is proposed, comprising a set of processing nodes and connections between the nodes. Each connection has an emitter node and a receiver node out of the set of processing nodes and is configured to transmit, to the receiver node, events delivered by the emitter node. Each node is arranged to vary a respective potential value according to events that it receives and to deliver an event when the potential value reaches a predefined threshold. At least one input value of the data processing device is represented by a time interval between two events received by at least one node, and at least one output value of the data processing device is represented by a time interval between two events delivered by at least one node.
- The processing nodes form neuron-type calculation units. However, it is not especially desired here to imitate the operation of the brain. The term “neuron” is used in the present disclosure for linguistic convenience but does not necessarily mean strong resemblance to the operating mode of the neurons of the cortex.
- By using a specific temporal organisation of the events in the processing device, as well as various properties of the connections (synapses), an overall calculation framework, suitable for calculating the elementary mathematical functions, can be obtained. All the existing mathematical operators can then be implemented, whether linear or not, without necessarily having to use a Von Neumann architecture. From that point on, it is possible for the device to function like a conventional computer, but without requiring incessant back-and-forth trips to the memory and without being based on floating point precision. It is the temporal concurrence of synaptic events, or their temporal offsets, that form the basis for the representation of the data.
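- Although the exact form (11) is not reproduced in this excerpt, an affine relationship of the type Δt = T min + x·T cod between a value and its inter-event interval, consistent with the intervals T min and T cod used elsewhere in the description, can be sketched as follows (the constants are arbitrary):

```python
# Arbitrary illustrative constants (not taken from the description)
T_min = 10e-3    # minimum inter-event interval, in seconds
T_cod = 100e-3   # coding window, in seconds

def encode(x: float) -> float:
    """Map a value in [0, 1] to a time interval between two events."""
    assert 0.0 <= x <= 1.0
    return T_min + x * T_cod

def decode(dt: float) -> float:
    """Recover the value from the interval between two received events."""
    return (dt - T_min) / T_cod

assert abs(decode(encode(0.75)) - 0.75) < 1e-12
assert encode(0.0) == T_min   # zero maps to the shortest allowed interval
```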
- The proposed methodology is consistent with the neuromorphic architectures that do not make any distinction between memory and calculation. Each connection of each processing node stores information and simultaneously uses this information for the calculation. This is very different from the prevailing organisation in conventional computers that distinguishes between memory and processing and causes the Von Neumann bottleneck, in which the majority of the calculation time is dedicated to moving information between the memory and the central processing unit (John Backus: “Can Programming Be Liberated from the von Neumann Style?: A Functional Style and Its Algebra of Programs”, Communications of the ACM, Vol. 21, No. 8, pages 613-641, August 1978).
- The operation is based on communication governed by events (“event-driven”), as in biological neurons, thus allowing execution with massive parallelism.
- In one embodiment of the device, each processing node is arranged to reset its potential value when it delivers an event. The reset can in particular be to a zero potential value.
- Numerous embodiments of the device for processing data include, among the connections between the nodes, one or more potential variation connections, each having a respective weight. The receiver node of such a connection is arranged to respond to an event received on this connection by adding the weight of the connection to its potential value.
- The potential variation connections can include excitation connections, which have a positive weight, and inhibiting connections, which have a negative weight.
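- A minimal sketch of the node behaviour just described: each event received on a potential variation connection adds the connection's weight (positive for excitation, negative for inhibition) to the potential, and the node delivers an event and resets when the predefined threshold is reached. The class name and the unit threshold are illustrative, not taken from the description:

```python
THRESHOLD = 1.0   # predefined threshold for the potential value (normalised)

class Node:
    """Minimal sketch of a processing node: each event received on a
    potential variation connection adds the connection's weight to V,
    and the node delivers an event (and resets V) at threshold."""
    def __init__(self):
        self.v = 0.0
    def receive(self, weight: float) -> bool:
        self.v += weight          # excitation (w > 0) or inhibition (w < 0)
        if self.v >= THRESHOLD:
            self.v = 0.0          # reset on event delivery
            return True           # an event is delivered
        return False

n = Node()
assert n.receive(0.5) is False    # half-threshold weight: no event yet
assert n.receive(0.5) is True     # second event reaches threshold -> fires
assert n.v == 0.0                 # potential reset after delivery
```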
- In order to manipulate a value in the device, the set of processing nodes can comprise at least one first node forming the receiver node of a first potential variation connection having a first positive weight at least equal to the predefined threshold for the potential value, and at least one second node forming the receiver node of a second potential variation connection having a weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value. The aforementioned first node further forms the emitter node and the receiver node of a third potential variation connection having a weight equal to the opposite of the first weight, as well as the emitter node of a fourth connection, while the second node further forms the emitter node of a fifth connection. The first and second potential variation connections are thus configured to each receive two events separated by a first time interval representing an input value whereby the fourth and fifth connections transport respective events having between them a second time interval related to the first time interval.
- Various operations can be carried out using a device according to the invention.
- In particular, an example of a device for processing data comprises at least one minimum calculation circuit, which itself comprises:
- first and second input nodes;
- an output node;
- first and second selection nodes;
- first, second, third, fourth, fifth and sixth potential variation connections, each having a first positive weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value;
- seventh and eighth potential variation connections each having a second weight opposite to the first weight; and
- ninth and tenth potential variation connections each having a third weight double of the second weight.
- In this minimum calculation circuit, the first input node forms the emitter node of the first and third connections and the receiver node of the tenth connection, the second input node forms the emitter node of the second and fourth connections and the receiver node of the ninth connection, the first selection node forms the emitter node of the fifth, seventh and ninth connections and the receiver node of the first and eighth connections, the second selection node forms the emitter node of the sixth, eighth and tenth connections and the receiver node of the second and seventh connections, and the output node forms the receiver node of the third, fourth, fifth and sixth connections.
- Another example of a device for processing data comprises at least one maximum calculation circuit, which itself comprises:
- first and second input nodes;
- an output node;
- first and second selection nodes;
- first, second, third and fourth potential variation connections, each having a first positive weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value; and
- fifth and sixth potential variation connections each having a second weight equal to double the opposite of the first weight.
- In this maximum calculation circuit, the first input node forms the emitter node of the first and third connections, the second input node forms the emitter node of the second and fourth connections, the first selection node forms the emitter node of the fifth connection and the receiver node of the first and sixth connections, the second selection node forms the emitter node of the sixth connection and the receiver node of the second and fifth connections, and the output node forms the receiver node of the third and fourth connections.
- Another example of a device for processing data comprises at least one subtractor circuit, which itself comprises:
- first and second synchronisation nodes;
- first and second inhibition nodes;
- first and second output nodes;
- first, second, third, fourth, fifth and sixth potential variation connections each having a first positive weight at least equal to the predefined threshold for the potential value;
- seventh and eighth potential variation connections each having a second weight equal to half the first weight;
- ninth and tenth potential variation connections each having a third weight opposite to the first weight; and
- eleventh and twelfth potential variation connections each having a fourth weight double of the third weight.
- In this subtractor circuit, the first synchronisation node forms the emitter node of the first, second, third and ninth connections, the second synchronisation node forms the emitter node of the fourth, fifth, sixth and tenth connections, the first inhibition node forms the emitter node of the eleventh connection and the receiver node of the third, eighth and tenth connections, the second inhibition node forms the emitter node of the twelfth connection and the receiver node of the sixth, seventh and ninth connections, the first output node forms the emitter node of the seventh connection and the receiver node of the first, fifth and eleventh connections, and the second output node forms the emitter node of the eighth connection and the receiver node of the second, fourth and twelfth connections. The first synchronisation node is configured to receive, on at least one potential variation connection having the second weight, a first pair of events having between them a first interval of time representing a first operand. The second synchronisation node is configured to receive, on at least one potential variation connection having the second weight, a second pair of events having between them a second interval of time representing a second operand, whereby a third pair of events having between them a third time interval is delivered by the first output node if the first time interval is longer than the second time interval and by the second output node if the first time interval is shorter than the second time interval, the third time interval representing the absolute value of the difference between the first and second operand.
- The subtractor circuit can further comprise zero detection logic including at least one detection node associated with detection and inhibition connections with the first and second synchronisation nodes, one of the first and second inhibition nodes and one of the first and second output nodes. The detection and inhibition connections are faster than the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh and twelfth connections, in order to inhibit the production of events by one of the first and second output nodes when the first and second time intervals are substantially equal.
- In various embodiments of the device, the set of processing nodes comprises at least one node arranged to vary a current value according to events received on at least one current adjustment connection, and to vary its potential value over time at a rate proportional to said current value. Such a processing node can in particular be arranged to reset its current value to zero when it delivers an event.
- The current value in at least some of the nodes has a component that is constant between two events received on at least one constant current component adjustment connection having a respective weight. The receiver node of a constant current component adjustment connection is arranged to react to an event received on this connection by adding the weight of the connection to the constant component of its current value.
- Another example of a device for processing data comprises at least one inverter memory circuit, which itself comprises:
- an accumulator node;
- first, second and third constant current component adjustment connections, the first and third connections having the same positive weight and the second connection having a weight opposite to the weight of the first and third connections; and
- at least one fourth connection,
- In this inverter memory circuit, the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection, and the first and second connections are configured to respectively address, to the accumulator node, first and second events having between them a first time interval related to a time interval representing a value to be memorised, whereby the accumulator node then responds to a third event received on the third connection by increasing its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to the first time interval.
- Another example of a device for processing data comprises at least one memory circuit, which itself comprises:
- first and second accumulator nodes;
- first, second, third and fourth constant current component adjustment connections, the first, second and fourth connections each having a first positive weight and the third connection having a second weight opposite to the first weight; and
- at least one fifth connection.
- In this memory circuit, the first accumulator node forms the receiver node of the first connection and the emitter node of the third connection, the second accumulator node forms the receiver node of the second, third and fourth connections and the emitter node of the fifth connection, and the first and second connections are configured to respectively address, to the first and second accumulator nodes, first and second events having between them a first time interval related to a time interval representing a value to be memorised, whereby the second accumulator node then responds to a third event received on the fourth connection by increasing its potential value until delivery of a fourth event on the fifth connection, the third and fourth events having between them a second time interval related to the first time interval.
- The memory circuit can further comprise a sixth connection having the first accumulator node as an emitter node, the sixth connection delivering an event to signal the availability of the memory circuit for reading.
- Another example of a device for processing data comprises at least one synchronisation circuit, which includes a number N>1 of memory circuits, of the type mentioned just above, and a synchronisation node. The synchronisation node is sensitive to each event delivered on the sixth connection of one of the N memory circuits via a respective potential variation connection having a weight equal to the first weight divided by N. The synchronisation node is arranged to trigger simultaneous reception of the third events via the respective fourth connections of the N memory circuits.
- Another example of a device for processing data comprises at least one accumulation circuit, which itself comprises:
- N inputs each having a respective weighting coefficient, N being an integer greater than 1;
- an accumulator node;
- a synchronisation node;
- for each of the N inputs of the accumulation circuit:
- a first constant current component adjustment connection having a first positive weight proportional to the respective weighting coefficient of said input; and
- a second constant current component adjustment connection having a second weight opposite to the first weight;
- a third constant current component adjustment connection having a third positive weight.
- In this accumulation circuit, the accumulator node forms the receiver node of the first, second and third connections, and the synchronisation node forms the emitter node of the third connection. For each of the N inputs, the first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval representing a respective operand provided on said input. The synchronisation node is configured to deliver a third event once the first and second events have been addressed for each of the N inputs, whereby the accumulator node increases its potential value until delivery of a fourth event. The third and fourth events have between them a second time interval related to a time interval representing a weighted sum of the operands provided on the N inputs.
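The timing arithmetic behind such an accumulation can be sketched numerically: each input contributes a charge proportional to its weighting coefficient times its encoded interval, and the final constant-current ramp converts the accumulated potential back into a time interval. This is a back-of-envelope model under assumed parameter values and conventions, not the full spiking circuit:

```python
# Back-of-envelope model of the accumulation circuit: input i charges the
# accumulator at a rate proportional to its coefficient a_i for a duration
# x_i * T_cod, and the final constant-current ramp converts the stored
# potential back into a readout interval.  All parameter values are
# illustrative assumptions.

V_T, TAU_M = 1.0, 0.100   # threshold Vt and membrane time constant (assumed)
T_COD = 0.100             # coding range of the encoding function (assumed)

def accumulate(coeffs, xs):
    """Return the weighted sum recovered from the timing model."""
    g_unit = V_T * TAU_M / T_COD          # ge weight reaching Vt in T_cod
    # charge phase: each input i adds a_i * g_unit * (x_i * T_cod) / tau_m
    V = sum(a * g_unit * (x * T_COD) / TAU_M for a, x in zip(coeffs, xs))
    # ramp phase at rate g_unit/tau_m: the time to add the stored V is
    # V * tau_m / g_unit, which decodes back to the weighted sum
    return (V * TAU_M / g_unit) / T_COD

# accumulate([0.5, 0.25], [0.4, 0.8]) recovers 0.5*0.4 + 0.25*0.8 = 0.4
```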
- In an example of a device for processing data according to the invention, the accumulation circuit is part of a weighted addition circuit further comprising:
- a second accumulator node;
- a fourth constant current component adjustment connection having the third weight; and
- a fifth and sixth connection.
- In this weighted addition circuit, the synchronisation node of the accumulation circuit forms the emitter node of the fourth connection, the accumulator node of the accumulation circuit forms the emitter node of the fifth connection, and the second accumulator node forms the receiver node of the fourth connection and the emitter node of the sixth connection. In response to delivery of the third event by the synchronisation node, the accumulator node of the accumulation circuit increases its potential value until delivery of a fourth event on the fifth connection, and the second accumulator node increases its potential value until delivery of a fifth event on the sixth connection, the fourth and fifth events having between them a third time interval related to a time interval representing a weighted sum of the operands provided on the N inputs of the accumulation circuit.
- Another example of a device for processing data comprises at least one linear combination circuit including two accumulation circuits, which share their synchronisation node, and a subtractor circuit configured to respond to the third event delivered by the shared synchronisation node and to the fourth events respectively delivered by the accumulator nodes of the two accumulation circuits by delivering a pair of events having between them a third time interval representing the difference between the weighted sum for one of the two accumulation circuits and the weighted sum for the other of the two accumulation circuits.
- In some embodiments of the device, the set of processing nodes comprises at least one node, the current value of which has a component that decreases exponentially between two events received on at least one exponentially decreasing current component adjustment connection having a respective weight. The receiver node of an exponentially decreasing current component adjustment connection is arranged to react to an event received on this connection by adding the weight of the connection to the exponentially decreasing component of its current value.
- Another example of a device for processing data comprises at least one logarithm calculation circuit, which itself comprises:
- an accumulator node;
- first and second constant current component adjustment connection, the first connection having a positive weight, and the second connection having a weight opposite to the weight of the first connection;
- a third exponentially decreasing current component adjustment connection; and
- at least one fourth connection.
- In this logarithm calculation circuit, the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection. The first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval related to a time interval representing an input value of the logarithm calculation circuit. The third connection is configured to address, to the accumulator node, a third event simultaneous or posterior to the second event, whereby the accumulator node increases its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to a time interval representing a logarithm of the input value.
- The processing device can further comprise at least one deactivation connection, the receiver node of which is a node capable of cancelling out its exponentially decreasing component of current in response to an event received on the deactivation connection.
- Another example of a device for processing data comprises at least one exponentiation circuit, which itself comprises:
- an accumulator node;
- a first exponentially decreasing current component adjustment connection;
- a second deactivation connection;
- a third constant current component adjustment connection; and
- at least one fourth connection.
- In this exponentiation circuit, the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection. The first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval related to a time interval representing an input value of the exponentiation circuit. The third connection is configured to address, to the accumulator node, a third event simultaneous or posterior to the second event, whereby the accumulator node increases its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to a time interval representing an exponentiation of the input value.
- Another example of a device for processing data comprises at least one multiplier circuit, which itself comprises:
- first, second and third accumulator nodes;
- a synchronisation node;
- first, second, third, fourth and fifth constant current component adjustment connections, the first, third and fifth connections having a positive weight, and the second and fourth connections having a weight opposite to the weight of the first, third and fifth connections;
- sixth, seventh and eighth exponentially decreasing current component adjustment connections;
- a ninth deactivation connection; and
- at least one tenth connection.
- In this multiplier circuit, the first accumulator node forms the receiver node of the first, second and sixth connections and the emitter node of the seventh connection, the second accumulator node forms the receiver node of the third, fourth and seventh connections and the emitter node of the fifth and ninth connections, the third accumulator node forms the receiver node of the fifth, eighth and ninth connections and the emitter node of the tenth connection, and the synchronisation node forms the emitter node of the sixth and eighth connections. The first and second connections are configured to address, to the first accumulator node, respective first and second events having between them a first time interval related to a time interval representing a first operand of the multiplier circuit. The third and fourth connections are configured to address, to the second accumulator node, respective third and fourth events having between them a second time interval related to a time interval representing a second operand of the multiplier circuit. The synchronisation node is configured to deliver a fifth event on the sixth and eighth connections once the first, second, third and fourth events have been received. Thus, the first accumulator node increases its potential value until delivery of a sixth event on the seventh connection and then, in response to the sixth event, the second accumulator node increases its potential value until delivery of a seventh event on the fifth and ninth connections. In response to this seventh event, the third accumulator node increases its potential value until delivery of an eighth event on the tenth connection, the seventh and eighth events having between them a third time interval related to a time interval representing the product of the first and second operands.
- Sign detection logic can be associated with the multiplier circuit in order to detect the respective signs of the first and second operands and cause two events having between them a time interval representing the product of the first and second operands to be delivered on one or the other of two outputs of the multiplier circuit according to the signs detected.
- In a typical embodiment of the processing device, each connection is associated with a delay parameter, in order to signal the receiver node of this connection to carry out a change of state with a delay, with respect to the reception of an event on the connection, indicated by said parameter.
- The time interval Δt between two events representing a value having an absolute value x can have, in particular, the form Δt=Tmin+x·Tcod, where Tmin and Tcod are predefined time parameters. The values represented by time intervals have, for example, absolute values x between 0 and 1.
- A logarithmic scale rather than a linear one for Δt as a function of x can also be suitable for certain uses. Other scales can also be used.
- The processing device can have special arrangements in order to handle signed values. It can thus comprise, for an input value:
- a first input comprising one node or two nodes out of the set of processing nodes, the first input being arranged to receive two events having between them a time interval representing a positive value of the input value; and
- a second input comprising one node or two nodes out of the set of processing nodes, the second input being arranged to receive two events having between them a time interval representing a negative value of the input value.
- For an output value, the processing device can comprise:
- a first output comprising one node or two nodes out of the set of processing nodes, the first output being arranged to deliver two events having between them a time interval representing a positive value of said output value; and
- a second output comprising one node or two nodes out of the set of processing nodes, the second output being arranged to deliver two events having between them a time interval representing a negative value of said output value.
- In an embodiment of the processing device, the set of processing nodes is in the form of at least one programmable array, the nodes of the array having a shared behaviour model according to the events received. This device further comprises a programming logic in order to adjust weights and delay parameters of the connections between the nodes of the array according to a calculation program, and a control unit in order to provide input values to the array and recover output values calculated according to the program.
- Other features and advantages of the present invention will appear in the following description, in reference to the appended drawings, in which:
-
FIG. 1 is a diagram of a processing circuit producing the representation of a constant value on demand, according to an embodiment of the invention; -
FIG. 2 is a diagram of an inverter memory device according to an embodiment of the invention; -
FIG. 3 is a diagram showing the change in potential values over time and the production of events in an inverter memory device according toFIG. 2 ; -
FIG. 4 is a diagram of a memory device according to an embodiment of the invention; -
FIG. 5 is a diagram showing the change in potential values over time and the production of events in a memory device according toFIG. 4 ; -
FIG. 6 is a diagram of a signed memory device according to an embodiment of the invention; -
FIGS. 7(a) and 7(b) are diagrams showing the change in potential values over time and the production of events in a signed memory device according toFIG. 6 when it is presented with various input values; -
FIG. 8 is a diagram of a synchronisation device according to an embodiment of the invention; -
FIG. 9 is a diagram showing the change in potential values over time and the production of events in a synchronisation device according toFIG. 8 ; -
FIG. 10 is a diagram of a synchronisation device according to another embodiment of the invention; -
FIG. 11 is a diagram of a device for calculating a minimum according to an embodiment of the invention; -
FIG. 12 is a diagram showing the change in potential values over time and the production of events in a device for calculating a minimum according toFIG. 11 ; -
FIG. 13 is a diagram of a device for calculating a maximum according to an embodiment of the invention; -
FIG. 14 is a diagram showing the change in potential values over time and the production of events in a device for calculating a maximum according toFIG. 13 ; -
FIG. 15 is a diagram of a subtractor device according to an embodiment of the invention; -
FIG. 16 is a diagram showing the change in potential values over time and the production of events in a subtractor device according toFIG. 15 ; -
FIG. 17 is a diagram of an alternative of the subtractor device in which a difference equal to zero is taken into account; -
FIG. 18 is a diagram of an accumulation circuit according to an embodiment of the invention; -
FIG. 19 is a diagram of a weighted addition device according to an embodiment of the invention; -
FIG. 20 is a diagram of a linear combination calculation device according to an embodiment of the invention; -
FIG. 21 is a diagram of a logarithm calculation device according to an embodiment of the invention; -
FIG. 22 is a diagram showing the change in potential values over time and the production of events in a logarithm calculation device according toFIG. 21 ; -
FIG. 23 is a diagram of an exponentiation device according to an embodiment of the invention; -
FIG. 24 is a diagram showing the change in potential values over time and the production of events in an exponentiation device according toFIG. 23 ; -
FIG. 25 is a diagram of a multiplier device according to an embodiment of the invention; -
FIG. 26 is a diagram showing the change in potential values over time and the production of events in a multiplier device according toFIG. 25 ; -
FIG. 27 is a diagram of a signed multiplier device according to an embodiment of the invention; -
FIG. 28 is a diagram of an integrator device according to an embodiment of the invention; -
FIG. 29 is a diagram of a device suitable for solving a first-order differential equation in an example of an embodiment of the invention; -
FIGS. 30A and 30B are graphs showing results of simulation of the device ofFIG. 29 ; -
FIG. 31 is a diagram of a device suitable for solving a second-order differential equation in an example of an embodiment of the invention; -
FIGS. 32A and 32B are graphs showing results of simulation of the device ofFIG. 31 ; -
FIG. 33 is a diagram of a device suitable for solving a system of three-variable nonlinear differential equations in an example of an embodiment of the invention; -
FIG. 34 is a graph showing results of simulation of the device ofFIG. 33 ; -
FIG. 35 is a diagram of a programmable processing device according to an embodiment of the invention. - A data processing device as proposed here works by representing the processed values not as amplitudes of electric signals or as binary-encoded numbers processed by logic circuits, but as time intervals between events occurring within a set of processing nodes having connections between them.
- In the context of the present disclosure, an embodiment of the data processing device having an architecture similar to those of artificial neural networks is presented. Although the data processing device does not necessarily have an architecture strictly corresponding to what is conventionally called "neural networks", the following description uses the terms "node" and "neuron" interchangeably, just as it uses the term "synapse" to designate the connections between two nodes or neurons in the device.
- The synapses are oriented, i.e. each connection has an emitter node and a receiver node and transmits, to the receiver node, events generated by the emitter node. An event typically manifests itself as a spike in a voltage signal or current signal delivered by the emitter node and influencing the receiver node.
- As is usual in the context of artificial neural networks, each connection or synapse has a weight parameter w that measures the influence that the emitter node exerts on the receiver node during an event.
- A description of the behaviour of each node can be given by referring to a potential value V corresponding to the membrane potential V in the paradigm of artificial neural networks. The potential value V of a node varies over time according to the events that the node receives on its incoming connections. When this potential value V reaches or exceeds a threshold Vt, the node emits an event (“spike”) that is transmitted to the node(s) located downstream.
- In order to describe the behaviour of a node, or neuron, in an exemplary embodiment of the invention, reference can further be made to a current value g having a component ge and optionally a component gƒ.
- The component ge is a component that remains constant, or substantially constant, between two events that the node receives on a particular synapse that is called here constant current component adjustment connection.
- The component gƒ is an exponentially changing component, i.e. it varies exponentially between two events that the node receives on a particular synapse that is called here exponentially decreasing current component adjustment connection.
- A node that takes into account an exponentially decreasing current component gƒ can further receive events for activation and deactivation of the component gƒ on a particular synapse that is called here activation connection.
- In the example in question, the behaviour of a processing node can therefore be expressed in a generic manner by a set of differential equations:
-
τm·dV/dt = ge + gate·gƒ
dge/dt = 0 (1)
τƒ·dgƒ/dt = −gƒ
- where:
-
- t designates time;
- the component ge represents a constant input current that can only be changed by synaptic events;
- the component gƒ represents an exponentially changing input current;
- gate is a binary activation (gate=1) or deactivation (gate=0) signal of the exponentially decreasing current component gƒ;
- τm is a time constant regulating the linear variation in the potential value V as a function of the current value g=ge+gate·gƒ;
and τƒ is a time constant regulating the exponential decrease of the component gƒ.
- In the system (1), it is considered that there is no leak of the membrane potential V, or that the dynamics of this leak are on a much larger time scale than all the other dynamics operating in the device.
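Since the model has no leak, the state of a node between two synaptic events can be advanced in closed form rather than by numerical integration of (1). The following sketch is one way such an update could look; the time constants are illustrative assumptions, not values from the text:

```python
import math

# Closed-form state update for the neuron model of system (1) between two
# synaptic events: tau_m * dV/dt = ge + gate * gf, tau_f * dgf/dt = -gf,
# with ge constant over the interval.  Parameter values are illustrative
# assumptions.

TAU_M = 0.100  # membrane time constant, s (assumed)
TAU_F = 0.020  # decay time constant of gf, s (assumed)

def evolve(V, ge, gf, gate, dt):
    """Advance (V, gf) by dt seconds, assuming no events arrive in between."""
    if gate:
        # the integral of gf over dt is gf * tau_f * (1 - exp(-dt/tau_f))
        V += (ge * dt + gf * TAU_F * (1.0 - math.exp(-dt / TAU_F))) / TAU_M
    else:
        V += ge * dt / TAU_M
    gf *= math.exp(-dt / TAU_F)   # gf decays whether gated or not
    return V, gf
```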
- In this model, four types of synapses that influence the behaviour of a neuron can be distinguished, each synapse being associated with a weight parameter indicating a synaptic weight w, positive or negative:
-
- potential variation connections, or V-synapses, which directly modify the value of the membrane potential of the neuron: V←V+w. In other words, the receiver node responds to an event received on a V-synapse by adding, to its potential value V, the weight w indicated by the weight parameter;
- constant current component adjustment connections, or ge-synapses, which directly modify the constant input current of the neuron: ge←ge+w. In other words, the receiver node responds to an event received on a ge-synapse by adding, to the constant component of its current value, the weight w indicated by the weight parameter;
- exponentially decreasing current component adjustment connections, or gƒ-synapses, which directly modify the exponentially changing input current of the neuron: gƒ←gƒ+w. In other words, the receiver node reacts to an event received on a gƒ-synapse by adding, to the exponentially decreasing component of its current value, the weight w indicated by the weight parameter;
- and activation connections, or gate-synapses, which activate the neuron by setting gate←1 when they indicate a positive weight w=1 and deactivate the neuron by setting gate←0 when they indicate a negative weight w=−1.
- Each synaptic connection is further associated with a delay parameter that gives the delay in propagation between the emitter neuron and the receiver neuron.
- A neuron triggers an event when its potential value V reaches a threshold Vt, i.e.:
-
V ≥ Vt (2) - The triggering of the event gives rise to a spike delivered on each synapse of which the neuron forms the emitter node and to resetting its state variables to:
-
V ← Vreset (3) -
ge ← 0 (4) -
gƒ ← 0 (5) -
gate ← 0 (6) - Without loss of generality, the case where Vreset = 0 can be considered.
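The instantaneous reaction rules for the four synapse types, together with the threshold test (2) and the reset rules (3)-(6), can be summarised in a short sketch. The class and method names here are illustrative assumptions, and threshold crossings driven by the currents between events are deliberately left out:

```python
# Hedged sketch of a receiver node's reaction to the four synapse types
# (V, ge, gf, gate) and to threshold crossing, following V <- V + w,
# ge <- ge + w, gf <- gf + w, gate <- (w > 0), and the reset rules.
# Threshold and reset values are illustrative assumptions.

V_T = 1.0       # firing threshold Vt (assumed)
V_RESET = 0.0   # reset potential, taken as 0 as in the text

class Node:
    def __init__(self):
        self.V, self.ge, self.gf, self.gate = V_RESET, 0.0, 0.0, 0

    def receive(self, synapse_type: str, w: float) -> bool:
        """Apply one incoming event; return True if the node fires."""
        if synapse_type == "V":
            self.V += w
        elif synapse_type == "ge":
            self.ge += w
        elif synapse_type == "gf":
            self.gf += w
        elif synapse_type == "gate":
            self.gate = 1 if w > 0 else 0
        if self.V >= V_T:          # threshold condition
            self.V, self.ge, self.gf, self.gate = V_RESET, 0.0, 0.0, 0
            return True            # a spike is emitted downstream
        return False
```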
- Hereinafter, the notation Tsyn designates the delay in propagation along a standard synapse, and the notation Tneu designates the time that a neuron takes to transmit the event when producing its spike after having been triggered by an input synaptic event. Tneu can for example represent the time step of a neural simulator.
- A standard weight we is defined as the minimum excitation weight that must be applied to a V-synapse in order to trigger a neuron from the reset state, and another standard weight wi is defined as the inhibition weight having the contrary effect:
-
we = Vt (7) -
wi = −we (8) - The values processed by the device are represented by time intervals between events. The two events of a pair are separated by a time interval Δt that is a function of the value x encoded by this pair:
-
Δt=ƒ(x) (9) - where ƒ is an encoding function chosen for the representation of the data in the device.
- The two events of the pair encoding this value x can be delivered by the same neuron n or by two distinct neurons.
- In the case of the same neuron n, delivering events at successive times en(i), i=0, 1, 2, etc., it can be considered that this neuron n encodes a time-varying signal u(t), the discrete values of which are given by:
-
u(en(i+1)) = ƒ−1(en(i+1)−en(i)) (10)
- where ƒ−1 is the inverse of the chosen encoding function and i is an even number.
- The encoding function ƒ: ℝ→ℝ can be chosen while taking into account the signals processed in a particular system, and adapted to the required precision. The function ƒ calculates the interval between spikes associated with a particular value. In the rest of the present description, embodiments of the processing device using a linear encoding function are illustrated:
-
Δt = ƒ(x) = Tmin + x·Tcod (11)
- with x∈[0, 1].
- This representation of the function ƒ: [0, 1]→[Tmin, Tmax] allows any value x between 0 and 1 to be encoded linearly by a time interval between Tmin and Tmax=Tmin+Tcod. The value of Tmin can be zero. However, it is advantageous for it to be non-zero. Indeed, if two events representing a value come from the same neuron or are received by the same neuron, the minimum interval Tmin>0 gives this neuron time to reset. Moreover, a choice of Tmin>0 allows certain arrangements of neurons to respond to the first input event and propagate a change of state before receiving a second event.
- The form (11) for the encoding function ƒ is not the only one possible. Another suitable choice is to take a logarithmic function, which allows a wide range of values to be encoded with dynamics that are suitable for certain uses, in this case with less precision for large values.
- To represent signed values, two different paths, one for each sign, can be used. Positive values are thus encoded using a particular neuron, and negative values using another neuron. Arbitrarily, zero can be represented as a positive value or a negative value. Hereinafter, it is represented as a positive value.
- Thus, to continue the example of form (11), if a value x has a value in the range [−1, +1], it is represented by a time interval Δt=Tmin+|x|·Tcod between two events propagated on the path associated with the positive values if x≥0 and on the path associated with the negative values if x<0.
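A minimal sketch of this signed representation, with assumed values for Tmin and Tcod:

```python
# Sketch of the two-path representation of signed values: the interval
# encodes |x| and the choice of path encodes the sign, zero being routed
# to the positive path as in the text.  Parameter values are assumptions.

T_MIN, T_COD = 0.010, 0.100

def encode_signed(x: float):
    """Return (path, interval) with path '+' or '-' and |x| in [0, 1]."""
    if abs(x) > 1.0:
        raise ValueError("x must lie in [-1, +1]")
    path = "+" if x >= 0 else "-"
    return path, T_MIN + abs(x) * T_COD
```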
- The choice of (9) or (11) for the encoding function leads to the definition of two standard weights for the ge-synapses. The weight wacc is defined as being the value of ge necessary to trigger a neuron, from its reset state, after the time Tmax=Tmin+Tcod, i.e., considering (1):
-
wacc = τm·Vt/Tmax (12)
- Moreover, the weight
w̄acc is defined as being the value of ge necessary to trigger a neuron, from its reset state, after the time Tcod, or: -
w̄acc = τm·Vt/Tcod (13)
- For the ge-synapses, another standard weight gmult can be given as:
-
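Numerically, the standard weights follow from the no-leak dynamics: a constant current ge raises V at the rate ge/τm, so the ge value that reaches Vt after a time T is Vt·τm/T. A sketch with assumed parameter values (gmult, which involves the exponential dynamics, is omitted here):

```python
# Standard weights of the model computed from assumed parameters.  With
# no leak, V grows at the rate ge/tau_m, so the ge value reaching the
# threshold Vt after a time T is Vt * tau_m / T.  All numeric values are
# illustrative assumptions.

V_T   = 1.0                      # threshold Vt
TAU_M = 0.100                    # membrane time constant, s
T_MIN, T_COD = 0.010, 0.100      # encoding parameters
T_MAX = T_MIN + T_COD

w_e = V_T                        # minimum excitation weight on a V-synapse
w_i = -w_e                       # inhibition weight with the contrary effect
w_acc = V_T * TAU_M / T_MAX      # ge weight triggering after T_max
w_acc_bar = V_T * TAU_M / T_COD  # ge weight triggering after T_cod
```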
- The connections between nodes of the device can further be each associated with a respective delay parameter. This parameter indicates a delay with which the receiver node of the connection carries out a change of state, with respect to the emission of an event on the connection. The indication of delay values by these delay parameters associated with the synapses allows suitable sequencing of the operations in the processing device to be ensured.
- Various technologies can be used to implement the processing nodes and their interconnections in order for them to behave in the way described by the equations (1)-(6), namely the technologies routinely used in the well-known field of artificial neural networks. Each node can, for example, be created using analogue technology, with resistive and capacitive elements in order to preserve and vary a voltage level and transistor elements in order to deliver events when the voltage level exceeds the threshold Vt.
- Another possibility is to use digital technologies, for example based on field-programmable gate arrays (FPGAs), which provide a convenient means for implementing artificial neurons.
- Below, a certain number of devices or circuits for processing data that are made using interconnected processing nodes are presented. In
FIGS. 1, 2, 4, 6, 8, 10, 11, 13, 15, 17, 18, 19, 20, 21, 23, 25, 27, 28, 29, 31 and 33 : -
- the connections between nodes shown as solid lines are V-synapses;
- the connections shown as dashed lines are ge-synapses;
- the connections shown as chain dotted lines are gƒ-synapses;
- the connections shown as dotted lines are gate-synapses;
- the connections are oriented with a symbol on the side of their receiver nodes. This symbol is an open square for an excitation connection, i.e. a connection having a positive weight, and a closed square for an inhibiting connection, i.e. a connection having a negative weight;
- the pair of parameters (w; T) next to a connection indicates the weight w and the delay T associated with the connection. Sometimes, only the weight w is indicated.
- Some of the nodes or neurons shown in these drawings are named in such a way as to evoke the functions resulting from their arrangement in the circuit: ‘input’ for an input neuron, ‘input+’ for the input of a positive value, ‘input−’ for the input of a negative value, ‘output’ for an output neuron, ‘output+’ for the output of a positive value, ‘output−’ for the output of a negative value, ‘recall’ for a neuron used to recover a value, ‘acc’ for an accumulator neuron, ‘ready’ for a neuron indicating the availability of a result or of a value, etc.
-
FIG. 1 shows a very simple circuit 10 that can be used to produce the representation of a constant value x on demand. The two V-synapses 11, 12, having weights greater than or equal to we (in the example shown, the weights are taken equal to we), each have a recall neuron 15 as an emitter node and an output neuron 16 as a receiver node. The synapse 11 is configured with a delay parameter Tsyn, while the synapse 12 is configured with a delay parameter Tsyn+ƒ(x). - The activation of the
recall neuron 15 triggers the output neuron 16 at times Tsyn and Tsyn+ƒ(x), and thus the circuit 10 delivers two events separated in time by the value ƒ(x) representing the constant x. - A.1. Inverting Memory
-
FIG. 2 shows a processing circuit 18 forming an inverting memory.
- This device 18 stores an analogue value x encoded by a pair of input spikes provided at an input neuron 21 with an interval Δtin=ƒ(x), using an integration of current over the dynamic range ge in an acc neuron 30. The value x is stored in the membrane potential of the acc neuron 30 and read during the activation of a recall neuron 31, which leads to delivering a pair of events separated by a time interval Δtout corresponding to the value 1−x at the output neuron 33, i.e. Δtout=ƒ(1−x).
- The input neuron 21 belongs to a group of nodes 20 used to produce two events separated by ƒ(x)−Tmin=x·Tcod on ge-synapses 26, 27 directed towards the acc neuron 30. This group comprises a ‘first’ neuron 23 and a ‘last’ neuron 25. Two excitation V-synapses 22, 24 having a delay Tsyn go from the input neuron 21 to the first neuron 23 and to the last neuron 25, respectively. The V-synapse 22 has a weight we, while the V-synapse 24 has a weight equal to we/2. The first neuron 23 inhibits itself via a V-synapse 28 having a weight wi and a delay Tsyn.
- The excitation ge-synapse 26 goes from the first neuron 23 to the acc neuron 30, and has the weight wacc and a delay of Tsyn+Tmin. The inhibiting ge-synapse 27 goes from the last neuron 25 to the acc neuron 30, and has the weight −wacc and a delay Tsyn. An excitation V-synapse 32 goes from the recall neuron 31 to the output neuron 33, and has the weight we and a delay of 2Tsyn+Tneu. An excitation ge-synapse 34 goes from the recall neuron 31 to the acc neuron 30, and has the weight wacc and a delay Tsyn. Finally, an excitation V-synapse 35 goes from the acc neuron 30 to the output neuron 33, and has the weight we and a delay Tsyn.
- The operation of the inverting-memory device 18 is illustrated by FIG. 3.
- Emission of a first event (spike) at time tin 1 at the input neuron 21 triggers an event at the output of the first neuron 23 after the time Tsyn+Tneu, i.e. at time tfirst 1 in FIG. 3, and raises the potential value of the last neuron 25 to Vt/2. The first neuron 23 then inhibits itself via the synapse 28 by giving the value −Vt to its membrane potential, and it starts the accumulation by the acc neuron 30 after Tsyn+Tmin, i.e. at time tst 1, via the ge-synapse 26.
- The emission of the second spike at time tin 2=tin 1+Tmin+x·Tcod at the input neuron 21 brings the last neuron 25 to the threshold potential Vt. An event is then produced at time tlast 1=tin 2+Tsyn+Tneu on the inhibiting ge-synapse 27. The second spike also triggers the resetting of the potential of the first neuron 23 to zero via the synapse 22. The event transported by the ge-synapse 27 in response to the second spike stops the accumulation carried out by the acc neuron 30 at time tend 1=tst 1+x·Tcod.
- At this stage, the potential value
V=x·Tcod·Vt/Tmax
- is stored in the acc neuron 30 in order to memorise the value x. Its complement 1−x can then be read by activating the recall neuron 31, which takes place at time trecall 1 in FIG. 3. This activation restarts the process of accumulation in the acc neuron 30 at time tst 2=trecall 1+Tsyn and triggers an event at time tout 1=trecall 1+2Tsyn+2Tneu on the output neuron 33. The accumulation continues in the acc neuron 30 until the time tend 2 at which its potential value reaches the threshold Vt, i.e. tend 2=tst 2+Tmax−x·Tcod. An event is emitted on the V-synapse 35 at time tend 2+Tneu and triggers another event on the output neuron 33 at time tout 2=tend 2+Tsyn+2Tneu=trecall 1+2Tsyn+2Tneu+Tmin+(1−x)·Tcod.
- Finally, the two events delivered by the output neuron 33 are separated by a time interval ΔTout=tout 2−tout 1=Tmin+(1−x)·Tcod=ƒ(1−x).
- It is noted that the value x is stored in the acc neuron 30 upon reception of the two input spikes and immediately available to be read by activating the recall neuron 31.
- Since the standard weight we was defined as the minimum excitation weight that must be applied to a V-synapse in order to trigger a neuron from the reset state, it is noted that the processing circuit 18 of FIG. 2 functions similarly if certain weights are chosen in the following manner: the V-synapse 22 has a weight w greater than or equal to we, the V-synapse 24 has a weight at least equal to we/2 and less than Vt, the first neuron 23 inhibits itself via the V-synapse 28 having a weight −w, the excitation V-synapse 32 has a weight greater than or equal to we and the excitation V-synapse 35 has a weight greater than or equal to we. This observation extends to the following processing circuits.
- A.2. Memory
-
FIG. 4 shows a processing circuit 40 forming a memory.
- This device 40 memorizes an analogue value x encoded by a pair of input spikes provided at an input neuron 21 with an interval Δtin=ƒ(x), using an integration of current on the dynamic range ge in two cascaded acc neurons 42, 44 in order to form a non-inverting output with a pair of events separated by a time interval Δtout=ƒ(x).
- The memory circuit 40 has an input neuron 21 in order to receive the value to be stored, a read-command input formed by a recall neuron 48, a ready neuron 47 indicating the time from which a reading command can be presented to the recall neuron 48, and an output neuron 50 in order to return the stored value. All the synapses of this memory circuit have the delay Tsyn.
- The input neuron 21 belongs to a group of nodes 20 similar to that described in reference to FIG. 2, with a first neuron 23 and a last neuron 25 in order to separate the two events produced with an interval of ƒ(x)=Tmin+x·Tcod by the input neuron 21.
- A ge-synapse 41 goes from the first neuron 23 to the first acc neuron 42, and has the weight wacc. The acc neuron 42 thus starts accumulation at time tst 1=tin 1+2·Tsyn+Tneu (FIG. 5). A ge-synapse 43 goes from the last neuron 25 to the second acc neuron 44, and has the weight wacc. The acc neuron 44 thus starts accumulation at time tst2 1=tin 2+2·Tsyn+Tneu. At the output of the acc neuron 42, another ge-synapse 45 having the weight wacc goes to the acc neuron 44, and a V-synapse 46 having the weight we goes to the ready neuron 47.
- The accumulation in the acc neuron 42 continues until the time tend 1=tst 1+Tmax at which the potential of the acc neuron 42 reaches the threshold Vt, which triggers the emission of a spike at time tacc 1=tend 1+Tneu on the ge-synapse 45 (FIG. 5). This spike stops the accumulation in the acc neuron 44 at time tend2 1=tacc 1+Tsyn=tin 1+3·Tsyn+2·Tneu+Tmax. The triggering of the acc neuron 42 also triggers an event on the ready neuron 47 at time tready 1=tacc 1+Tsyn+Tneu.
- At this stage, the potential value stored in the acc neuron 44 is
V=Vt·(Tmax+Tsyn+Tneu−ƒ(x))/Tmax
- which allows the value x to be memorized. The reading can then take place by activating the recall neuron 48, which takes place at time trecall 1 in FIG. 5.
- The activation of the recall neuron 48 triggers an event at time tout 1=trecall 1+Tsyn+Tneu on the output neuron 50 via the V-synapse 49, and restarts the process of accumulation in the acc neuron 44 via the ge-synapse 51 at time tst2 2=trecall 1+Tsyn. The accumulation continues in the acc neuron 44 until the time tend2 2 at which its potential value reaches the threshold Vt, i.e. tend2 2=tst2 2+ƒ(x)−Tsyn−Tneu. An event is emitted on the V-synapse 52 at time tacc2 1=tend2 2+Tneu and triggers another event on the output neuron 50 at time tout 2=tacc2 1+Tsyn+Tneu=trecall 1+Tsyn+Tneu+ƒ(x).
- Finally, the two events delivered by the output neuron 50 are separated by a time interval ΔTout=tout 2−tout 1=ƒ(x).
- It is noted that the acc neuron 42 in FIG. 4 could be eliminated by configuring delays of Tsyn+Tmax on certain synapses. This could be of interest for reducing the number of neurons, but can pose a problem in an implementation using application-specific integrated circuits (ASICs) because of the extension of the delays between neighbouring neurons.
- It is also noted that the memory circuit 40 functions for any encoding of the value x by a time interval between Tmin and Tmax, without being limited to the form (11) above.
- A.3. Signed Memory
-
FIG. 6 shows a processing circuit 60 forming a memory for a signed value, between −1 and +1. Its absolute value is encoded by an interval Δtin=ƒ(|x|) between two events that, if x≥0, are provided by the input+ neuron 61 and then returned by the output+ neuron 81 and, if x<0, are provided by the input− neuron 62 and then returned by the output− neuron 82. All the synapses of this memory circuit have the delay Tsyn.
- The signed-memory circuit 60 is based on a memory circuit 40 of the type shown in FIGS. 4A-B. The input+ and input− neurons 61, 62 are connected, respectively, to the input neuron 21 of the circuit 40 by excitation V-synapses 63, 64 having the weight we. Thus, the one out of the neurons 61, 62 that receives the two spikes representing |x| activates the input neuron 21 of the circuit 40 twice, such that the time interval ƒ(|x|) is returned on the output neuron 50 of the circuit 40.
- Moreover, the neurons 61, 62 are connected, respectively, to ready+ and ready− neurons 65, 66 by excitation V-synapses 67, 68 having a weight of we/4. The signed memory circuit has a recall neuron 70 connected to the ready+ and ready− neurons 65, 66 by respective excitation V-synapses 71, 72 having the weight we/2. Each of the ready+ and ready− neurons 65, 66 is connected to the recall neuron 48 of the circuit 40 by respective excitation V-synapses 73, 74 having the weight we. An inhibiting V-synapse 75 having a weight of wi/2 goes from the ready+ neuron 65 to the ready− neuron 66, and reciprocally, an inhibiting V-synapse 76 having a weight of wi/2 goes from the ready− neuron 66 to the ready+ neuron 65. The ready+ neuron 65 is connected to the output− neuron 82 of the signed memory circuit by an inhibiting V-synapse 77 having a weight of 2wi. The ready− neuron 66 is connected to the output+ neuron 81 of the signed memory circuit by an inhibiting V-synapse 78 having a weight of 2wi.
- The output neuron 50 of the circuit 40 is connected to the output+ and output− neurons 81, 82 by respective excitation V-synapses 79, 80 having the weight we.
- The output of the signed memory circuit 60 comprises a ready neuron 84 that is the receiver node of an excitation V-synapse 85 having the weight we coming from the ready neuron 47 of the memory circuit 40.
-
FIG. 7 shows the behaviour of the neurons of the signed-memory circuit 60 (a) in the case of a positive input and (b) in the case of a negative input. The appearance of the two events at times tin 1 and tin 2=tin 1+ƒ(|x|) on one of the neurons 61, 62 raises the potential of the ready+ or ready− neuron 65, 66 to the value Vt/2 in two steps. In parallel, the acc neuron 44 of the memory circuit 40 is charged to the value
V=Vt·(Tmax+Tsyn+Tneu−ƒ(|x|))/Tmax
- and its ready neuron 47 produces an event at time tready 1, as described above.
- Once the ready neuron 47 has produced its event, the recall neuron 70 can be activated in order to read the signed piece of data, which takes place at time trecall 1 in FIG. 7.
- Activation of the recall neuron 70 triggers the ready+ or ready− neuron 65, 66 via the V-synapse 71 or 72, and this triggering resets the other ready− or ready+ neuron 66, 65 to zero via the V-synapse 75 or 76. The event delivered by the ready+ or ready− neuron 65, 66 inhibits the output− or output+ neuron 82, 81 via the V-synapse 77 or 78 by bringing its potential to −2Vt.
- The event delivered by the ready+ or ready− neuron 65, 66 at time tsign 1 is provided to the recall neuron 48 via the V-synapse 73 or 74. This triggers the emission of a pair of spikes separated by a time interval equal to ƒ(|x|) by the output neuron 50 of the circuit 40. This pair of spikes, communicated to the output+ and output− neurons 81, 82 via the V-synapses 79, 80, twice triggers, at times tout 1 and tout 2=tout 1+ƒ(|x|), the one of the output+ and output− neurons 81, 82 that corresponds to the sign of the input piece of data x, and resets the potential value of the other neuron 81, 82 to zero.
memory circuit 60 shown inFIG. 6 is not optimised in terms of number of neurons, because the following is possible: -
- eliminating the
input neuron 21 of thememory circuit 40, by sending the V- 63 and 64 directly to thesynapses first neuron 23 of thecircuit 40 shown inFIG. 4 (instead of the V-synapse 22), and by adding excitation V-synapses having a weight of we/2 from the input+ and input− 61, 62 to the last neuron 25 (instead of the V-synapse 24);neurons - eliminating the
output neuron 50 of thememory circuit 40, by sending the ge-synapse 52 directly to the output+ and output−neurons 81, 82 (instead of the V-synapses 79, 80); and - eliminating the
recall neuron 48 of thememory circuit 40, by sending the V- 73 and 74 directly to the output+ and output−synapses neurons 81, 82 (instead of the V-synapse 49), and by adding excitation ge-synapses having the weight wacc from the ready+ and ready− 65, 66 to theneurons acc neuron 44 of the circuit 40 (instead of the ge-synapse 51).
- A.4. Synchroniser
-
FIG. 8 shows a processing circuit 90 used to synchronise signals received on a number N of inputs (N≥2). All the synapses of this synchronisation circuit have the delay Tsyn.
- Each signal encodes a value xk for k=0, 1, . . . , N−1 and is in the form of a pair of spikes occurring at times tink 1 and tink 2=tink 1+Δtk with Δtk=ƒ(xk)∈[Tmin, Tmax]. These signals are returned at the output of the circuit 90 in a synchronised manner, i.e. each signal encoding a value xk is found at the output in the form of a pair of spikes occurring at times toutk 1 and toutk 2=toutk 1+Δtk with tout0 1=tout1 1= . . . =toutN-1 1, as shown in FIG. 9 for a case where N=2.
- The circuit 90 shown in FIG. 8 comprises N input neurons 91 0, . . . , 91 N−1 and N output neurons 92 0, . . . , 92 N−1. Each input neuron 91 k is the emitter node of a V-synapse 93 k having the weight we, the receiver node of which is the input neuron 21 k of a respective memory circuit 40 k. The output neuron 50 k of each memory circuit 40 k is the emitter node of a V-synapse 94 k having the weight we, the receiver node of which is the output neuron 92 k of the synchronisation circuit 90.
- The synchronisation circuit 90 comprises a sync neuron 95 that is the receiver node of N excitation V-synapses 96 0, . . . , 96 N−1 having a weight of we/N, the emitter nodes of which are, respectively, the ready neurons 47 0, . . . , 47 N−1 of the memory circuits 40 0, . . . , 40 N−1. The circuit 90 also comprises excitation V-synapses 97 0, . . . , 97 N−1 having the weight we, the sync neuron 95 as an emitter node, and, respectively, the recall neurons 48 0, . . . , 48 N−1 of the memory circuits 40 0, . . . , 40 N−1 as receiver nodes.
- The sync neuron 95 receives the events produced by the ready neurons 47 0, . . . , 47 N−1 as the N input signals are loaded into the memory circuits 40 0, . . . , 40 N−1, i.e. at times trdy0 1, trdy1 1 in FIG. 9. When the last of these N events has been received, the sync neuron 95 delivers an event Tsyn later, i.e. at time tsync 1 in FIG. 9. This triggers, via the synapses 97 0, . . . , 97 N−1 and the synapses 49 of the memory circuits 40 0, . . . , 40 N−1, the emission of a first synchronised spike (tout0 1= . . . =toutN-1 1) on each output neuron 92 0, . . . , 92 N−1. Then, each memory circuit 40 k produces its second respective spike at time toutk 2.
- The presentation of the synchronisation circuit in reference to FIG. 8 is given to facilitate the explanation, but it should be noted that a plurality of simplifications are possible by eliminating certain neurons. For example, the input neurons 91 0, . . . , 91 N−1 and the output neurons 92 0, . . . , 92 N−1 are optional, since the inputs can be provided directly by the input neurons 21 0, . . . , 21 N−1 of the memory circuits 40 0, . . . , 40 N−1 and the outputs directly by the output neurons 50 0, . . . , 50 N−1 of the memory circuits 40 0, . . . , 40 N−1. The V-synapses 46 of the memory circuits 40 0, . . . , 40 N−1 can go directly to the sync neuron 95, without passing through a ready neuron 47 0, . . . , 47 N−1. The synapses 97 0, . . . , 97 N−1 can be directly fed to the output neurons 50 0, . . . , 50 N−1 of the memory circuits (thus replacing their synapses 49), and the sync neuron 95 can also form the emitter node of the ge-synapses 51 of the memory circuits 40 0, . . . , 40 N−1 in order to control the restart of accumulation in the acc neurons 44 (FIGS. 4 and 5).
- It is also possible to put out only a single event, at time tout 1=tout0 1=tout1 1= . . . =toutN-1 1, as the first event of all the pairs forming the synchronised output signals. The sync neuron 95 thus directly controls the emission of the first spike on a particular output of the circuit (which can be one of the output neurons 92 0, . . . , 92 N−1 or a specific neuron), and then the second spike of each pair by reactivating the acc neurons 44 of the memory circuits 40 0, . . . , 40 N−1 via a ge-synapse. In other words, the sync neuron 95 acts as the recall neurons 48 of the various memory circuits.
- Such a synchroniser circuit 98 is illustrated for the case where N=2 by FIG. 10, where, once again, all the synapses have the delay Tsyn. The sync neuron 95 is excited by two V-synapses 46 having a weight of we/2 coming directly from the acc neurons 42 of the two memory circuits, and it is the emitter node of the ge-synapses 51 in order to restart the accumulation in the acc neurons 44. In this example, a specific neuron 99, noted as ‘output ref’, delivers the first event of each of the two output pairs at time tout 1=tsync 1+Tsyn, in response to an excitation received from the sync neuron 95 via the V-synapse 97. The role of this output ref neuron 99 could, alternatively, be played by one of the two output neurons 92 0, 92 1.
- It should be noted that in the example of FIG. 10, the two events encoding the value of an output value of the circuit 98 are produced by two different neurons (for example the neurons 99 and 92 1 for the value x1).
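The net effect of the synchroniser, whichever of the above variants is used, can be sketched as pure bookkeeping on event times (fixed synaptic and neuron delays are dropped, since only relative timing matters; the function name is an assumption):

```python
def synchronise(pairs):
    """pairs: list of (t1, t2) event-time pairs, one per input signal."""
    # a value is loaded into its memory once its second spike has arrived,
    # so the sync neuron can only fire after the latest second spike
    t_sync = max(t2 for _, t2 in pairs)
    # recalling every memory at that instant reproduces each interval,
    # with all first output events aligned at t_sync
    return [(t_sync, t_sync + (t2 - t1)) for t1, t2 in pairs]
```

For two inputs (0, 30) and (5, 60), the output pairs are (60, 90) and (60, 115): the first events coincide and the intervals 30 and 55 are preserved.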
- B.1. Minimum
-
FIG. 11 shows a processing circuit 100 that calculates the minimum between two values received in a synchronised manner on two input nodes 101, 102 and delivers this minimum on an output node 103.
- Besides the input neurons 101, 102 and the output neuron 103, this circuit 100 comprises two ‘smaller’ neurons 104, 105. An excitation V-synapse 106, having a weight of we/2, goes from the input neuron 101 to the smaller neuron 104. An excitation V-synapse 107, having a weight of we/2, goes from the input neuron 102 to the smaller neuron 105. An excitation V-synapse 108, having a weight of we/2, goes from the input neuron 101 to the output neuron 103. An excitation V-synapse 109, having a weight of we/2, goes from the input neuron 102 to the output neuron 103. An excitation V-synapse 110, having a weight of we/2, goes from the smaller neuron 104 to the output neuron 103. An excitation V-synapse 111, having a weight of we/2, goes from the smaller neuron 105 to the output neuron 103. An inhibiting V-synapse 112, having a weight of wi/2, goes from the smaller neuron 104 to the smaller neuron 105. An inhibiting V-synapse 113, having a weight of wi/2, goes from the smaller neuron 105 to the smaller neuron 104. An inhibiting V-synapse 114, having the weight wi, goes from the smaller neuron 104 to the input neuron 102. An inhibiting V-synapse 115, having the weight wi, goes from the smaller neuron 105 to the input neuron 101. All the synapses 106-115 shown in FIG. 11 are associated with a delay Tsyn, except the synapses 108, 109 for which the delay is 2·Tsyn+Tneu.
- The emission of the first spike on each input neuron 101, 102 at time tin1 1=tin2 1 (FIG. 12) sets each of the smaller neurons 104, 105 to a potential value Vt/2 at time tin1 1+Tsyn, and triggers a first event on the output neuron 103 at time tout 1=tin1 1+2·Tsyn+2·Tneu. The emission of the second spike on the input neuron having the smallest value, namely the neuron 101 at time tin1 2=tin1 1+Δt1 in the example of FIG. 12, sets one of the smaller neurons to the threshold voltage Vt, namely the neuron 104 in this example, which leads to an event at time tsmaller1 1=tin1 2+Tsyn+Tneu at the output of this neuron 104. Thus, the synapse 114 inhibits the other input neuron 102, which does not produce its second spike at time tin2 2=tin2 1+Δt2, and the synapse 112 inhibits the other smaller neuron 105, the potential of which is reset to zero. The triggering of the smaller neuron 104 further causes the second triggering of the output neuron 103 at time tout 2=tsmaller1 1+Tsyn+Tneu=tin1 2+2·Tsyn+2·Tneu.
- Finally, the output neuron 103 does reproduce, between the events that it delivers, the minimum time interval tout 2−tout 1=tin1 2−tin1 1=Δt1 between the events of the two pairs produced by the input neurons 101, 102. This minimum is available at the output of the circuit 100 upon reception of the second event of the pair that represents it at the input.
- The circuit 100 for calculating a minimum of FIG. 11 functions when the function ƒ such that Δt=ƒ(x) is an increasing function.
- B.2. Maximum
-
FIG. 13 shows a processing circuit 120 that calculates the maximum between two values received in a synchronised manner on two input nodes 121, 122 and delivers this maximum on an output node 123.
- Besides the input neurons 121, 122 and the output neuron 123, this circuit 120 comprises two ‘larger’ neurons 124, 125. An excitation V-synapse 126, having a weight of we/2, goes from the input neuron 121 to the larger neuron 124. An excitation V-synapse 127, having a weight of we/2, goes from the input neuron 122 to the larger neuron 125. An excitation V-synapse 128, having a weight of we/2, goes from the input neuron 121 to the output neuron 123. An excitation V-synapse 129, having a weight of we/2, goes from the input neuron 122 to the output neuron 123. An inhibiting V-synapse 132, having the weight wi, goes from the larger neuron 124 to the larger neuron 125. An inhibiting V-synapse 133, having the weight wi, goes from the larger neuron 125 to the larger neuron 124. All the synapses shown in FIG. 13 are associated with the delay Tsyn.
- The first spikes emitted in a synchronised manner (tin1 1=tin2 1) by the input neurons 121, 122 set the larger neurons 124, 125 to a potential value Vt/2 at time tin1 1+Tsyn, and trigger a first event on the output neuron 123 at time tout 1=tin1 1+Tsyn+Tneu (FIG. 14). The emission of the second spike on the input neuron having the smallest value, namely the neuron 121 at time tin1 2=tin1 1+Δt1 in the example of FIG. 14, sets one of the larger neurons to the threshold voltage Vt, namely the neuron 124 in this example, which triggers an event at time tlarger1 1=tin1 2+Tsyn+Tneu at the output of this neuron 124. Thus, the synapse 132 inhibits the other larger neuron 125, the potential of which is set to the value −Vt/2. When the second spike is emitted by the other input neuron 122 at time tin2 2=tin2 1+Δt2 (with Δt2>Δt1), the potential of the larger neuron 125 is reset to zero via the synapse 127, and the output neuron 123 is triggered via the synapse 129 at time tout 2=tin2 2+Tsyn+Tneu.
- Finally, the output neuron 123 does reproduce, between the events that it delivers, the maximum time interval tout 2−tout 1=tin2 2−tin2 1=Δt2 between the events of the two pairs produced by the input neurons 121, 122. This maximum is available at the output of the circuit 120 upon reception of the second event of the pair that represents it at the input.
- The circuit 120 for calculating a maximum of FIG. 13 functions when the function ƒ such that Δt=ƒ(x) is an increasing function.
- C.1. Subtraction
-
FIG. 15 shows a subtraction circuit 140 that calculates the difference between two values x1, x2 received in a synchronised manner on two input nodes 141, 142 and delivers the result x1−x2 on an output node 143 if it is positive and on another output node 144 if it is negative. It is assumed here that the function ƒ such that Δt1=ƒ(x1) and Δt2=ƒ(x2) is a linear function, as is the case for the form (11).
- Besides the input neurons 141, 142 and the output+ and output− neurons 143, 144, the subtraction circuit 140 comprises two sync neurons 145, 146 and two ‘inb’ neurons 147, 148. An excitation V-synapse 150, having a weight of we/2, goes from the input neuron 141 to the sync neuron 145. An excitation V-synapse 151, having a weight of we/2, goes from the input neuron 142 to the sync neuron 146. Three excitation V-synapses 152, 153, 154, each having the weight we, go from the sync neuron 145 to the output+ neuron 143, to the output− neuron 144 and to the inb neuron 147, respectively. Three excitation V-synapses 155, 156, 157, each having the weight we, go from the sync neuron 146 to the output− neuron 144, to the output+ neuron 143 and to the inb neuron 148, respectively. An inhibiting V-synapse 158, having the weight wi, goes from the sync neuron 145 to the inb neuron 148. An inhibiting V-synapse 159, having the weight wi, goes from the sync neuron 146 to the inb neuron 147. An excitation V-synapse 160, having a weight of we/2, goes from the output+ neuron 143 to the inb neuron 148. An excitation V-synapse 161, having a weight of we/2, goes from the output− neuron 144 to the inb neuron 147. An inhibiting V-synapse 162, having a weight of 2wi, goes from the inb neuron 147 to the output+ neuron 143. An inhibiting V-synapse 163, having a weight of 2wi, goes from the inb neuron 148 to the output− neuron 144. The synapses 150, 151, 154 and 157-163 are associated with a delay of Tsyn. The synapses 152 and 155 are associated with a delay of Tmin+3·Tsyn+2·Tneu. The synapses 153 and 156 are associated with a delay of 3·Tsyn+2·Tneu.
- The operation of the subtraction circuit 140 according to FIG. 15 is illustrated by FIG. 16 for the case in which the result x1−x2 is positive. Everything happens symmetrically if the result is negative.
- The first spikes emitted in a synchronised manner (tin1 1=tin2 1) by the input neurons 141, 142 set the sync neurons 145, 146 to the potential value Vt/2 at time tin1 1+Tsyn. The emission of the second spike on the input neuron providing the smallest value, namely the neuron 142 at time tin2 2=tin2 1+Δt2 in the example of FIG. 16 where Δt2<Δt1, sets one of the sync neurons to the threshold voltage Vt, namely the neuron 146 in this example, which triggers an event at time tsync2 1=tin2 2+Tsyn+Tneu at the output of this neuron 146. Thus:
-
- the synapse 159 inhibits the inb neuron 147, the potential of which is set to the value −Vt at time tsync2 1+Tsyn=tin2 2+2·Tsyn+Tneu;
- the synapse 157 excites the inb neuron 148 that delivers an event at time tinb2 1=tsync2 1+Tsyn+Tneu=tin2 2+2·Tsyn+2·Tneu, which event in turn inhibits, via the synapse 163, the output− neuron 144, the potential of which is set to the value −2Vt at time tin2 2+3·Tsyn+2·Tneu;
- the synapse 155 then re-excites the output− neuron 144, the potential of which is set to the value −Vt at time tin2 2+Tmin+4·Tsyn+3·Tneu;
- the synapse 156 excites the output+ neuron 143 that delivers an event at time tout 1=tsync2 1+3·Tsyn+3·Tneu=tin2 2+4·Tsyn+4·Tneu, which event in turn excites the inb neuron 148, the potential of which, reset to zero after the previous event emitted at time tinb2 1, is set to the value Vt/2 at time tout 1+Tsyn+Tneu=tin2 2+5·Tsyn+5·Tneu.
-
- Then, emission of a second spike on the other input neuron 141 at time tin1 2=tin1 1+Δt1 sets the other sync neuron 145 to the threshold voltage Vt, which triggers an event at time tsync1 1=tin1 2+Tsyn+Tneu at the output of this neuron 145. Thus:
-
- the synapse 158 inhibits the inb neuron 148, the potential of which is set to the value −Vt/2 at time tsync1 1+Tsyn=tin1 2+2·Tsyn+Tneu;
- the synapse 154 excites the inb neuron 147 that resets its membrane potential to zero;
- the synapse 152 excites the output+ neuron 143 that delivers an event at time tout 2=tsync1 1+Tmin+3·Tsyn+3·Tneu=tin1 2+Tmin+4·Tsyn+4·Tneu, which event in turn excites the inb neuron 148, the potential of which is reset to zero at time tout 2+Tsyn+Tneu=tin1 2+Tmin+5·Tsyn+5·Tneu;
- the synapse 153 excites the output− neuron 144, the potential of which is reset to zero at time tsync1 1+3·Tsyn+2·Tneu=tin1 2+4·Tsyn+3·Tneu.
-
- The two excitation events received by the output− neuron 144, at times tin2 2+Tmin+4·Tsyn+3·Tneu and tin1 2+4·Tsyn+3·Tneu, arrive after the inhibiting event received at time tin2 2+3·Tsyn+2·Tneu. As a result, this neuron 144 does not emit any event when Δt2<Δt1, and thus the sign of the result is suitably signalled.
- Finally, the output+ neuron 143 delivers two events having between them a time interval Δtout, with:
Δtout=tout 2−tout 1=Δt1−Δt2+Tmin=Tmin+(x1−x2)·Tcod=ƒ(x1−x2)
subtractor circuit 140, two events are properly obtained having between them the time interval Δtout=ƒ(x1-x2). This result is available at the output of the circuit upon reception of the second event of the input pair having the greatest absolute value. - When two equal values are presented to it at the input, the
subtractor circuit 140 shown inFIG. 15 activates the two parallel paths and the result is delivered on both theoutput+ neuron 143 and the output−neuron 144, the 147, 148 not having the time to select a winning path. In order to avoid this, it is possible to add, to the subtractor circuit, a zeroinb neurons neuron 171 and fast V-synapses 172-178 in order to form asubtractor circuit 170 according toFIG. 17 . - In
FIG. 17 , the reference numerals of the neurons and synapses arranged in the same way as inFIG. 15 are not repeated. The zeroneuron 171 is the receiver node of two excitation V- 172, 173 having a weight of we/2 and the delay Tneu, one coming from thesynapses sync neuron 145 and the other from thesync neuron 146. It is also the receiver node of two inhibiting V- 174, 175 having a weight of we/2 and a delay of 2·Tneu, one coming from thesynapses sync neuron 145 and the other from thesync neuron 146. The zeroneuron 171 excites itself via a V-synapse 176 having the weight we and the delay Tneu. It is also the emitter node of two inhibiting V-synapses having the delay Tneu, one 177 having the weight wi directed towards theinb neuron 148 and the other 178 having a weight of 2wi directed towards the output−neuron 144. - The zero
neuron 171 acts as a detector of coincidence between the events delivered by the 145, 146. Given that these two neurons only deliver events at the time of the second encoding spike of their associated input, detecting this temporal coincidence is equivalent to detecting the equality of the two input values, if the latter are correctly synchronised. The zerosync neurons neuron 171 only produces an event if it receives two events separated by a time interval less than Tneu from the 145, 146. In this case, it directly inhibits the output−sync neurons neuron 144 via thesynapse 178, and deactivates theinb neuron 148 via thesynapse 177. - Consequently, two equal input values provided to the subtractor circuit of
FIG. 17 lead to two events separated by a time interval equal to Tmin, i.e. encoding a difference of zero, at the output of theoutput+ neuron 143, and to no event on the output−neuron 144. If the input values are not equal, the zeroneuron 171 is not activated and the subtractor functions in the same manner as that ofFIG. 15 . - C.2. Accumulation
-
FIG. 18 shows acircuit 180 for accumulating positive input values with weighting. Its goal is to load, into aacc neuron 184, a potential value related to a weighted sum: -
S=Σ k=0 N-1αk ·x k (16) - where α0, α1, . . . , αN−1 are positive or zero weighting coefficients and the input values x0, x1, . . . , xN−1 are positive or zero.
- For each input value xk (0≤k<N), the
circuit 180 comprises an input neuron 181 k that is part of a respective group 20 of neurons arranged in the same way as in the group 20 described above in reference to FIG. 2.
- The outgoing connections of the first and last neurons of these N groups of neurons 20 are configured as a function of the coefficients αk of the weighted sum to be calculated.
- The first neuron connected to the input neuron 181 k (0≤k<N) is the emitter node of an excitation ge-synapse 182 k having a weight of αk·wacc and a delay of Tmin+Tsyn. The last neuron connected to the input neuron 181 k is the emitter node of an inhibiting ge-synapse 183 k having a weight of −αk·wacc and the delay Tsyn.
- The
acc neuron 184 accumulates the terms αk·xk. Thus, for each input k, the acc neuron 184 is the receiver node of the excitation ge-synapse 182 k and of the inhibiting ge-synapse 183 k.
- The circuit 180 further comprises a sync neuron 185 that is the receiver node of N V-synapses, each having a weight of we/N and the delay Tsyn, respectively coming from the last neurons connected to the N input neurons 181 k (0≤k<N). The sync neuron 185 is the emitter node of an excitation ge-synapse 186 having the weight wacc and the delay Tsyn, the receiver node of which is the acc neuron 184.
- For each input having two spikes separated by Δtk=Tmin+xk·Tcod on the input neuron 181 k, the acc neuron 184 integrates the quantity αk·Vt/Tmax per unit time over a duration Δtk−Tmin=xk·Tcod.
- Once all the second spikes of the N input signals have been received, the sync neuron 185 is triggered and excites the acc neuron 184 via the ge-synapse 186. The potential of the acc neuron 184 continues to grow for a residual time equal to Tmax−Σk=0 N-1 αk·xk·Tcod. At this time, the threshold Vt is reached by the acc neuron 184 that triggers an event.
- The delay of this event with respect to that delivered by the sync neuron 185 is Tmax−Σk=0 N-1 αk·xk·Tcod=ƒ(1−Σk=0 N-1 αk·xk)=ƒ(1−s). The weighted sum s is only made accessible by the circuit 180 in its inverted form (1−s).
- The
circuit 180 functions in the way that was just described under the condition that Tcod·Σk=0 N-1 αk·xk<Tmax. The coefficients αk can be normalised in order for this condition to be met for all the possible values of the xk (0≤xk≤1), i.e. such that
-
Σk=0 N-1 αk < Tmax/Tcod
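- The timing reasoning of this accumulation stage can be replayed numerically. The sketch below is a behavioural model only (not the spiking network itself): it accumulates each term at the rate αk·Vt/Tmax for a duration xk·Tcod, then lets the potential grow at Vt/Tmax until the threshold, and checks that the residual delay equals ƒ(1−s), with Tmax taken as Tmin+Tcod, consistently with ƒ:

```python
# Behavioural sketch of accumulation circuit 180 (delays Tsyn/Tneu omitted):
# each input k contributes alpha_k * Vt / Tmax per unit time during x_k * Tcod,
# then the sync event lets growth continue at rate Vt / Tmax up to threshold Vt.
T_MIN, T_COD = 10.0, 100.0
T_MAX = T_MIN + T_COD   # assumed relation between Tmax, Tmin and Tcod
VT = 10.0               # mV, firing threshold

def acc_output_interval(alphas, xs):
    """Delay between the sync event and the acc event, i.e. f(1 - s)."""
    v = sum(a * VT / T_MAX * x * T_COD for a, x in zip(alphas, xs))
    residual = (VT - v) / (VT / T_MAX)   # time left at rate Vt / Tmax
    return residual                       # = Tmax - sum(alpha_k * x_k) * Tcod

alphas, xs = [0.25, 0.5], [0.8, 0.4]
s = sum(a * x for a, x in zip(alphas, xs))            # 0.4
delay = acc_output_interval(alphas, xs)
assert abs(delay - (T_MAX - s * T_COD)) < 1e-9
assert abs(delay - (T_MIN + (1 - s) * T_COD)) < 1e-9  # reads as f(1 - s)
```

The circuit thus hands the sum over in its inverted form (1−s); the next section removes the inversion.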
- C.3. Weighted Sum
- A
weighted addition circuit 190 can have the structure shown in FIG. 19.
- In order to obtain the representation of the weighted sum s according to (16), a circuit 180 for weighted accumulation of the type of that described in reference to FIG. 18 is associated with another acc neuron 188 and with an output neuron 189.
- The acc neuron 188 is the receiver node of an excitation ge-synapse 191 having the weight wacc and the delay Tsyn, and the emitter node of an excitation V-synapse 192 having the weight we and a delay of Tmin+Tsyn. The output neuron 189 is also the receiver node of an excitation V-synapse 193 having the weight we and the delay Tsyn.
- The linearly changing accumulation starts in the acc neuron 188 at the same time as it restarts in the acc neuron 184 of the circuit 180, the two acc neurons 184, 188 being excited on the ge-synapses 186, 191 by the same event coming from the sync neuron 185. Their residual accumulation times, until the threshold Vt is reached, are, respectively, Tmax−Σk=0 N-1 αk·xk·Tcod and Tmax. Because the synapse 192 has a relative delay of Tmin, the two events triggered on the output neuron 189 have between them the time interval Tmin+Σk=0 N-1 αk·xk·Tcod=ƒ(s).
- The expected weighted sum is represented at the output of the circuit 190. When N=2 and α0=α1=½, this circuit 190 becomes a simple adder circuit, with a scale factor of ½ in order to avoid overflows in the acc neuron 184.
- C.4. Linear Combination
- The more general case of linear combination is also expressed by equation (16) above, but the coefficients αk can be positive or negative, just like the input values xk. Without losing generality, the coefficients and input values are ordered in such a way that the coefficients α0, α1, . . . , αM−1 are positive or zero and the coefficients αM, αM+1, . . . , αN−1 are negative (N≥2, 0≤M≤N).
- In order to take into account the positive or negative values, the
circuit 200 for calculating a linear combination shown in FIG. 20 comprises two accumulation circuits 180A, 180B of the type of that described in reference to FIG. 18.
- The input neurons 181 k of the accumulation circuit 180A are respectively associated with the coefficients αk for 0≤k<M and with the inverted coefficients −αk for M≤k<N. These input neurons 181 k for 0≤k<M receive a pair of spikes representing xk when xk≥0 and thus form neurons of the input+ type for these values x0, . . . , xM−1. The input neurons 181 k of the circuit 180A for M≤k<N receive a pair of spikes representing xk when xk<0 and thus form neurons of the input− type for these values xM, . . . , xN−1.
- The input neurons 181 k of the circuit 180B for weighted accumulation are respectively associated with the inverted coefficients −αk for 0≤k<M and with the coefficients αk for M≤k<N. These input neurons 181 k for 0≤k<M receive a pair of spikes representing xk when xk<0 and thus form neurons of the input− type for these values x0, . . . , xM−1. The input neurons 181 k of the circuit 180B for M≤k<N receive a pair of spikes representing xk when xk≥0 and thus form neurons of the input+ type for these values xM, . . . , xN−1.
- The two accumulation circuits 180A, 180B share their sync neuron 185 that is thus the receiver node of 2N V-synapses, each having a weight of we/N and the delay Tsyn, coming from the last neurons coupled with the 2N input neurons 181 k. The sync neuron 185 of the linear combination calculation circuit 200 is therefore triggered once the N input values x0, . . . , xN−1, positive or negative, have been received on the neurons 181 k.
- A time ΔTA=Tmax−Σ{αk·xk≥0} |αk·xk|·Tcod=ƒ(1−Σ{αk·xk≥0} |αk·xk|) elapses between the respective events delivered by the sync neuron 185 and the acc neuron 184 of the circuit 180A.
- A time ΔTB=Tmax−Σ{αk·xk<0} |αk·xk|·Tcod=ƒ(1−Σ{αk·xk<0} |αk·xk|) elapses between the respective events delivered by the sync neuron 185 and the acc neuron 184 of the circuit 180B.
- A
subtractor circuit 170 that can be of the type of that shown in FIG. 17 then combines the time intervals ΔTA and ΔTB in order to produce the representation of |s| = |Σ{αk·xk≥0} |αk·xk| − Σ{αk·xk<0} |αk·xk|| on an output indicative of the sign of s. For this, the linear combination calculation circuit 200 of FIG. 20 comprises two excitation V-synapses 198, 199, having the weight we and a delay of Tmin+Tsyn, directed towards the input neurons 141, 142 of the subtractor circuit 170. Moreover, an excitation V-synapse 201 having the weight we and the delay Tsyn goes from the acc neuron 184 of the circuit 180A to the input neuron 141 of the subtractor circuit 170. An excitation V-synapse 202 having the weight we and the delay Tsyn goes from the acc neuron 184 of the circuit 180B to the other input neuron 142 of the subtractor circuit 170.
- The output− neuron 144 and the output+ neuron 143 of the subtractor circuit 170 are respectively connected, via excitation V-synapses 205, 206 having the weight we and the delay Tsyn, to two other output+ and output− neurons 203, 204 that form the outputs of the circuit 200 for calculating a linear combination.
- The one of these two neurons that is triggered indicates the sign of the result s of the linear combination. It delivers a pair of events separated by the time interval Δtout = Tmin + |ΔTA − ΔTB| = ƒ(|Σ{αk·xk≥0} |αk·xk| − Σ{αk·xk<0} |αk·xk||) = ƒ(|s|).
- The availability of this result is indicated on the outside by a 'start' neuron 207 receiving two excitation V-synapses 208, 209, having the weight we and the delay Tsyn, coming from the output+ neuron 143 and the output− neuron 144 of the subtractor circuit 170. The start neuron 207 inhibits itself via a V-synapse 210, having the weight wi and the delay Tsyn. The start neuron 207 delivers a spike simultaneously to the first spike of the output+ or output− neuron 203, 204 which is activated.
- The coefficients αk can be normalised in order for the conditions Σ{αk·xk≥0} |αk·xk|·Tcod<Tmax and Σ{αk·xk<0} |αk·xk|·Tcod<Tmax to be met for all the possible values of the xk, i.e. in such a way that
-
Σk=0 N-1 |αk| < Tmax/Tcod
- in order for the circuit 200 for calculating a linear combination to function as described above. The normalisation factor must therefore be taken into account in the result.
- D.1. Logarithm
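- The sign-split bookkeeping of the linear combination circuit 200 described above (separate accumulation of the positive-product and negative-product terms, then subtraction of the two intervals) can be summarised by the following behavioural sketch (illustrative names only; Tmax assumed equal to Tmin+Tcod):

```python
# Behavioural sketch of linear combination circuit 200: positive-product terms
# go to accumulator A, negative-product terms to accumulator B, and the
# subtractor turns the two sync->acc delays into f(|s|) plus a sign.
T_MIN, T_COD = 10.0, 100.0
T_MAX = T_MIN + T_COD

def linear_combination(alphas, xs):
    """Return (interval encoding |s|, sign of s), as on outputs 203/204."""
    pos = sum(abs(a * x) for a, x in zip(alphas, xs) if a * x >= 0)
    neg = sum(abs(a * x) for a, x in zip(alphas, xs) if a * x < 0)
    dTA = T_MAX - pos * T_COD      # sync -> acc event, circuit 180A
    dTB = T_MAX - neg * T_COD      # sync -> acc event, circuit 180B
    sign = 1 if pos >= neg else -1
    return T_MIN + abs(dTA - dTB), sign   # f(|s|)

dt, sign = linear_combination([0.5, -0.5], [0.3, 0.7])
s = 0.5 * 0.3 - 0.5 * 0.7              # -0.2
assert sign == -1
assert abs(dt - (T_MIN + abs(s) * T_COD)) < 1e-9
```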
-
FIG. 21 shows a circuit 210 for calculating the natural logarithm of a number x∈]0, 1], an encoded representation of which is produced by an input neuron 211 in the form of two events occurring at times tin 1 and tin 2=tin 1+Δt (FIG. 22) with Δt=ƒ(x)=Tmin+x·Tcod.
- The input neuron 211 belongs to a group of nodes 20 similar to that described in reference to FIG. 2. The first neuron 213 of this group 20 is the emitter node of an excitation ge-synapse 212 having the weight w̄acc and a delay of Tmin+Tsyn, while the last neuron 215 is the emitter node of an inhibiting ge-synapse 214 having a weight of −w̄acc and the delay Tsyn. The two ge-synapses 212, 214 have the same acc neuron 216 as a receiver node. From the last neuron 215 to the acc neuron 216, there is also a gƒ-synapse 217 having the weight gmult and a gate-synapse 218 having a weight of 1 and the delay Tsyn.
- The circuit 210 further comprises an output neuron 220 that is the receiver node of an excitation V-synapse 221 having the weight we and a delay of 2·Tsyn coming from the last neuron 215, and of an excitation V-synapse 222 having the weight we and a delay of Tmin+Tsyn coming from the acc neuron 216.
- The operation of the
logarithm calculation circuit 210 according to FIG. 21 is illustrated by FIG. 22.
- The emission of the first spike at time tin 1 at the input neuron 211 triggers an event at the output of the first neuron 213 at time tfirst 1=tin 1+Tsyn+Tneu. The first neuron 213 starts the accumulation by the acc neuron 216 at time tst 1=tin 1+Tmin+2·Tsyn+Tneu via the ge-synapse 212.
- The emission of the second spike at time tin 2=tin 1+Tmin+x·Tcod at the input neuron 211 causes the last neuron 215 to deliver an event at time tlast 1=tin 2+Tsyn+Tneu. This event transported by the ge-synapse 214 stops the accumulation carried out by the acc neuron 216 at time tend 1=tlast 1+Tsyn=tst 1+x·Tcod. At this time, the potential value Vt·x is stored in the acc neuron 216.
- Via the synapses 217 and 218, the last neuron 215 further activates the exponential change on the acc neuron 216 at the same time tend 1. It should be noted that alternatively, the event transported by the gƒ-synapse 217 could also arrive later at the acc neuron 216 if it is desired to store, in the latter, the potential value Vt·x while other operations are carried out in the device.
- After activation by the
synapses 217 and 218, the component gƒ of the acc neuron 216 changes according to:
-
gƒ(t) = gmult·e^(−(t−tend 1)/τƒ)
- and its membrane potential according to:
-
V(t) = Vt·x + (τƒ·gmult/τm)·(1 − e^(−(t−tend 1)/τƒ)), i.e., with gmult = Vt·τm/τƒ, V(t) = Vt·(1 + x − e^(−(t−tend 1)/τƒ))
- This potential V(t) reaches the threshold Vt and triggers an event on the V-
synapse 222 at time tacc 1=tend 1−τƒ·log(x).
- A first event is triggered on the output neuron 220 because of the V-synapse 221 at time tout 1=tlast 1+2Tsyn+Tneu=tend 1+Tsyn+Tneu. The second event triggered by the synapse 222 occurs at time tout 2=tacc 1+Tmin+Tsyn+Tneu=tout 1+Tmin−τƒ·log(x).
- Finally, the two events delivered by the output neuron 220 are separated by a time interval
-
ΔTout = tout 2 − tout 1 = Tmin − τƒ·log(x)
- The representation of a number proportional to the natural logarithm log(x) of the input value x is properly obtained at the output. Since 0<x≤1, the logarithm log(x) is a negative value.
- If we call A the value
-
A = e^(−Tcod/τƒ)
- the
circuit 210 of FIG. 21 delivers the representation of logA(x) when it receives the representation of a real number x such that A≤x≤1, where logA(⋅) designates the base-A logarithm operation. If we consider that, in the form (11), the time interval between the two events delivered by the output neuron 220 can exceed Tmax, the circuit 210 delivers the representation of logA(x) for any number x such that 0<x≤1.
- D.2. Exponentiation
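- The relation just established for the logarithm circuit, ΔTout=Tmin−τƒ·log(x)=Tmin+logA(x)·Tcod with A=e^(−Tcod/τƒ), can be verified directly. A short Python check using the example parameters (τƒ=20 ms; function name illustrative):

```python
import math

# Logarithm circuit 210, idealised output interval.
T_MIN, T_COD = 10.0, 100.0
TAU_F = 20.0
A = math.exp(-T_COD / TAU_F)   # base of the delivered logarithm

def log_circuit_interval(x: float) -> float:
    """Interval between the two output spikes of circuit 210."""
    return T_MIN - TAU_F * math.log(x)

x = 0.5
dt = log_circuit_interval(x)
# The same interval, read through the code f(.), encodes log_A(x):
assert abs(dt - (T_MIN + math.log(x, A) * T_COD)) < 1e-9
assert dt > T_MIN   # log_A(x) > 0 since 0 < x <= 1 and 0 < A < 1
```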
-
FIG. 23 shows an exponentiation circuit 230 for a number x∈[0, 1], an encoded representation of which is produced by an input neuron 231 in the form of two events occurring at times tin 1 and tin 2=tin 1+Δt (FIG. 24) with Δt=ƒ(x)=Tmin+x·Tcod.
- The input neuron 231 belongs to a group of nodes 20 similar to that described in reference to FIG. 2. The first neuron 233 of this group 20 is the emitter node of a gƒ-synapse 232 having the weight gmult and a delay of Tmin+Tsyn, as well as of an excitation gate-synapse 234 having a weight of 1 and a delay of Tmin+Tsyn. The last neuron 235 of the group 20 is the emitter node of an inhibiting gate-synapse 236 having a weight of −1 and the delay Tsyn, as well as of an excitation ge-synapse 237 having the weight w̄acc and the delay Tsyn. These synapses all have the same acc neuron 238 as a receiver node.
- The circuit 230 further comprises an output neuron 240 that is the receiver node of an excitation V-synapse 241 having the weight we and a delay of 2·Tsyn coming from the last neuron 235, and of an excitation V-synapse 242 having the weight we and a delay of Tmin+Tsyn coming from the acc neuron 238.
- The operation of the
exponentiation circuit 230 according to FIG. 23 is illustrated by FIG. 24.
- The emission of the first spike at time tin 1 at the input neuron 231 triggers an event at the output of the first neuron 233 at time tfirst 1=tin 1+Tsyn+Tneu. The first neuron 233 starts an exponentially-growing accumulation in the acc neuron 238 at time tst 1=tin 1+Tmin+2·Tsyn+Tneu via the gƒ-synapse 232 and the gate-synapse 234.
- The component gƒ of the acc neuron 238 changes according to:
-
gƒ(t) = gmult·e^(−(t−tst 1)/τƒ)
- and its membrane potential according to:
-
V(t) = (τƒ·gmult/τm)·(1 − e^(−(t−tst 1)/τƒ)) = Vt·(1 − e^(−(t−tst 1)/τƒ))
- The emission of the second spike at time tin 2=tin 1+Tmin+x·Tcod at the
input neuron 231 causes the last neuron 235 to deliver an event at time tlast 1=tin 2+Tsyn+Tneu. This event transported by the gate-synapse 236 stops the exponentially-changing accumulation carried out by the acc neuron 238 at time tend 1=tlast 1+Tsyn=tst 1+x·Tcod. At this time, the potential value Vt·(1−A^x) is stored in the acc neuron 238, where, as above,
-
A = e^(−Tcod/τƒ)
- Via the ge-synapse 237, the last neuron 235 further activates the linear dynamics having the weight w̄acc on the acc neuron 238 at the same time tend 1.
- The membrane potential of the neuron 238 thus changes according to:
-
V(t) = Vt·(1 − A^x) + Vt·(t − tend 1)/Tcod
- This potential V(t) reaches the threshold Vt and triggers an event on the V-synapse 242 at time tacc 1=tend 1+A^x·Tcod.
- A first event is triggered on the output neuron 240 because of the V-synapse 241 at time tout 1=tlast 1+2Tsyn+Tneu=tend 1+Tsyn+Tneu. The second event triggered by the synapse 242 occurs at time tout 2=tacc 1+Tmin+Tsyn+Tneu=tout 1+Tmin+A^x·Tcod.
- Finally, the two events delivered by the output neuron 240 are separated by a time interval ΔTout=tout 2−tout 1=Tmin+A^x·Tcod=ƒ(A^x).
- The circuit 230 of FIG. 23 thus delivers the representation of A^x when it receives the representation of a number x between 0 and 1. This circuit can accept input values x greater than 1 (Δt>Tmax) and also deliver the representation of A^x on its output neuron 240.
- The circuit 230 of FIG. 23 carries out the inversion of the operation carried out by the circuit 210 of FIG. 21.
- This can be used to implement various non-linear calculations using simple operations between logarithm calculation and exponentiation circuits. For example, the sum of two logarithms allows multiplication to be carried out, the subtraction thereof allows division to be carried out, the sum of n times the logarithm allows a number x to be raised to a whole power n, etc.
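- These log/exp compositions reduce to ordinary arithmetic in the log domain. A behavioural sketch, assuming the same base A=e^(−Tcod/τƒ) for both circuits 210 and 230 (function names illustrative):

```python
import math

# Log-domain composition of circuits 210 (log_A) and 230 (A ** y).
T_COD, TAU_F = 100.0, 20.0
A = math.exp(-T_COD / TAU_F)

def log_a(x: float) -> float:   # value delivered by circuit 210
    return math.log(x, A)

def exp_a(y: float) -> float:   # value delivered by circuit 230
    return A ** y

x1, x2 = 0.8, 0.5
# Multiplication: add the two logarithms, then exponentiate.
assert abs(exp_a(log_a(x1) + log_a(x2)) - x1 * x2) < 1e-9
# Division: subtract the logarithms.
assert abs(exp_a(log_a(x1) - log_a(x2)) - x1 / x2) < 1e-9
# Whole power n: sum the logarithm n times.
n = 3
assert abs(exp_a(n * log_a(x1)) - x1 ** n) < 1e-9
```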
- D.3. Multiplication
-
FIG. 25 shows a multiplier circuit 250 that calculates the product of two values x1, x2, the encoded representations of which are produced, respectively, by two input neurons 251 1, 251 2 in the form of two pairs of events occurring at times tin1 1 and tin1 2=tin1 1+Δt1 for the value x1 and at times tin2 1 and tin2 2=tin2 1+Δt2 for the value x2 (FIG. 26) with Δt1=ƒ(x1)=Tmin+x1·Tcod and Δt2=ƒ(x2)=Tmin+x2·Tcod.
- Each input neuron 251 k (k=1 or 2) belongs to a group of nodes 20 k similar to that described in reference to FIG. 2. The first neuron 253 k of this group 20 k is the emitter node of an excitation ge-synapse 252 k having the weight w̄acc and a delay of Tmin+Tsyn, while the last neuron 255 k is the emitter node of an inhibiting ge-synapse 254 k having a weight of −w̄acc and the delay Tsyn. The two ge-synapses 252 k, 254 k from the group of nodes 20 k have, as a receiver node, the same acc neuron 256 k, which plays a role similar to the acc neuron 216 in FIG. 21.
- The
circuit 250 further comprises a sync neuron 260 that is the receiver node of two excitation V-synapses 261 1, 261 2 having a weight of we/2 and the delay Tsyn coming, respectively, from the last neurons 255 1, 255 2. A gƒ-synapse 262 having the weight gmult and the delay Tsyn and an excitation gate-synapse 264 having a weight of 1 and the delay Tsyn go from the sync neuron 260 to the acc neuron 256 1.
- A gƒ-synapse 265 having the weight gmult and the delay Tsyn and an excitation gate-synapse 266 having a weight of 1 and the delay Tsyn go from the acc neuron 256 1 to the acc neuron 256 2.
- The circuit 250 comprises another acc neuron 268 that plays a role similar to the acc neuron 238 in FIG. 23. The acc neuron 268 is the receiver node of a gƒ-synapse 269, having the weight gmult and a delay of 3Tsyn and of an excitation gate-synapse 270, having a weight of 1 and a delay of 3Tsyn, both coming from the sync neuron 260. Moreover, the acc neuron 268 is the receiver node of an inhibiting gate-synapse 271, having a weight of −1 and the delay Tsyn, and of an excitation ge-synapse 272, having the weight w̄acc and the delay Tsyn, both coming from the acc neuron 256 2.
- Finally, the circuit 250 has an output neuron 274 that is the receiver node of an excitation V-synapse 275, having the weight we and a delay of 2Tsyn, coming from the acc neuron 256 2 and of an excitation V-synapse 276, having the weight we and a delay of Tmin+Tsyn, coming from the acc neuron 268.
- The operation of the
multiplier circuit 250 according to FIG. 25 is illustrated by FIG. 26.
- Each of the two acc neurons 256 1, 256 2 initially behaves like the acc neuron 216 of FIG. 21, with a linear progression 278 1, 278 2 having the weight w̄acc on a first period having a respective duration of x1·Tcod, x2·Tcod, leading to storage of the potential values Vt·x1 and Vt·x2 in the acc neurons 256 1, 256 2.
- Emission of the second spike at time tin2 2=tin2 1+Tmin+x2·Tcod at the input neuron having the smallest value (the input neuron 251 2 in the example shown in FIG. 26 where x1>x2) stops the linearly changing accumulation in the corresponding acc neuron 256 2 via the ge-synapse 254 2 at time tlast2 1+Tsyn=tin2 2+2Tsyn+Tneu. The membrane potential of this acc neuron 256 2 thus has a plateau 279 that lasts until its reactivation via the synapses 265, 266. At time tlast2 1+Tsyn=tin2 2+2Tsyn+Tneu, the potential of the sync neuron 260 moves to the value Vt/2 because of the event received from the last neuron 255 2 via the V-synapse 261 2.
- Emission of the second spike at time tin1 2=tin1 1+Tmin+x1·Tcod at the input neuron having the largest value (the input neuron 251 1 in the case of FIG. 26) stops the linearly-changing accumulation in the corresponding acc neuron 256 1 via the ge-synapse 254 1 at time tlast1 1+Tsyn=tin1 2+2Tsyn+Tneu. At the same time, the potential of the sync neuron 260 reaches the value Vt because of the event received on the V-synapse 261 1. This results in emission of an event at time tsync 1=tin1 2+2Tsyn+2Tneu on the synapses 262 and 264. The exponential change 280 1 is then activated in the acc neuron 256 1 instead of the linear change 278 1 at time tst1 1=tsync 1+Tsyn. In parallel, the synapses 269, 270 activate the exponential change 281 in the acc neuron 268 at time tst3 1=tsync 1+3Tsyn.
- The potential of the acc neuron 256 1 reaches the value Vt and triggers an event on the synapses 265, 266 at time tlog1 1=tst1 1−τƒ·log(x1).
- The exponential change 280 2 is thus activated in the acc neuron 256 2 at time tst2 1=tlog1 1+Tsyn. The potential of this acc neuron 256 2 reaches the threshold Vt and triggers an event on the synapses 271, 272, 275 at time tlog2 1=tst2 1−τƒ·log(x2)=tsync 1−τƒ·log(x1·x2)+2Tsyn. The gate-synapse 271 deactivates the exponential change 281 in the acc neuron 268 at time tend3 1=tlog2 1+Tsyn, and simultaneously, the linear change 282 in the acc neuron 268 is activated via the ge-synapse 272, starting from the value:
-
V = Vt·(1 − x1·x2)
- The V-
synapse 275 triggers the emission of a first spike on the output neuron 274 at time tout 1=tlog2 1+2Tsyn+Tneu.
- The acc neuron 268 reaches the threshold Vt and triggers an event on the V-synapse 276 at time texp 1=tend3 1+x1·x2·Tcod. This results in emission of a second spike at the output neuron 274 at time tout 2=texp 1+Tmin+Tsyn+Tneu.
- Finally, the two events delivered by the output neuron 274 are separated by a time interval ΔTout=tout 2−tout 1=Tmin+x1·x2·Tcod=ƒ(x1·x2).
- The circuit 250 of FIG. 25 thus delivers, on its output neuron 274, the representation of the product x1·x2 of two numbers x1, x2 between A and 1, the respective representations of which it receives on its input neurons 251 1, 251 2.
- For this, the pairs of events did not have to be received in a synchronised manner on the input neurons 251 1, 251 2 since the sync neuron 260 handles the synchronisation.
- D.4. Signed Multiplication
-
FIG. 27 shows a multiplier circuit 290 that calculates the product of two signed values x1, x2. All the synapses shown in FIG. 27 have the delay Tsyn.
- For each input value xk (1≤k≤2), the multiplier circuit 290 comprises an input+ neuron 291 k and an input− neuron 292 k that are the emitter nodes of two respective V-synapses 293 k and 294 k having the weight we. The V-synapses 293 1 and 294 1 are directed towards an input neuron 251 1 of a multiplier circuit 250 of the type shown in FIG. 25, while the V-synapses 293 2 and 294 2 are directed towards the other input neuron 251 2 of the circuit 250.
- The multiplier circuit 290 has an output+ neuron 295 and an output− neuron 296 that are the receiver nodes of two respective excitation V-synapses 297 and 298 having the weight we coming from the output neuron 274 of the circuit 250.
- The multiplier circuit 290 also comprises four sign neurons 300-303 connected to form logic for selecting the sign of the result of the multiplication. Each sign neuron 300-303 is the receiver node of two respective excitation V-synapses having a weight of we/4 coming from two of the four input neurons 291 k, 292 k. The sign neuron 300 connected to the input+ neurons 291 1, 291 2 detects the reception of two positive inputs x1, x2. It forms the emitter node of an inhibiting V-synapse 305 having a weight of 2wi, going to the output− neuron 296. The sign neuron 303 connected to the input− neurons 292 1, 292 2 detects the reception of two negative inputs x1, x2. It forms the emitter node of an inhibiting V-synapse 308 having a weight of 2wi going to the output− neuron 296. The sign neuron 301 connected to the input− neuron 292 1 and the input+ neuron 291 2 detects the reception of a negative input x1 and of a positive input x2. It forms the emitter node of an inhibiting V-synapse 306 having a weight of 2wi going to the output+ neuron 295. The sign neuron 302 connected to the input+ neuron 291 1 and the input− neuron 292 2 detects the reception of a positive input x1 and of a negative input x2. It forms the emitter node of an inhibiting V-synapse 307 having a weight of 2wi going to the output+ neuron 295.
- Inhibiting V-synapses are arranged between the sign neurons 300-303 in order to ensure that only one of them acts in order to inhibit one of the output+ neuron 295 and the output− neuron 296. Each sign neuron 300-303 corresponding to a sign (+ or −) of the product is thus the emitter node of two inhibiting V-synapses having a weight of we/2 going, respectively, to the two sign neurons corresponding to the opposite sign.
- Thus arranged, the circuit 290 of FIG. 27 delivers two events separated by the time interval ƒ(|x1·x2|) on one of its outputs 295, 296, according to the sign of x1·x2, when the two numbers x1, x2 are presented with their respective signs on the inputs 291 k, 292 k.
- Logic for detecting a zero on one of the inputs can be added thereto, like in the case of FIG. 17, in order to make sure that an input of zero will produce the time interval Tmin between two events produced on the output+ neuron 295 and not the output− neuron 296.
- E.1. Integration
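- The exclusive sign selection performed by the four sign neurons 300-303 of the signed multiplier above can be tabulated as follows (a behavioural sketch of the selection rule only, not of the spiking implementation):

```python
def select_output(sign_x1: int, sign_x2: int) -> str:
    """Which output of circuit 290 stays active for non-zero inputs.
    Sign neurons 300/303 (same signs) inhibit output-; sign neurons
    301/302 (opposite signs) inhibit output+."""
    return "output+" if sign_x1 * sign_x2 > 0 else "output-"

assert select_output(+1, +1) == "output+"   # sign neuron 300 fires
assert select_output(-1, -1) == "output+"   # sign neuron 303 fires
assert select_output(-1, +1) == "output-"   # sign neuron 301 fires
assert select_output(+1, -1) == "output-"   # sign neuron 302 fires
```

The mutual inhibition between the sign neurons guarantees that exactly one row of this table is applied per multiplication.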
-
FIG. 28 shows a circuit 310 that reconstructs a signal from its derivatives provided in signed form on a neuron of a pair of input+ and input− neurons 311, 312. The integrated signal is presented, according to its sign, by a neuron of a pair of output+ and output− neurons 313, 314. The synapses 321-332 shown in FIG. 28 are all excitation V-synapses having the weight we. They all have the delay Tsyn except the synapse 329, the delay of which is Tmin+Tsyn.
- In order to carry out the integration, the circuit 310 uses a linear combination circuit 200 of the type shown in FIG. 20, with N=2 and coefficients α0=1 and α1=dt, where dt is the selected integration step size.
- The input+ neuron 311 and the input− neuron 312 are connected, respectively, to the input+ and input− neurons 181 1 of the circuit 200 that are associated with the coefficient α1=dt, by two V-synapses 321, 322.
- The other input+ and input− neurons 181 0 of the circuit 200, which are associated with the coefficient α0=1, are connected, respectively, by two V-synapses 323, 324 to two output+ and output− neurons 315, 316 of a circuit 317, the role of which is to provide an initialisation value x0 for the integration process. The circuit 317 substantially consists of a pair of output+ and output− neurons 315, 316 connected to the same recall neuron 15 in the manner shown in FIG. 1.
- Another init neuron 318 of the integration circuit 310 is the emitter node of a synapse 325, the receiver node of which is the recall neuron 15 of the circuit 317. The init neuron 318 loads the integrator with its initial value x0 stored in the circuit 317.
- Synapses 326, 327 are arranged to provide feedback from the output+ neuron 143 of the linear combination circuit 200 to its input+ neuron 181 0 and from the output− neuron 144 of the circuit 200 to its input− neuron 181 0.
- A start neuron 319 is the emitter node of two synapses 328, 329 that feed a zero value in the form of two events separated by the time interval Tmin on the input+ neuron 181 1 of the circuit 180.
- The output+ neuron 143 and the output− neuron 144 of the linear combination circuit 200 are the respective emitter nodes of two synapses 330, 331, the receiver nodes of which are, respectively, the output+ neuron 313 and the output− neuron 314 of the integration circuit 310.
- Finally, the integration circuit 310 has a new input neuron 320 that is the receiver node of a synapse 332 coming from the start neuron 207 of the linear combination circuit 200.
- The initial value x0 is, according to its sign, delivered on the output+ neuron 313 or the output− neuron 314 once the init neuron 318 and then the start neuron 319 have been activated. At the same time, an event is delivered by the new input neuron 320. This event signals, to the environment of the circuit 310, that the derivative value g′(k·dt), with k=0, can be provided. As soon as this derivative value g′(k·dt) is presented on the input+ neuron 311 or the input− neuron 312, a new integral value is delivered by the output+ neuron 313 or the output− neuron 314 and a new event delivered by the new input neuron 320 signals, to the environment of the circuit 310, that the next derivative value g′((k+1)·dt) can be provided. This process is repeated as many times as derivative values g′(k·dt) are provided (k=0, 1, 2, etc.).
- After a (k+1)-th derivative value g′(k·dt) has been provided to the integrator circuit 310, the representation of the following value is found at the output:
-
x0 + Σi=0 k g′(i·dt)·dt (23)
- The circuits described above in reference to
FIGS. 1-28 can be assembled and configured to execute numerous types of calculations in which the values manipulated at the input and/or output are represented by time intervals between events received or delivered by neurons.
- In particular, FIGS. 29, 31 and 33 illustrate examples of processing devices according to the invention used to solve differential equations. Calculations have been carried out with circuits built like in these figures, with parameters chosen, purely as an example, in the following manner: τm=100 s, τƒ=20 ms, Vt=10 mV, Tmin=10 ms and Tcod=100 ms.
- E.2. First-Order Differential Equation
-
FIG. 29 shows a processing device that implements the resolution of the differential equation:
-
dX(t)/dt = (X∞ − X(t))/τ (24)
- where τ and X∞ are parameters that can take on various values. The synapses shown in
FIG. 29 are all excitation V-synapses having the weight we and the delay Tsyn. - In order to solve equation (24), the device of
FIG. 29 uses: -
- a
linear combination circuit 200 as shown in FIG. 20, with N=2 and coefficients α0=−1/τ and α1=+1/τ;
- an integrator circuit 310 as shown in FIG. 28, with an integration step size dt; and
- a circuit 317 for providing the constant X∞, similar to the circuit 317 described in reference to FIG. 28, in the form of the time interval ƒ(|X∞|) between two spikes delivered either by its output+ neuron 315 or by its output− neuron 316, according to the sign of X∞.
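- Functionally, each cycle of this device performs one explicit Euler step of equation (24): the linear combination circuit evaluates (−X+X∞)/τ and the integrator adds it, scaled by dt, to X. A plain-Python rendering of that recurrence (a behavioural sketch, not the spiking network itself):

```python
def solve_first_order(x0, x_inf, tau, dt, steps):
    """Explicit Euler iteration realised by the device of FIG. 29."""
    x = x0
    out = [x]
    for _ in range(steps):
        dx = (-1.0 / tau) * x + (1.0 / tau) * x_inf   # coefficients a0, a1
        x = x + dx * dt                               # integrator circuit 310
        out.append(x)
    return out

xs = solve_first_order(x0=0.0, x_inf=0.4, tau=5.0, dt=0.5, steps=200)
# The iterates relax monotonically towards X_inf, as in the curves of FIG. 30.
assert abs(xs[-1] - 0.4) < 1e-3
assert all(xs[i] <= xs[i + 1] + 1e-12 for i in range(len(xs) - 1))
```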
- The constant X∞ is provided at one of the input+ and input−
neurons 181 1 associated with the coefficient α1=1/τ in the linear combination circuit 200 after each activation of the recall neuron 15 that is the receiver node of a synapse 340 coming from the new input neuron 320 of the integrator circuit 310. Two synapses 341, 342 provide feedback from the output node output+ 313 of the integrator circuit 310 to the other input node input+ 181 0 of the linear combination circuit 200, and from the output node output− 314 of the circuit 310 to the other input node input− 181 0 of the circuit 200. Two synapses 343, 344 go from the output node output+ 203 of the linear combination circuit 200 to the input node input+ 311 of the integrator circuit 310 and, respectively, from the output node output− 204 of the circuit 200 to the input node input− 312 of the circuit 310.
- The device of FIG. 29 has a pair of output+ and output− neurons 346, 347 that are the receiver nodes of two synapses coming from the output+ neuron 313 and the output− neuron 314 of the integrator circuit 310.
- The init and start neurons 348, 349 allow the process of integration to be initialised and started. The init neuron 348 must be triggered before the integration process in order to load the initial value into the integrator circuit 310. The start neuron 349 is triggered in order to deliver the first value from the circuit 310.
- The device of FIG. 29 is made using 118 neurons if the components as described in reference to the preceding figures are used. This number of neurons can be reduced via optimisation.
- Results of simulation of this device with various sets of parameters τ, X∞ and with an integration step size dt=0.5 are presented in FIG. 30A for various values of τ and in FIG. 30B for various values of X∞ (X∞=−0.2, X∞=0.1 and X∞=−0.4). Each point of the curves C1-C3, C′1-C′3 shown in FIGS. 30A and 30B corresponds to a respective output value encoded by a pair of spikes delivered by the output+ neuron 346 or the output− neuron 347. It is observed that the curves thus obtained for the solution X(t) of the differential equation (24) correspond to what is expected (via analytical resolution).
- E.3. Second-Order Differential Equation
-
FIG. 31 shows a processing device that implements the resolution of the differential equation: -
d²X/dt² + ξ·ω0·dX/dt + ω0²·X = ω0²·X∞ (25) - where ξ and ω0 are parameters that can take on various values. The synapses shown in
FIG. 31 are all excitation V-synapses having the weight wₑ and the delay Tsyn. Since the values manipulated in this example are all positive, it is not necessary to provide two distinct paths for the positive values and for the negative values. Only the path relating to the positive values is therefore included. - In order to solve equation (25), the device of
FIG. 31 uses: -
- a
linear combination circuit 200 as shown in FIG. 20, with N=3 and coefficients α0=α2=ω0² and α1=−ξ·ω0; - two
integrator circuits 310A, 310B like the one shown in FIG. 28, with an integration step size dt; and - a
circuit 317 for providing the constant X∞, similar to the circuit described with reference to FIG. 1, in the form of the time interval ƒ(X∞) between two spikes delivered by its output neuron 16 (X∞>0).
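Assembled this way, the linear-combination and integrator circuits carry out one explicit Euler step per recall cycle. The following is a minimal numerical sketch of both the first-order device of FIG. 29 and the present second-order device. Since equations (24) and (25) are rendered as images in the source, their forms here are assumptions read off the stated circuit coefficients: (24) is taken as the relaxation dX/dt = (X∞ − X)/τ (consistent with α1=1/τ), and (25) as the damped oscillation d²X/dt² + ξ·ω0·dX/dt + ω0²·X = ω0²·X∞ (consistent with α0=α2=ω0² and α1=−ξ·ω0).

```python
# Euler sketches of the two devices.  The equation forms are assumptions
# inferred from the circuit coefficients, since (24) and (25) appear only
# as images in the source document.

def first_order(x0, x_inf, tau, dt=0.5, n_steps=40):
    """dX/dt = (X_inf - X)/tau, one Euler step per recall cycle (FIG. 29)."""
    x = x0
    for _ in range(n_steps):
        x += dt * (x_inf - x) / tau      # linear combination 200, then integrator 310
    return x

def second_order(x0, v0, x_inf, xi, w0, dt=0.2, n_steps=400):
    """d2X/dt2 = -xi*w0*dX/dt - w0^2*(X - X_inf), chained integrators (FIG. 31)."""
    x, v = x0, v0
    for _ in range(n_steps):
        acc = -xi * w0 * v - w0 * w0 * (x - x_inf)  # linear combination 200
        v += dt * acc                               # integrator 310A: acc -> velocity
        x += dt * v                                 # integrator 310B: velocity -> X
    return x
```

In both cases the trajectory settles on X∞, which is the behaviour shown by the curves of FIGS. 30 and 32.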
- a
- The constant X∞ is provided at the
input neuron 181₂ associated with the coefficient α2=ω0² in the linear combination circuit 200 after each activation of the recall neuron 15 that is the receiver node of a synapse 350 coming from the new input neuron 320 of the second integrator circuit 310B. Two synapses 351, 352 provide feedback from the output node output 313 of the second integrator circuit 310B to the input node input 181₁ of the linear combination circuit 200 associated with the coefficient α1=−ξ·ω0 and, respectively, from the output node output 313 of the first integrator circuit 310A to the other input node input 181₀ of the circuit 200, associated with the coefficient α0=ω0². A synapse 353 goes from the output node output 203 of the linear combination circuit 200 to the input node input 311 of the first integrator circuit 310A. A synapse 354 goes from the output node output 313 of the first integrator circuit 310A to the input node input 311 of the second integrator circuit 310B. - The device of
FIG. 31 has an output neuron 356 that is the receiver node of a synapse coming from the output neuron 313 of the second integrator circuit 310B. - The init and start
neurons 358, 359 allow the process of integration to be initialised and started. The init neuron 358 must be triggered before the integration process in order to load the initial value into the integrator circuits 310A, 310B. The start neuron 359 is triggered in order to deliver the first value from the second integrator circuit 310B. - The device of
FIG. 31 is made using 187 neurons if the components described with reference to the preceding figures are used. This number of neurons can be reduced via optimisation. - Results of simulation of this device with various sets of parameters ξ, ω0 and with an integration step size dt=0.2 and X∞=−0.5 are presented in
FIG. 32A for various values of ω0 and in FIG. 32B for various values of ξ. Each point on the curves D1-D3, D′1-D′3 shown in FIGS. 32A and 32B corresponds to a respective output value encoded by a pair of spikes delivered by the output neuron 356. It is clear that the curves thus obtained for the solution X(t) of the differential equation (25) again correspond to what is expected. - E.4. Resolution of a System of Non-Linear Differential Equations
-
FIG. 33 shows a processing device that implements the resolution of the system of non-linear differential equations proposed by E. Lorenz for the modelling of a deterministic non-periodic flow (“Deterministic Nonperiodic Flow”, Journal of the Atmospheric Sciences, Vol. 20, No. 2, pages 130-141, March 1963): -
dX/dt = σ·(Y−X); dY/dt = X·(ρ−Z)−Y; dZ/dt = X·Y−β·Z (26) - In order to make sure that the system modelled has a chaotic behaviour, the device of
FIG. 33 was simulated with the choice of parameters σ=10, β=8/3 and ρ=28. The variables were scaled in order to obtain state variables X, Y and Z each changing within the interval [0, 1] in such a way that they could be represented in the form (11) above. The initial state of the system was set to X=−0.15, Y=−0.20, and Z=0.20. The integration step size used was dt=0.01. - The synapses shown in
FIG. 33 are all excitation V-synapses having the weight wₑ and the delay Tsyn. In order to simplify the drawing, only one path is shown, but it should be understood that, in each case, there is a path for the positive values of the variables and, in parallel, a path for their negative values. - In order to solve the system (26), the device of
FIG. 33 uses: -
- two signed
multiplier circuits 290A, 290B like that shown in FIG. 27 in order to calculate the non-linearities contained in the derivatives of X, Y and Z; - three
linear combination circuits 200A, 200B, 200C like that shown in FIG. 20 in order to calculate the derivatives of X, Y and Z; - a signed
synchroniser circuit 90 of the type shown in FIG. 8 with N=3 in order to wait for the three derivatives to be calculated before changing the state of the system; - three
integrator circuits 310A, 310B, 310C having a step size dt like that shown in FIG. 28 in order to calculate the new state from the derivatives of X, Y and Z.
- two signed
- The
linear combination circuit 200A is configured with N=2 and coefficients α0=σ and α1=−σ. Its input neuron 181A0 is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 181A1 is excited from the output neuron 313B of the integrator circuit 310B. Its output neuron 203A is the emitter node of a synapse going to the input neuron 91₀ of the synchroniser circuit 90. - The
linear combination circuit 200B is configured with N=3 and coefficients α0=ρ and α1=α2=−1. Its input neuron 181B0 is excited from the output neuron 313B of the integrator circuit 310B, its input neuron 181B1 is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 181B2 is excited from the output neuron 295A of the multiplier circuit 290A. Its output neuron 203B is the emitter node of a synapse going to the input neuron 91₁ of the synchroniser circuit 90. - The
linear combination circuit 200C is configured with N=2 and coefficients α0=1 and α1=−β. Its input neuron 181C0 is excited from the output neuron 295B of the multiplier circuit 290B, and its input neuron 181C1 is excited from the output neuron 313C of the integrator circuit 310C. Its output neuron 203C is the emitter node of a synapse going to the input neuron 91₂ of the synchroniser circuit 90. - Three synapses go, respectively, from the output neuron 92₀ of the
synchroniser circuit 90 to the input neuron 311A of the integrator circuit 310A, from the output neuron 92₁ of the circuit 90 to the input neuron 311B of the integrator circuit 310B, and from the output neuron 92₂ of the circuit 90 to the input neuron 311C of the integrator circuit 310C. - The
input neuron 291A1 of the multiplier circuit 290A is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 291A2 is excited from the output neuron 313C of the integrator circuit 310C. The input neuron 291B1 of the multiplier circuit 290B is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 291B2 is excited from the output neuron 313B of the integrator circuit 310B. - The device of
FIG. 33 has three output neurons 361, 362 and 363 that are the receiver nodes of three respective excitation V-synapses coming from the output neurons 313A, 313B and 313C of the integrator circuits 310A, 310B, 310C. These three output neurons 361-363 deliver pairs of events, the intervals of which represent values of the solution {X(t), Y(t), Z(t)} calculated for the system (26). - The device of
FIG. 33 is made using 549 neurons if the components described with reference to the preceding figures are used. This number of neurons can be significantly reduced via optimisation. - The points in
FIG. 34 each correspond to a triplet {X(t), Y(t), Z(t)} of output values encoded by three pairs of spikes delivered by the three output neurons 361-363, respectively, in a three-dimensional graph illustrating a simulation of the device shown in FIG. 33. The point P represents the initialisation values X(0), Y(0), Z(0) of the simulation. The other points represent triplets calculated by the device of FIG. 33. - The system behaves in the expected manner, in accordance with the strange attractor described by Lorenz.
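The attractor of FIG. 34 can be reproduced with a conventional forward-Euler integration of the Lorenz system, using the same parameters as the simulation (σ=10, β=8/3, ρ=28, dt=0.01). The canonical unscaled variables are used here for clarity; the patent's device additionally rescales X, Y and Z into [0, 1] for the interval representation.

```python
# Forward-Euler integration of the Lorenz system with the parameters of the
# simulation of FIG. 34.  Each Euler step corresponds to one pass through the
# three linear-combination circuits, the synchroniser and the three
# integrators of FIG. 33.

def lorenz_euler(state, sigma=10.0, beta=8.0 / 3.0, rho=28.0,
                 dt=0.01, n_steps=5000):
    x, y, z = state
    points = [(x, y, z)]
    for _ in range(n_steps):
        dx = sigma * (y - x)        # circuit 200A
        dy = x * (rho - z) - y      # circuit 200B, using multiplier 290A
        dz = x * y - beta * z       # circuit 200C, using multiplier 290B
        # The synchroniser 90 waits for all three derivatives before the
        # integrators 310A-310C apply the common Euler step:
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        points.append((x, y, z))
    return points

# The trajectory stays on the bounded strange attractor.
pts = lorenz_euler((1.0, 1.0, 1.0))
```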
It has been shown that the calculation architecture proposed, with the representation of the data in the form of time intervals between events in a set of processing nodes, allows relatively simple circuits to be designed in order to carry out elementary functions in a very efficient and fast manner. In general, the results of the calculations are available as soon as the various input data have been provided (possibly after a few synaptic delays).
- These circuits can then be assembled to carry out more sophisticated calculations. They form a sort of brick from which powerful calculation structures can be built. Examples of this have been shown with respect to the resolution of differential equations.
- When the elementary circuits are assembled, it is possible to optimise the number of neurons used. For example, some of the circuits were described with input neurons, and/or output neurons and/or first, last neurons. In practice, these neurons at the interfaces between elementary circuits can be eliminated without changing the functionality carried out.
- The processing nodes are typically organised as a matrix. This lends itself well in particular to an implementation using FPGA.
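As an illustration of this matrix organisation, a sketch with per-connection weight and delay matrices and an event queue follows. All names and the propagation rule here are hypothetical stand-ins for the patent's node model, not taken from the source.

```python
# Hypothetical sketch of a matrix of processing nodes: an N x N array where
# entry (src, dst) holds the weight and delay of the connection from node
# src to node dst.  Events (spikes) are routed through a time-ordered queue.

import heapq

N = 4
weights = [[0.0] * N for _ in range(N)]
delays = [[1.0] * N for _ in range(N)]   # synaptic delays (arbitrary units)

# Configure one connection, as the configuration logic would: node 0 -> node 1.
weights[0][1] = 0.5
delays[0][1] = 2.0

# Event-driven propagation: each queued entry is (time, source node).
events = [(0.0, 0)]                      # node 0 fires at t = 0
received = []
while events:
    t, src = heapq.heappop(events)
    for dst in range(N):
        if weights[src][dst] != 0.0:
            # The spike reaches dst after the configured delay, scaled by
            # the configured weight.
            received.append((t + delays[src][dst], dst, weights[src][dst]))

print(received)  # [(2.0, 1, 0.5)]
```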
- A
programmable array 400 forming the set of processing nodes, or a portion of this set, in an exemplary implementation of the processing device is illustrated schematically in FIG. 35. The array 400 consists of multiple neurons all having the same model of behaviour according to the events received on their connections. For example, the behaviour can be modelled by the equations (1) indicated above, with identical parameters τm and τƒ for the various nodes of the array. - Programming or
configuration logic 420 is associated with the array 400 in order to adjust the synaptic weights and the delay parameters of the connections between the nodes of the array 400. This configuration is carried out in a manner analogous to that which is routinely practised in the field of artificial neural networks. In the present context, the configuration of the parameters of the connections is carried out according to the calculation program that will be executed, while taking into account the relationship used between the time intervals and the values that they represent, for example the relationship (11). If the program is broken up into elementary operations, the configuration can result from an assembly of circuits of the type of those that were described above. This configuration is produced under the control of a control unit 410 provided with a man-machine interface. - Another role of the
control unit 410 is to provide the input values to the programmable array 400, in the form of events separated by suitable time intervals, in order for the processing nodes of the array 400 to execute the calculation and deliver the results. These results are quickly recovered by the control unit 410 in order to be presented to a user or to an application that uses them.
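The control unit's conversion between values and event pairs can be sketched as below. The affine mapping and the names T_MIN and T_COD are assumptions standing in for relationship (11), whose exact form is not reproduced in this excerpt.

```python
# Illustrative encoding of a value as the time interval between two events:
# interval = T_MIN + x * T_COD maps x in [0, 1] onto [T_MIN, T_MIN + T_COD].
# T_MIN, T_COD and the affine form are assumed, standing in for (11).

T_MIN = 10.0   # minimal interval (arbitrary time units)
T_COD = 100.0  # coding range

def encode(x, t0=0.0):
    """Return the pair of event timestamps representing x."""
    assert 0.0 <= x <= 1.0
    return (t0, t0 + T_MIN + x * T_COD)

def decode(t_first, t_second):
    """Recover x from the interval between two events."""
    return (t_second - t_first - T_MIN) / T_COD

t1, t2 = encode(0.25)   # interval of 35.0 time units
```

The control unit would apply `encode` to the input values before stimulating the array, and `decode` to the output event pairs before presenting the results.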
- Moreover, it is relatively easy to have a pipelined organisation of the calculations, for the execution of algorithms that are well suited to this type of organisation.
- The embodiments described above are illustrations of the present invention. Various modifications can be made to them without departing from the scope of the invention that emerges from the appended claims.
Claims (30)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR1556659A FR3038997A1 (en) | 2015-07-13 | 2015-07-13 | DATA PROCESSING DEVICE WITH VALUE REPRESENTATION THROUGH INTERVALS OF TIME BETWEEN EVENTS |
| FR1556659 | 2015-07-13 | ||
| PCT/FR2016/051717 WO2017009543A1 (en) | 2015-07-13 | 2016-07-06 | Data-processing device with representation of values by time intervals between events |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20180357527A1 true US20180357527A1 (en) | 2018-12-13 |
Family
ID=54848671
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/743,642 Abandoned US20180357527A1 (en) | 2015-07-13 | 2016-07-06 | Data-processing device with representation of values by time intervals between events |
Country Status (9)
| Country | Link |
|---|---|
| US (1) | US20180357527A1 (en) |
| EP (1) | EP3323090A1 (en) |
| JP (1) | JP6732880B2 (en) |
| KR (1) | KR20180077148A (en) |
| CN (1) | CN108369660A (en) |
| CA (1) | CA2992036A1 (en) |
| FR (1) | FR3038997A1 (en) |
| IL (1) | IL256813A (en) |
| WO (1) | WO2017009543A1 (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3605401A1 (en) | 2018-07-31 | 2020-02-05 | GrAl Matter Labs S.A.S. | Data processing module, data processing system and data processing method |
| EP3617957A1 (en) | 2018-08-29 | 2020-03-04 | GrAl Matter Labs S.A.S. | Neuromorphic processing method and update utility for use therein |
| EP3640862A1 (en) | 2018-10-15 | 2020-04-22 | GrAl Matter Labs S.A.S. | Neural network evaluation tool and method |
| CN111506384B (en) * | 2019-01-31 | 2022-12-09 | 中科寒武纪科技股份有限公司 | Simulation operation method and simulator |
| EP3716155A1 (en) | 2019-03-27 | 2020-09-30 | Grai Matter Labs | Data processing node and data processing engine |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6581046B1 (en) * | 1997-10-10 | 2003-06-17 | Yeda Research And Development Co. Ltd. | Neuronal phase-locked loops |
| KR100272167B1 (en) * | 1998-07-13 | 2000-11-15 | 윤종용 | Reference signal generating circuit & sdram having the same |
| CN100390774C (en) * | 2001-11-16 | 2008-05-28 | 陈垣洋 | Plausible Neural Networks with Supervised and Unsupervised Cluster Analysis |
| JP5672489B2 (en) * | 2011-02-08 | 2015-02-18 | ソニー株式会社 | Data processing apparatus and data processing method |
| GB2496886A (en) * | 2011-11-24 | 2013-05-29 | Melexis Technologies Nv | Determining network address of integrated circuit network node |
| US8903746B2 (en) * | 2012-03-22 | 2014-12-02 | Audrey Kudritskiy | System and method for viewing, modifying, storing, and running artificial neural network components |
| US20140044206A1 (en) * | 2012-08-13 | 2014-02-13 | Telefonaktiebolaget L M Ericsson (Publ) | Methods of mapping retransmissions responsive to bundled nack messages and related devices |
| WO2014081671A1 (en) * | 2012-11-20 | 2014-05-30 | Qualcomm Incorporated | Dynamical event neuron and synapse models for learning spiking neural networks |
| CN104605845B (en) * | 2015-01-30 | 2017-01-25 | 南京邮电大学 | A Method of EEG Signal Processing Based on DIVA Model |
-
2015
- 2015-07-13 FR FR1556659A patent/FR3038997A1/en active Pending
-
2016
- 2016-07-06 CN CN201680045376.1A patent/CN108369660A/en active Pending
- 2016-07-06 WO PCT/FR2016/051717 patent/WO2017009543A1/en not_active Ceased
- 2016-07-06 CA CA2992036A patent/CA2992036A1/en not_active Abandoned
- 2016-07-06 KR KR1020187001017A patent/KR20180077148A/en not_active Withdrawn
- 2016-07-06 JP JP2018501204A patent/JP6732880B2/en not_active Expired - Fee Related
- 2016-07-06 EP EP16750928.0A patent/EP3323090A1/en not_active Withdrawn
- 2016-07-06 US US15/743,642 patent/US20180357527A1/en not_active Abandoned
-
2018
- 2018-01-09 IL IL256813A patent/IL256813A/en unknown
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10831447B2 (en) * | 2016-08-19 | 2020-11-10 | Sony Corporation | Multiply-accumulate operation device |
| US11392349B2 (en) | 2016-08-19 | 2022-07-19 | Sony Group Corporation | Multiply-accumulate operation device |
| US20220061818A1 (en) * | 2020-09-01 | 2022-03-03 | Canon Medical Systems Corporation | Hypercomplex-number operation device and medical image diagnostic apparatus |
| US20230068675A1 (en) * | 2021-08-26 | 2023-03-02 | Electronics And Telecommunications Research Institute | Encoder and operation method thereof |
| US12437189B2 (en) * | 2021-08-26 | 2025-10-07 | Electronics And Telecommunications Research Institute | Encoder and operation method thereof |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108369660A (en) | 2018-08-03 |
| JP6732880B2 (en) | 2020-07-29 |
| IL256813A (en) | 2018-03-29 |
| FR3038997A1 (en) | 2017-01-20 |
| WO2017009543A1 (en) | 2017-01-19 |
| JP2018529143A (en) | 2018-10-04 |
| KR20180077148A (en) | 2018-07-06 |
| CA2992036A1 (en) | 2017-01-19 |
| EP3323090A1 (en) | 2018-05-23 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE - CNR Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENOSMAN, RYAD;LAGORCE, XAVIER;REEL/FRAME:046061/0467 Effective date: 20170309 Owner name: UNIVERSITE PIERRE ET MARIE CURIE (PARIS 6), FRANCE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENOSMAN, RYAD;LAGORCE, XAVIER;REEL/FRAME:046061/0467 Effective date: 20170309 Owner name: INSERM (INSTITUT NATIONAL DE LA SANTE ET DE LA REC Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENOSMAN, RYAD;LAGORCE, XAVIER;REEL/FRAME:046061/0467 Effective date: 20170309 Owner name: SORBONNE UNIVERSITE, FRANCE Free format text: MERGER;ASSIGNORS:UNIVERSITE PIERRE ET MARIE CURIE (PARIS 6);UNIVERSITE PARIS SORBONNE;REEL/FRAME:046350/0854 Effective date: 20170421 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |