
SE2250397A1 - A data processing system comprising a network, a method, and a computer program product - Google Patents

A data processing system comprising a network, a method, and a computer program product

Info

Publication number
SE2250397A1
Authority
SE
Sweden
Prior art keywords
nodes
weights
node
inputs
input
Prior art date
Application number
SE2250397A
Other languages
Swedish (sv)
Other versions
SE547197C2 (en)
Inventor
Henrik Jörntell
Jonas Enander
Linus Mårtensson
Udaya Rongala
Original Assignee
IntuiCell AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by IntuiCell AB filed Critical IntuiCell AB
Priority to KR1020247031252A priority Critical patent/KR20240154584A/en
Priority to PCT/SE2023/050153 priority patent/WO2023163637A1/en
Priority to EP23760478.0A priority patent/EP4483300A1/en
Priority to US18/840,928 priority patent/US20250165779A1/en
Priority to JP2024549659A priority patent/JP2025508808A/en
Publication of SE2250397A1 publication Critical patent/SE2250397A1/en
Publication of SE547197C2 publication Critical patent/SE547197C2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Algebra (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Operations Research (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Devices For Executing Special Programs (AREA)
  • Multi Processors (AREA)

Abstract

The disclosure relates to a data processing system (100), configured to have one or more system input(s) (110a, 110b, ..., 110z) comprising data to be processed and a system output (120), comprising: a network, NW, (130) comprising a plurality of nodes (130a, 130b, ..., 130x), each node configured to have a plurality of inputs (132a, 132b, ..., 132y), each node (130a, 130b, ..., 130x) comprising a weight (Wa, ..., Wy) for each input (132a, 132b, ..., 132y), and each node configured to produce an output (134a, 134b, ..., 134x); and one or more updating units (150) configured to update the weights (Wa, ..., Wy) of each node based on correlation of each respective input (132a, ..., 132c) of the node (130a) with the corresponding output (134a) during a learning mode; one or more processing units (140x) configured to receive a processing unit input and configured to produce a processing unit output by changing the sign of the received processing unit input; and wherein the system output (120) comprises the outputs (134a, 134b, ..., 134x) of each node (130a, 130b, ..., 130x), wherein nodes (130a, 130b) of a first group (160) of the plurality of nodes are configured to excite one or more other nodes (..., 130x) of the plurality of nodes (130a, 130b, ..., 130x) by providing the output (134a, 134b) of each of the nodes (130a, 130b) of the first group (160) of nodes as input (132d, ..., 132y) to the one or more other nodes (..., 130x), wherein nodes (130x) of a second group (162) of the plurality of nodes are configured to inhibit one or more other nodes (130a, 130b, ...) of the plurality of nodes (130a, 130b, ..., 130x) by providing the output (134x) of each of the nodes (130x) of the second group (162) as a processing unit input to a respective processing unit (140x), each respective processing unit (140x) being configured to provide the processing unit output as input (132b, 132e, ...) to the one or more other nodes (130a, 130b, ...) and wherein each node of the plurality of nodes (130a, 130b, ..., 130x) belongs to one of the first and second groups (160, 162) of nodes. The disclosure further relates to a method, and a computer program product.

Description

A data processing system comprising a network, a method, and a computer program product

Technical field

The present disclosure relates to a data processing system comprising a network, a method, and a computer program product. More specifically, the disclosure relates to a data processing system comprising a network, a method and a computer program product as defined in the introductory parts of the independent claims.
Background art

Artificial intelligence (AI) is known. One example of AI is Artificial Neural Networks (ANNs). ANNs can suffer from rigid representations that appear to make the network focus on limited features for identification. Such rigid representations may lead to inaccuracy in predictions. Thus, it may be advantageous to create networks/data processing systems that do not rely on rigid representations, such as networks/data processing systems in which inference is instead based on widespread representations across all nodes/elements, and/or in which no individual features are allowed to become too dominant, thereby providing more accurate predictions and/or more accurate data processing systems. Networks in which all nodes contribute to all representations are known as dense coding networks. So far, implementations of dense coding networks have been hampered by a lack of rules for autonomous network formation, making it difficult to generate functioning networks with high capacity/variation.
Therefore, there may be a need for an AI system with increased capacity and/or more efficient processing. Preferably, such AI systems provide or enable one or more of improved performance, higher reliability, increased efficiency, faster training, use of less computer power, use of less training data, use of less storage space, less complexity and/or use of less energy.
SE 2051375 A1 mitigates some of the above-mentioned problems. However, there may still be a need for more efficient AI/data processing systems and/or alternative approaches.
Summary

An object of the present disclosure is to mitigate, alleviate or eliminate one or more of the above-identified deficiencies and disadvantages in the prior art and solve at least the above-mentioned problem(s).
According to a first aspect there is provided a data processing system. The data processing system is configured to have one or more system input(s) comprising data to be processed and a system output. The data processing system comprises: a network, NW, comprising a plurality of nodes, each node configured to have a plurality of inputs, each node comprising a weight for each input, and each node configured to produce an output; and one or more updating units configured to update the weights of each node based on correlation of each respective input of the node with the corresponding output during a learning mode; one or more processing units configured to receive a processing unit input and configured to produce a processing unit output by changing the sign of the received processing unit input. The system output comprises the outputs of each node. Furthermore, nodes of a first group of the plurality of nodes are configured to excite one or more other nodes of the plurality of nodes by providing the output of each of the nodes of the first group of nodes as input to the one or more other nodes. Moreover, nodes of a second group of the plurality of nodes are configured to inhibit one or more other nodes of the plurality of nodes by providing the output of each of the nodes of the second group as a processing unit input to a respective processing unit, each respective processing unit being configured to provide the processing unit output as input to the one or more other nodes. Each node of the plurality of nodes belongs to one of the first and second groups of nodes.
According to some embodiments, the system input(s) comprises sensor data of a p|ura|ity of contexts/tasks.
According to some embodiments, the updating unit comprises, for each weight, a probability value for increasing the weight, and during the learning mode, the data processing system is configured to limit the ability of a node to inhibit or excite the one or more other nodes by: providing a first set point for a sum of all weights associated with the inputs to the one or more other nodes; comparing the first set point to the sum of all weights associated with the inputs to the one or more other nodes; if the first set point is smaller than the sum, decreasing the probability values associated with those weights; and if the first set point is greater than the sum, increasing the probability values associated with those weights.
According to some embodiments, during the learning mode, the data processing system is configured to limit the ability of a system input to inhibit or excite one or more nodes by: providing the first set point for a sum of all weights associated with the inputs to the one or more nodes; comparing the first set point to the sum of all weights associated with the inputs to the one or more nodes; if the first set point is smaller than the sum, decreasing the probability values associated with those weights; and if the first set point is greater than the sum, increasing the probability values associated with those weights.
According to some embodiments, each of the inputs to the one or more other nodes has a coordinate in a network space, and an amount of decreasing/increasing the weights of the inputs to the one or more other nodes is based on a distance between the coordinates of the inputs associated with the weights in the network space.
According to some embodiments, the system is further configured to set a weight to zero if the weight does not increase over a pre-set period of time.
According to some embodiments, the system is further configured to increase the probability value of a weight having a zero value if the sum of all weights associated with the inputs to the one or more other nodes does not exceed the first set point for a pre-set period of time.
According to some embodiments, during the learning mode, the data processing system is configured to increase the relevance of the output of a node to the one or more other nodes by: providing a first set point for a sum of all weights associated with the inputs to the one or more other nodes; comparing the first set point to the sum of all weights associated with the inputs to the one or more other nodes over a first time period; if the first set point is smaller than the sum over the entire length of the first time period, increasing the probability of changing the weights of the inputs to the node; and if the first set point is greater than the sum over the entire length of the first time period, decreasing the probability of changing the weights of the inputs to the node.
According to some embodiments, the updating unit comprises, for each weight, a probability value for increasing the weight, and, during the learning mode, the data processing system is configured to: provide a second set point for a sum of all weights associated with the inputs to a node; calculate the sum of all weights associated with the inputs to the node; compare the calculated sum to the second set point; if the calculated sum is greater than the second set point, decrease the probability values associated with those weights; and if the calculated sum is smaller than the second set point, increase the probability values associated with those weights.
According to some embodiments, each node comprises a plurality of compartments, and each compartment is configured to have a plurality of compartment inputs, each compartment comprising a compartment weight for each compartment input. Each compartment is configured to produce a compartment output, each compartment comprises an updating unit configured to update the compartment weights based on correlation during the learning mode, and the compartment output of each compartment is utilized to adjust, based on a transfer function, the output of the node the compartment is comprised in.
According to some embodiments, during the learning mode, the data processing system is configured to: detect whether the network is sparsely connected by comparing an accumulated weight change for the system input(s) over a second time period to a threshold value; and if the data processing system detects that the network is sparsely connected, increase the output of one or more of the plurality of nodes by adding a predetermined waveform to the output of one or more of the plurality of nodes for the duration of a third time period.
According to some embodiments, each node comprises an updating unit, each updating unit is configured to update the weights of the respective node based on correlation of each respective input of the node with the output of that node, and each updating unit is configured to apply a first function to the correlation if the associated node belongs to the first group of the plurality of nodes and apply a second function, different from the first function, to the correlation if the associated node belongs to the second group of the plurality of nodes in order to update the weights during the learning mode.
According to some embodiments, the data processing system is configured to, after updating of the weights has been performed, calculate a population variance of the outputs of the nodes of the network, compare the calculated population variance to a power law, and minimize an error or a mean squared error between the population variance and the power law by adjusting parameters of the network.
According to some embodiments, the data processing system is configured to learn, from the sensor data, to identify one or more entities while in a learning mode and thereafter configured to identify the one or more entities while in a performance mode, and the identified entity is one or more of a speaker, a spoken letter, syllable, phoneme, word or phrase present in the sensor data, an object or a feature of an object present in the sensor data, or a new contact event, an end of a contact event, a gesture or an applied pressure present in the sensor data.
According to a second aspect there is provided a computer-implemented or hardware-implemented method for processing data. The method comprises a) receiving one or more system input(s) comprising data to be processed; b) providing a plurality of inputs, at least one of the plurality of inputs being a system input, to a network, NW, comprising a plurality of first nodes; c) receiving an output from each first node; d) providing a system output, comprising the output of each first node; e) exciting, by nodes of a first group of the plurality of nodes, one or more other nodes of the plurality of nodes by providing the output of each of the nodes of the first group of nodes as input to the one or more other nodes; f) inhibiting, by nodes of a second group of the plurality of nodes, one or more other nodes of the plurality of nodes by providing the output of each of the nodes of the second group as a processing unit input to a respective processing unit, each respective processing unit being configured to provide the processing unit output as input to the one or more other nodes; g) optionally updating, by one or more updating units, weights based on correlation; h) optionally repeating a)-g) until a learning criterion is met; and i) repeating a)-f) until a stop criterion is met. Each node of the plurality of nodes belongs to one of the first and second groups of nodes.
According to some embodiments, the method further comprises initializing weights by setting the weights to zero and adding a predetermined waveform to the output of one or more of the plurality of nodes for the duration of a third time period, the third time period starting at the same time as the receiving of the one or more system input(s) comprising data to be processed starts.
According to some embodiments, the method further comprises initializing weights by randomly allocating values between 0 and 1 to the weights and adding a predetermined waveform to the output of one or more of the plurality of nodes for the duration of a third time period.

According to a third aspect there is provided a computer program product comprising a non-transitory computer readable medium, having stored thereon a computer program comprising program instructions, the computer program being loadable into a data processing unit and configured to cause execution of the method of the second aspect or any of the above-mentioned embodiments when the computer program is run by the data processing unit.
Effects and features of the second and third aspects are to a large extent analogous to those described above in connection with the first aspect and vice versa. Embodiments mentioned in relation to the first aspect are largely compatible with the second and third aspects and vice versa.
An advantage of some embodiments is a more efficient processing of the data/information, e.g., during a learning/training mode.
A further advantage of some embodiments is that a more efficient network is provided, e.g., the utilization of available network capacity is maximized, thus providing a more efficient data processing system.
Another advantage of some embodiments is that the system/network is less complex, e.g., having fewer nodes (with the same precision and/or for the same context/input range).
Yet another advantage of some embodiments is a more efficient use of data.
A further advantage of some embodiments is that utilization of available network capacity is improved (e.g., maximized), thus providing a more efficient data processing system.
Yet a further advantage of some embodiments is that the system/network is more efficient and/or that training/learning is shorter/faster.
Another advantage of some embodiments is that a network with lower complexity is provided.
A further advantage of some embodiments is an improved/increased generalization (e.g., across different tasks/contexts).
Yet a further advantage of some embodiments is that the system/network is less sensitive to noise.
Other advantages of some of the embodiments are improved performance, higher/increased reliability, increased precision, increased efficiency (for training and/or performance), faster/shorter training/learning, less computer power needed, less training data needed, less storage space needed, less complexity and/or lower energy consumption.
The present disclosure will become apparent from the detailed description given below. The detailed description and specific examples disclose preferred embodiments of the disclosure by way of illustration only. Those skilled in the art understand from guidance in the detailed description that changes and modifications may be made within the scope of the disclosure.
Hence, it is to be understood that the herein disclosed disclosure is not limited to the particular component parts of the device described or steps of the methods described since such apparatus and method may vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. It should be noted that, as used in the specification and the appended claim, the articles "a", "an", "the", and "said" are intended to mean that there are one or more of the elements unless the context explicitly dictates otherwise. Thus, for example, reference to "a unit" or "the unit" may include several devices, and the like. Furthermore, the words "comprising", "including", "containing" and similar wordings do not exclude other elements or steps.
Brief description of the drawings

The above objects, as well as additional objects, features, and advantages of the present disclosure, will be more fully appreciated by reference to the following illustrative and non-limiting detailed description of example embodiments of the present disclosure, when taken in conjunction with the accompanying drawings.
Figure 1 is a schematic block diagram illustrating a data processing system according to some embodiments;

Figure 2 is a schematic block diagram illustrating a data processing system according to some embodiments;

Figure 3 is a flowchart illustrating method steps according to some embodiments;

Figure 4 is a schematic drawing illustrating an example computer readable medium according to some embodiments;

Figure 5 is a schematic drawing illustrating an updating unit according to some embodiments; and

Figure 6 is a schematic drawing illustrating a compartment according to some embodiments.
Detailed description

The present disclosure will now be described with reference to the accompanying drawings, in which preferred example embodiments of the disclosure are shown. The disclosure may, however, be embodied in other forms and should not be construed as limited to the herein disclosed embodiments. The disclosed embodiments are provided to fully convey the scope of the disclosure to the skilled person.
Terminology

Below, reference is made to a "node". The term "node" may refer to a neuron, such as a neuron of an artificial neural network, another processing element, such as a processor, of a network of processing elements, or a combination thereof. Thus, the term "network" (NW) may refer to an artificial neural network, a network of processing elements or a combination thereof.
Below is referred to a "processing unit". A processing unit may also be referred to as a synapse, such as an input unit (with a processing unit) for a node. However, in some embodiments, the processing unit is a (general) processing unit (other than a synapse) associated with (connected to, connectable to or comprised in) a node of a NW, or a (general) processing unit located between two different nodes of the NW.
Below is referred to "context". A context is the circumstances involved or the situation. Context relates to what type of (input) data is expected, e.g., different types of tasks, where every different task has its own context. As an example, if a system input is pixels from an image sensor, and the image sensor is exposed to different lighting conditions, each different lighting condition may be a different context for an object, such as a ball, a car, or a tree, imaged by the image sensor. As another example, if the system input is audio frequency bands from one or more microphones, each different speaker may be a different context for a phoneme present in one or more ofthe audio frequency bands.
Below is referred to "measurable". The term "measurable" is to be interpreted as something that can be measured or detected, i.e., is detectable. The terms "measure" and "sense" are to be interpreted as synonyms.
Below is referred to "entity". The term entity is to be interpreted as an entity, such as physical entity or a more abstract entity, such as a financial entity, e.g., one or more financial data sets. The term "physical entity" is to be interpreted as an entity that has physical existence, such as an object, a feature (of an object), a gesture, an applied pressure, a speaker, a spoken letter, a syllable, a phoneme, a word, or a phrase.
Below is referred to "updating unit". An updating unit may be an updating module or an updating object. ln the following, embodiments will be described where figure 1 is a schematic block diagram illustrating a data processing system 100 according to some embodiments and figure 2 is a schematic block diagram illustrating a data processing system 100 according to some embodiments. ln some embodiments, the data processing system 100 is a network (NW) or comprises an NW. ln some embodiments, the data processing system 100 is or comprises a deep neural network, a deep belief network, a deep reinforcement learning system, a recurrent neural network, or a convolutional neural network.
The data processing system 100 has, or is configured to have, one or more system input(s) 110a, 110b, ..., 110z. The one or more system input(s) 110a, 110b, ..., 110z comprise data to be processed. The data may be multidimensional, e.g., a plurality of signals provided in parallel. In some embodiments, the system input 110a, 110b, ..., 110z comprises or consists of time-continuous data. In some embodiments, the data to be processed comprises data from sensors, such as image sensors, touch sensors and/or sound sensors (e.g., microphones). Furthermore, in some embodiments, the system input(s) 110a, 110b, ..., 110z comprise sensor data of a plurality of contexts/tasks, e.g., while the data processing system 100 is in a learning mode and/or while the data processing system 100 is in a performance mode.
Furthermore, the data processing system 100 has, or is configured to have, a system output 120. The data processing system 100 comprises a network (NW) 130. The NW 130 comprises a plurality of nodes 130a, 130b, ..., 130x. Each node 130a, 130b, ..., 130x has, or is configured to have, a plurality of inputs 132a, 132b, ..., 132y. In some embodiments, at least one of the plurality of inputs 132a, 132b, ..., 132y is a system input 110a, 110b, ..., 110z. Furthermore, in some embodiments, all of the system inputs 110a, 110b, ..., 110z are utilized as inputs 132a, 132b, ..., 132y to one or more of the nodes 130a, 130b, ..., 130x. Moreover, in some embodiments, each of the nodes 130a, 130b, ..., 130x has one or more system inputs 110a, 110b, ..., 110z as input(s) 132a, 132b, ..., 132y. Each node 130a, 130b, ..., 130x has or comprises a weight Wa, Wb, ..., Wy for each input 132a, 132b, ..., 132y, i.e., each input 132a, 132b, ..., 132y is associated with a respective weight Wa, Wb, ..., Wy. In some embodiments, each weight Wa, Wb, ..., Wy has a value in the range from 0 to 1. Furthermore, the NW 130, or each node thereof, produces, or is configured to produce, an output 134a, 134b, ..., 134x. In some embodiments, each node 130a, 130b, ..., 130x calculates a combination, such as a (linear) sum, a squared sum, or an average, of the inputs 132a, 132b, ..., 132y (to that node) multiplied by a respective weight Wa, Wb, ..., Wy to produce the output(s) 134a, 134b, ..., 134x.
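As a concrete (non-limiting) illustration of the node computation described above, the following Python sketch uses a plain linear sum, one of the combinations mentioned; the class and names are ours and not part of the disclosure:

    import numpy as np

    class Node:
        """Minimal sketch of a node holding one weight per input."""
        def __init__(self, n_inputs: int, rng: np.random.Generator):
            # Each weight has a value in the range from 0 to 1, as above.
            self.weights = rng.uniform(0.0, 1.0, size=n_inputs)

        def output(self, inputs: np.ndarray) -> float:
            # One of the combinations mentioned above: a (linear) sum of
            # the inputs, each multiplied by its respective weight.
            return float(np.dot(self.weights, inputs))

    node = Node(n_inputs=3, rng=np.random.default_rng(0))
    print(node.output(np.array([0.5, 0.2, 0.9])))  # a single node output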
The data processing system 100 comprises one or more updating units 150 configured to update the weights Wa, ..., Wy of each node based on correlation of each respective input 132a, ..., 132c of a node (e.g., 130a) with the corresponding output (e.g., 134a), i.e., with the output of the same node (e.g., 130a), during a learning mode. In some embodiments, there is no updating of weights during a performance mode. In one example, updating of the weights Wa, Wb, Wc is based on correlation of each respective input 132a, ..., 132c to a node 130a with the combined activity of all inputs 132a, ..., 132c to that node 130a, i.e., correlation of each respective input 132a, ..., 132c to a node 130a with the output 134a of that node 130a (as an example for the node 130a and applicable to all other nodes 130b, ..., 130x). Thus, correlation (values) between a first input 132a and the respective output 134a is calculated, correlation (values) between a second input 132b and the respective output 134a is calculated, and correlation (values) between a third input 132c and the respective output 134a is calculated. In some embodiments, the different calculated correlation (series of) values are compared to each other, and the updating of weights is based on this comparison. In some embodiments, updating the weights Wa, ..., Wy of each node based on correlation of each respective input (e.g., 132a, ..., 132c) of a node (e.g., 130a) with the corresponding output (e.g., 134a) comprises evaluating each input (e.g., 132a, ..., 132c) of a node (e.g., 130a) based on a score function. The score function gives an indication of how useful each input (e.g., 132a, ..., 132c) of a node (e.g., 130a) is spatially, e.g., for the corresponding output (e.g., 134a) compared to the other inputs (e.g., 132a, ..., 132c) to that node, and/or temporally, e.g., over the time the data processing system (100) processes the input (e.g., 132a).
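One way to read this correlation-based rule: over a window of samples, correlate each input trace with the node's own output trace, compare the inputs' correlations with each other, and nudge each weight accordingly. The sketch below is one such reading; the window, the gain and the mean-comparison are our assumptions, not specifics from the disclosure:

    import numpy as np

    def update_weights(weights, input_traces, output_trace, gain=0.01):
        """Correlate each input trace with the node's own output trace and
        nudge each weight by how its correlation compares to the others."""
        corrs = np.array([np.corrcoef(trace, output_trace)[0, 1]
                          for trace in input_traces])
        # Inputs more correlated than average with the output are
        # strengthened, less correlated ones weakened (one reading of
        # the comparison described above).
        weights = weights + gain * (corrs - corrs.mean())
        return np.clip(weights, 0.0, 1.0)  # weights stay in [0, 1]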
Furthermore, the data processing system 100 comprises one or more processing units 140x configured to receive a processing unit input 142x and configured to produce a processing unit output 144x by changing the sign of the received processing unit input 142x. In some embodiments, the sign of the received processing unit input 142x is changed by multiplying the processing unit input 142x by -1. However, in other embodiments, the sign of the received processing unit input 142x is changed by phase-shifting the received processing unit input 142x 180 degrees. Alternatively, the sign of the received processing unit input 142x is changed by inverting the sign, e.g., from plus to minus or from minus to plus. The system output 120 comprises the outputs 134a, 134b, ..., 134x of each node 130a, 130b, ..., 130x. In some embodiments, the system output 120 is an array of outputs 134a, 134b, ..., 134x. Furthermore, in some embodiments, the system output 120 is utilized to identify one or more entities or a measurable characteristic (or measurable characteristics) thereof while in a performance mode, e.g., from sensor data.

In some embodiments, the NW 130 comprises only a first group 160 of the plurality of nodes 130a, 130b, ..., 130x (as seen in figure 1). However, in some embodiments the NW 130 comprises a first group 160 of the plurality of nodes 130a, 130b, ..., 130x and a second group 162 of the plurality of nodes 130a, 130b, ..., 130x (as seen in figure 2). Each of the nodes (e.g., 130a, 130b) of the first group 160 of the plurality of nodes (i.e., excitatory nodes) is configured to excite one or more other nodes (e.g., 130x) of the plurality of nodes 130a, 130b, ..., 130x, such as all other nodes 130b, ..., 130x, by providing the output (e.g., 134a, 134b) of each of the nodes (e.g., 130a, 130b) of the first group 160 of nodes (directly) as input (132d, ..., 132y) to the one or more other nodes (e.g., 130x), such as to all other nodes 130b, ..., 130x.
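In code, the sign change performed by a processing unit reduces to a single multiplication. The sketch below shows one reading of the routing between the two groups (inhibition through a processing unit is described just below); function names are ours:

    def processing_unit(x: float) -> float:
        # Produce the processing unit output by changing the sign of the
        # input, here by multiplying by -1 (one of the variants above).
        return -1.0 * x

    def route_output(node_output: float, inhibitory: bool) -> float:
        # First group (excitatory): the output is passed on directly.
        # Second group (inhibitory): the output passes a processing unit.
        return processing_unit(node_output) if inhibitory else node_output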
Furthermore, the nodes (e.g., 130x) of the second group 162 of the plurality of nodes are configured to inhibit one or more other nodes 130a, 130b, ..., such as all other nodes 130a, 130b, ..., of the plurality of nodes 130a, 130b, ..., 130x by providing the output (e.g., 134x) of each of the nodes (e.g., 130x) of the second group 162 as a processing unit input 142x to a respective processing unit (e.g., 140x), each respective processing unit (e.g., 140x) being configured to provide the processing unit output 144x as input (e.g., 132b, 132e) to the one or more other nodes (e.g., 130a, 130b). Each node of the plurality of nodes 130a, 130b, ..., 130x belongs to one of the first and second groups (160, 162) of nodes. Furthermore, as indicated above, in some embodiments, all nodes 130a, 130b, ..., 130x belong to the first group 160 of nodes. In some embodiments, each node 130a, 130b, ..., 130x is configured to either inhibit or excite some/all other nodes 130b, ..., 130x of the plurality of nodes 130a, 130b, ..., 130x by providing the output 134a, 134b, ..., 134x (of each node 130a, 130b, ..., 130x) either multiplied by -1 or directly as an input 132d, ..., 132y to one or more other nodes 130b, ..., 130x. By configuring one group of nodes to inhibit other nodes and another group of nodes to excite other nodes and performing updating based on correlation during the learning mode, a more efficient network may be provided, e.g., the utilization of available network capacity may be maximized, thus providing a more efficient data processing system.

In some embodiments, the updating unit(s) 150 comprises, for each weight Wa, ..., Wy, a probability value Pa, ..., Py for increasing the weight (and possibly a probability value Pad, ..., Pyd for decreasing the weight, which in some embodiments is 1-Pa, ..., 1-Py, i.e., Pad=1-Pa, Pbd=1-Pb etc.). In some embodiments, the updating unit(s) 150 comprises look-up tables (LUTs) for storing the probability values Pa, ..., Py. During the learning mode, the data processing system 100 is configured to limit the ability of a node (e.g., 130a) to inhibit or excite the one or more other nodes (e.g., 130b, ..., 130x) by providing a first set point for a sum of all weights (e.g., Wd, Wy) associated with the inputs (e.g., 132d, ..., 132y) to the one or more other nodes (e.g., 130b, ..., 130x), by comparing the first set point to the sum of all weights associated with those inputs, by decreasing the probability values (e.g., Pd, Py) associated with the weights (e.g., Wd, Wy) for those inputs if the first set point is smaller than the sum, and by increasing the probability values (e.g., Pd, Py) associated with the weights (e.g., Wd, Wy) for those inputs if the first set point is greater than the sum.
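A minimal sketch of this first-set-point mechanism, assuming a fixed adjustment step (the step size is our choice; the disclosure does not specify one):

    import numpy as np

    def limit_node_influence(out_weights, increase_probs, set_point,
                             step=0.01):
        """First set point: compare the sum of the weights a node projects
        onto other nodes against the set point and move the per-weight
        increase-probabilities the opposite way."""
        total = out_weights.sum()
        if set_point < total:    # too much influence: increases less likely
            increase_probs = increase_probs - step
        elif set_point > total:  # too little influence: increases more likely
            increase_probs = increase_probs + step
        return np.clip(increase_probs, 0.0, 1.0)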
Furthermore, in some embodiments, the data processing system 100 is, during the learning mode, configured to limit the ability of a system input (e.g., 110z) to inhibit or excite one or more nodes (e.g., 130b, 130x) by providing the first set point for a sum of all weights (e.g., Wg, Wx) associated with the inputs (e.g., 132g, 132x) to the one or more nodes (e.g., 130b, 130x), by comparing the first set point to the sum of all weights (e.g., Wg, Wx) associated with the inputs (e.g., 132g, 132x) to the one or more nodes (e.g., 130b, 130x), by, if the first set point is smaller than the sum of all weights (e.g., Wg, Wx) associated with the inputs (e.g., 132g, 132x) to the one or more nodes (e.g., 130b, 130x) decreasing the probability values (e.g., Pg, Px) associated with the weights (e.g., Wg, Wx) associated with the inputs (e.g., 132g, 132x) to the one or more nodes (e.g., 130b, 130x) and by, if the first set point is greater than the sum of all weights (e.g., Wg, Wx) associated with the inputs (e.g., 132g, 132x) to the one or more nodes (e.g., 130b, 130x) increasing the probability values (e.g., Pg, Px) associated with the weights (e.g., Wg, Wx) for (associated with) the inputs (e.g., 132g, 132x) to the one or more nodes (e.g., 130b, 130x).
Moreover, in some embodiments, each of the inputs (e.g., 132d, 132y) to the one or more other nodes (e.g., 130b, 130x) has a coordinate in a network space, and an amount of decreasing/increasing the weights (e.g., Wd, Wy) of the inputs (e.g., 132d, 132y) to the one or more other nodes (e.g., 130b, 130x) is based on a distance between the coordinates of the inputs (e.g., 132d, 132y) associated with the weights (e.g., Wd, Wy) in the network space. In these embodiments, the decreasing/increasing of the weights is based on the probability (indicated by the probability values) of decreasing/increasing the weights and based on the amount to decrease/increase the weights (which is calculated based on the distance in the network space between the coordinates of the inputs).

In some embodiments, the data processing system 100 is (further) configured to set a weight Wa, ..., Wy (e.g., any of one or more of the weights) to zero if the weight Wa, ..., Wy (in question) does not increase over a (first) pre-set period of time. Furthermore, in some embodiments, the data processing system 100 is (further) configured to increase the probability value Pa, ..., Py of a weight Wa, ..., Wy having a zero value if the sum of all weights (e.g., Wd, Wy) associated with the inputs (e.g., 132d, 132y) to the one or more other nodes (e.g., 130b, 130x) does not exceed the first set point for a (second) pre-set period of time.

In some embodiments, the data processing system 100 is, during the learning mode, configured to increase the relevance of the output (e.g., 134a) of a node (e.g., 130a) to the one or more other nodes (e.g., 130b, 130x) by providing a first set point for a sum of all weights (e.g., Wd, Wy) associated with the inputs (e.g., 132d, 132y) to the one or more other nodes (e.g., 130b, 130x), by comparing the first set point to the sum of all weights associated with those inputs over a first time period, by increasing the probability of changing the weights (e.g., Wa, Wb, Wc) of the inputs (e.g., 132a, 132b, 132c) to the node (e.g., 130a) if the first set point is smaller than the sum over the entire length of the first time period, and by decreasing the probability of changing the weights (e.g., Wa, Wb, Wc) of the inputs (e.g., 132a, 132b, 132c) to the node (e.g., 130a) if the first set point is greater than the sum over the entire length of the first time period (and, in the rare occasion that the first set point is neither smaller nor greater than the sum during the entire length of the first time period, leaving the probability of changing the weights unchanged).
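The distance dependence described at the start of this passage suggests that the probability values decide whether a weight changes, while the network-space geometry decides by how much. A sketch, assuming one-dimensional coordinates and an exponential decay with distance (the disclosure fixes neither):

    import numpy as np

    def weight_change_amounts(coords, base_amount=0.05, length_scale=1.0):
        """Scale each input's weight-change amount by its distance to the
        other inputs in the network space; the disclosure only says the
        amount is based on distance between the input coordinates."""
        coords = np.asarray(coords, dtype=float)  # one coordinate per input
        # Mean distance from each input to all inputs in network space.
        dists = np.abs(coords[:, None] - coords[None, :]).mean(axis=1)
        return base_amount * np.exp(-dists / length_scale)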
Furthermore, in some embodiments, the updating unit(s) 150 comprises, for each weight Wa, ..., Wy, a probability value Pa, ..., Py for increasing the weight (and possibly a probability value Pad, ..., Pyd for decreasing the weight, which in some embodiments is 1-Pa, ..., 1-Py, i.e., Pad=1-Pa, Pbd=1-Pb etcetera). In these embodiments, during the learning mode, the data processing system 100 is configured to provide a second set point for a sum of all weights Wa, Wb, Wc associated with the inputs 132a, 132b, 132c to a node 130a, to calculate the sum of all weights Wa, Wb, Wc associated with the inputs 132a, 132b, 132c to the node 130a, to compare the calculated sum to the second set point, to decrease the probability values Pa, Pb, Pc associated with the weights Wa, Wb, Wc for those inputs if the calculated sum is greater than the second set point, and to increase the probability values Pa, Pb, Pc associated with the weights Wa, Wb, Wc for those inputs if the calculated sum is smaller than the second set point (as an example for the node 130a and also applicable to all other nodes 130b, ..., 130x).
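The second set point plays the mirror-image role on a node's own incoming weights; combined with the first set point, both what a node receives and what it sends out are held near target levels. A corresponding sketch, again with an invented step size:

    import numpy as np

    def regulate_incoming(in_weights, increase_probs, second_set_point,
                          step=0.01):
        """Second set point: if the summed weights on a node's own inputs
        exceed the set point, lower the increase-probabilities for those
        weights; if the sum falls short, raise them."""
        total = in_weights.sum()
        if total > second_set_point:
            increase_probs = increase_probs - step
        elif total < second_set_point:
            increase_probs = increase_probs + step
        return np.clip(increase_probs, 0.0, 1.0)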
Moreover, in some embodiments, during the learning mode, the data processing system 100 is configured to detect whether the network 130 is sparsely connected by comparing an accumulated weight change for the system input(s) 110a, 110b, ..., 110z to a threshold value over a second time period. The accumulated weight change is the change of the weights Wa, Wf, Wg, Wx associated with the system input(s) 110a, 110b, ..., 110z over the second time period. The second time period may be a predetermined time period. If the accumulated weight change is greater than the threshold value, it is determined that the network 130 is sparsely connected. Furthermore, the data processing system 100 is configured to, if the data processing system 100 detects that the network 130 is sparsely connected, increase the output 134a, 134b, ..., 134x of one or more of the plurality of nodes 130a, 130b, ..., 130x by adding a predetermined waveform to the output 134a, 134b, ..., 134x of one or more of the plurality of nodes 130a, 130b, ..., 130x for the duration of a third time period. The third time period may be a predetermined time period. By adding a predetermined waveform to the output 134a, 134b, ..., 134x of one or more of the plurality of nodes 130a, 130b, ..., 130x for the duration of a third time period, nodes may be better grouped together.
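A sketch of the sparse-connectivity check and the waveform injection; the sine waveform, its amplitude and its frequency are our assumptions, as the disclosure only speaks of a predetermined waveform:

    import numpy as np

    def nudge_if_sparse(accumulated_weight_change, threshold, outputs, t,
                        amplitude=0.1, freq_hz=1.0):
        """If the accumulated weight change over the second time period
        indicates a sparsely connected network (per the criterion above),
        add a waveform to the node outputs for the third time period."""
        if accumulated_weight_change > threshold:  # sparsely connected
            outputs = outputs + amplitude * np.sin(2 * np.pi * freq_hz * t)
        return outputs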
Moreover, in some embodiments, each node comprises an updating unit 150. Each updating unit 150 is configured to update the weights Wa, Wb, Wc of the respective node 130a based on correlation of each respective input 132a, ..., 132c of the node 130a with the output 134a of that node 130a. Furthermore, each updating unit 150 is configured to apply a first function to the correlation if the associated node belongs to the first group 160 of the plurality of nodes and apply a second function, different from the first function, to the correlation if the associated node belongs to the second group 162 of the plurality of nodes in order to update the weights Wa, Wb, Wc during the learning mode (as an example for the node 130a and also applicable to all other nodes 130b, ..., 130x). In some embodiments, the first (learning) function is a function in which, if the input, i.e., the correlation (value), is increased, the output, i.e., a weight change (value), is exponentially increased, and vice versa (decreased input gives exponentially decreased output). In some embodiments, the second (learning) function is a function in which, if the input, i.e., the correlation (value), is increased, the output, i.e., a weight change (value), is exponentially decreased, and vice versa (decreased input gives exponentially increased output).

In some embodiments, the data processing system 100 is configured to, after updating of the weights Wa, ..., Wy has been performed, calculate a population variance of the outputs 134a, 134b, ..., 134x of the nodes 130a, 130b, ..., 130x of the network, compare the calculated population variance to a power law, and minimize an error, such as a mean absolute error or a mean squared error, between the population variance and the power law by adjusting parameters of the network. Thus, the population variance of the outputs 134a, 134b, ..., 134x of the nodes 130a, 130b, ..., 130x of the network may be distributed closely to the power law. Thereby, optimal resource utilization is achieved and/or every node is enabled to contribute optimally, thus providing more efficient utilization of data. The power law may, for example, be based on the log of the amount of variance explained against the log of the number of components resulting from a principal component analysis. In another example, a power law is based on a principal component analysis of limited time vectors of activity/output across all neurons, where each principal component number in the abscissa is replaced with a node number. It is assumed that the input data that the system is exposed to has a higher number of principal components than there are nodes. In such a case, when a power law is followed, each node added to the system potentially extends the maximal capacity of the system. Examples of parameters (that can be adjusted) for the network include: the type of scaling of the learning (how the weights are composed, the range of the weights etc.), the induced change in synaptic weight when updated (e.g., exponentially, linearly), the amount of gain in the learning, the time constants of the state memory of the nodes, the specific learning functions, the transfer functions for each node, the total capacity of the connections between nodes and sensors, and the total capacity of nodes across all nodes.
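One possible reading of the power-law comparison, using a principal component analysis of the node outputs and a mean squared error in log-log space; the exponent alpha and the normalization are our assumptions:

    import numpy as np

    def power_law_error(node_output_traces, alpha=1.0):
        """Compare the variance spectrum of the node outputs to a power
        law with exponent alpha and return a mean squared error in
        log-log space, following the PCA-based example above."""
        traces = np.asarray(node_output_traces)  # (n_nodes, n_samples)
        centered = traces - traces.mean(axis=1, keepdims=True)
        # Principal component variances = eigenvalues of the covariance
        # matrix of node activity, sorted in descending order.
        eigvals = np.linalg.eigvalsh(np.cov(centered))[::-1]
        eigvals = eigvals[eigvals > 0]
        n = np.arange(1, len(eigvals) + 1)
        target = eigvals[0] * n ** (-alpha)  # ideal power-law spectrum
        return float(np.mean((np.log(eigvals) - np.log(target)) ** 2))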
Furthermore, in some embodiments, the data processing system 100 is configured to learn, from the sensor data, to identify one or more entities or a measurable characteristic (or measurable characteristics) thereof while in a learning mode and thereafter configured to identify the one or more entities or a measurable characteristic (or measurable characteristics) thereof while in a performance mode, e.g., from sensor data. In some embodiments, the identified entity is one or more of a speaker, a spoken letter, syllable, phoneme, word, or phrase present in the (audio) sensor data, an object or a feature of an object present in the sensor data (e.g., pixels), or a new contact event, the end of a contact event, a gesture, or an applied pressure present in the (touch) sensor data. Although, in some embodiments, all the sensor data is a specific type of sensor data, such as audio sensor data, image sensor data or touch sensor data, in other embodiments, the sensor data is a mix of different types of sensor data, such as audio sensor data, image sensor data and touch sensor data, i.e., the sensor data comprises different modalities. In some embodiments, the data processing system 100 is configured to learn, from the sensor data, to identify a measurable characteristic (or measurable characteristics) of an entity. A measurable characteristic may be a feature of an object, a part of a feature, a temporally evolving trajectory of positions, a trajectory of applied pressures, or a frequency signature or a temporally evolving frequency signature of a certain speaker when speaking a certain letter, syllable, phoneme, word, or phrase. Such a measurable characteristic may then be mapped to an entity. For example, a feature of an object may be mapped to an object, a part of a feature may be mapped to a feature (of an object), a trajectory of positions may be mapped to a gesture, a trajectory of applied pressures may be mapped to a (largest) applied pressure, a frequency signature of a certain speaker may be mapped to the speaker, and a spoken letter, syllable, phoneme, word or phrase may be mapped to an actual letter, syllable, phoneme, word or phrase. Such mapping may simply be a look-up in a memory, a look-up table or a database. The look-up may be based on finding, among a plurality of physical entities, the entity whose characteristic is closest to the measurable characteristic identified. From such a look-up, the actual entity may be identified. Furthermore, the data processing system 100 may be utilized in a warehouse, e.g., as part of a fully automatic warehouse (machine), in robotics, e.g., connected to robotic actuators (or robotics control circuits) via middleware (for connecting the data processing system 100 to the actuators), or in a system together with low-complexity event-based cameras, whereby triggered data from the event-based cameras may be directly fed/sent to the data processing system 100.
Figure 3 is a flowchart illustrating example method steps according to some embodiments. Figure 3 shows a computer-implemented or hardware-implemented method 300 for processing data. The method may be implemented in analog hardware/electronic circuits, in digital circuits, e.g., gates and flip-flops, in mixed-signal circuits, in software, or in any combination thereof. In some embodiments, the method 300 comprises entering a learning mode. Alternatively, the method 300 comprises providing an already trained data processing system 100. In this case, the steps 370 and 380 (steps g and h) are not performed. The method 300 comprises receiving 310 one or more system input(s) 110a, 110b, ..., 110z comprising data to be processed. Furthermore, the method 300 comprises providing 320 a plurality of inputs 132a, 132b, ..., 132y, at least one of the plurality of inputs being a system input, to a network, NW, 130 comprising a plurality of first nodes 130a, 130b, ..., 130x. Moreover, the method 300 comprises receiving 330 an output 134a, 134b, ..., 134x from each first node 130a, 130b, ..., 130x. The method 300 comprises providing 340 a system output 120 comprising the output 134a, 134b, ..., 134x of each first node 130a, 130b, ..., 130x. Furthermore, the method 300 comprises exciting 350, by nodes 130a, 130b of a first group 160 of the plurality of nodes, one or more other nodes ..., 130x of the plurality of nodes 130a, 130b, ..., 130x by providing the output 134a, 134b of each of the nodes 130a, 130b of the first group 160 of nodes as input 132d, ..., 132y to the one or more other nodes ..., 130x. Moreover, the method 300 comprises inhibiting 360, by nodes 130x of a second group 162 of the plurality of nodes, one or more other nodes 130a, 130b, ... of the plurality of nodes 130a, 130b, ..., 130x by providing the output 134x of each of the nodes 130x of the second group 162 as a processing unit input 142x to a respective processing unit 140x, each respective processing unit 140x being configured to provide the processing unit output 144x as input 132b, 132e, ... to the one or more other nodes 130a, 130b, .... The method 300 comprises updating 370, by one or more updating units 150, weights Wa, ..., Wy based on correlation (during the learning mode and as described in connection with figures 1 and 2 above). Furthermore, the method 300 comprises repeating 380 (during the learning mode) the steps 310, 320, 330, 340, 350, 360 and 370 (described above) until a learning criterion is met (thus exiting the learning mode when the learning criterion is met). In some embodiments, the learning criterion is that the data processing system 100 is fully trained. In some embodiments, the learning criterion is that the weights Wa, Wb, ..., Wy converge and/or that an overall error goes below an error threshold. In some embodiments, the method 300 comprises entering a performance/identification mode. Moreover, the method 300 comprises repeating 390 (during the performance/identification mode) the steps 310, 320, 330, 340, 350 and 360 (described above) until a stop criterion is met (thus exiting the performance/identification mode when the stop criterion is met). A stop criterion/condition may be that all data to be processed have been processed or that a specific amount of data/number of loops has been processed/performed. Alternatively, the stop criterion is that the whole data processing system 100 is turned off.
As another alternative, the stop criterion is that the data processing system 100 (or a user of the system 100) has discovered that the data processing system 100 needs to be further trained. In this case, the data processing system 100 enters/re-enters the learning mode (and performs the steps 310, 320, 330, 340, 350, 360, 370, 380 and 390). Each node of the plurality of nodes 130a, 130b, ..., 130x belongs to one of the first and second groups 160, 162 of nodes. In some embodiments, the method 300 comprises initializing 304 weights Wa, ..., Wy by setting the weights Wa, ..., Wy to zero. Alternatively, the method 300 comprises initializing 306 the weights Wa, ..., Wy by randomly allocating values between 0 and 1 to the weights Wa, ..., Wy. Furthermore, in some embodiments the method 300 comprises adding 308 a predetermined waveform to the output 134a, 134b, ..., 134x of one or more of the plurality of nodes 130a, 130b, ..., 130x for the duration of a third time period. In some embodiments, the third time period starts simultaneously with receiving 310 one or more system input(s) 110a, 110b, ..., 110z comprising data to be processed.
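Putting the method steps together, the overall control flow might look like the following sketch; the system object and the criterion callables are placeholders of our own, not interfaces defined in the disclosure:

    def run(system, data_stream, learning_criterion, stop_criterion):
        """Steps a)-g) repeat during the learning mode until the learning
        criterion is met (h); steps a)-f) then repeat in the performance
        mode until the stop criterion is met (i)."""
        outputs = None
        for batch in data_stream:            # a) receive system input(s)
            outputs = system.forward(batch)  # b)-f) propagate, excite, inhibit
            system.update_weights()          # g) correlation-based updating
            if learning_criterion(system):   # h) e.g. weights converge or an
                break                        #    error drops below a threshold
        for batch in data_stream:            # i) performance mode: no updating
            outputs = system.forward(batch)
            if stop_criterion(system):
                break
        return outputs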
According to some embodiments, a computer program product comprises a non-transitory computer readable medium 400 such as, for example, a universal serial bus (USB) memory, a plug-in card, an embedded drive, a digital versatile disc (DVD) or a read-only memory (ROM). Figure 4 illustrates an example computer readable medium in the form of a compact disc (CD) ROM 400. The computer readable medium has stored thereon a computer program comprising program instructions. The computer program is loadable into a data processor (PROC) 420, which may, for example, be comprised in a computer or a computing device 410. When loaded into the data processing unit, the computer program may be stored in a memory (MEM) 430 associated with or comprised in the data processing unit. According to some embodiments, the computer program may, when loaded into and run by the data processing unit, cause execution of method steps according to, for example, the method illustrated in figure 3, which is described herein.
Figure 5 illustrates an updating unit according to some embodiments. The updating unit 150a is for the node 130a. However, all updating units 150, 150a (for all nodes) are the same or similar. The updating unit 150a receives each respective input 132a, ..., 132c of the node 130a (or of all nodes if it is a central updating unit 150). Furthermore, the updating unit 150a receives the output 134a of the node 130a (or of all nodes if it is a central updating unit 150). Moreover, the updating unit 150a comprises a correlator 152a. The correlator 152a calculates correlation of each respective input 132a, ..., 132c of the node 130a with the corresponding output 134a during a learning mode, thus producing (a series of) correlation values for each of the inputs 132a, ..., 132c. In some embodiments, the different calculated (series of) correlation values are compared to each other (to produce correlation ratio values) and the updating of weights is based on this comparison. Furthermore, in some embodiments, the updating unit 150a is configured to apply a first function 154 to the correlation (values, ratio values) if the node 130a belongs to the first group 160 of the plurality of nodes and apply a second function 156, different from the first function, to the correlation (values, ratio values) if the node 130a belongs to the second group 162 of the plurality of nodes in order to update the weights Wa, Wb, Wc during the learning mode. In some embodiments, the updating unit 150a keeps track of whether a node belongs to the first or second group 160, 162 by utilizing look-up tables (LUTs). Moreover, in some embodiments, the updating unit 150a comprises, for each weight Wa, Wb, Wc, a probability value Pa, Pb, Pc for increasing the weight. In some embodiments, the updating unit 150a comprises, for each weight Wa, Wb, Wc, a probability value Pad, Pbd, Pcd for decreasing the weight, which in some embodiments is 1-Pa, 1-Pb, 1-Pc, i.e., Pad=1-Pa, Pbd=1-Pb and Pcd=1-Pc. In some embodiments, the probability values Pa, Pb, Pc and optionally the probability values Pad, Pbd, Pcd are comprised in a memory unit 158a of the updating unit 150a. In some embodiments, the memory unit 158a is a look-up table (LUT). In some embodiments, the updating unit 150a applies one of the first and second functions and/or the probability values Pa, Pb, Pc and optionally the probability values Pad, Pbd, Pcd to the calculated (series of) correlation values (or the produced correlation ratio values) to obtain an update signal 159, which is then applied to the weights Wa, Wb, Wc, thereby updating the weights Wa, Wb, Wc. The function/structure of the updating units 150b, ..., 150x for the other nodes is the same as for the updating unit 150a. Furthermore, in some embodiments, a central updating unit 150 comprises each of the updating units for each of the nodes 130a, 130b, ..., 130x.
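As a concrete illustration of the updating unit 150a described above, the following sketch correlates each input trace with the output trace, applies a group-dependent function, and gates the update by the probability values Pa, Pb, Pc. The particular choice of first and second functions (positive and negative rectification here), the learning rate and the random gating are assumptions for the example, not the disclosed update rules.

```python
import numpy as np

def update_node_weights(inputs, outputs, weights, p_inc, excitatory, lr=0.01):
    """Sketch of an updating unit 150a.

    inputs:  (T, n) array; input traces 132a, ..., 132c over T time steps
    outputs: (T,)   array; the node's output trace 134a
    weights: (n,)   current weights Wa, Wb, Wc
    p_inc:   (n,)   probability values Pa, Pb, Pc for increasing each weight
    """
    rng = np.random.default_rng()
    # Correlator 152a: correlation of each input with the corresponding output.
    corr = np.array([np.corrcoef(inputs[:, i], outputs)[0, 1]
                     for i in range(inputs.shape[1])])
    corr = np.nan_to_num(corr)  # guard against constant traces
    # First function 154 (first group 160) vs second function 156 (second group 162);
    # rectification is an assumed placeholder for these two functions.
    drive = np.maximum(corr, 0.0) if excitatory else np.maximum(-corr, 0.0)
    # Probabilistic gating: increase with probability Pa, ..., else decrease (Pad = 1 - Pa).
    increase = rng.random(len(weights)) < p_inc
    update_signal = np.where(increase, lr * drive, -lr * drive)  # update signal 159
    return weights + update_signal
```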
Figure 6 illustrates a compartment according to some embodiments. In some embodiments, each node 130a, 130b, ..., 130x comprises a plurality of compartments 900. Each compartment is configured to have a plurality of compartment inputs 910a, 910b, ..., 910x. Furthermore, each compartment 900 comprises a compartment weight 920a, 920b, ..., 920x for each compartment input 910a, 910b, ..., 910x. Moreover, each compartment 900 is configured to produce a compartment output 940. The compartment output 940 is in some embodiments calculated, by the compartment, as a combination, such as a sum, of all weighted compartment inputs 930a, 930b, ..., 930x to that compartment. For calculating the sum/combination, the compartment may be equipped with a summer (or adder/summation unit) 935. Each compartment 900 comprises an updating unit 995 configured to update the compartment weights 920a, 920b, ..., 920x based on correlation during the learning mode (in the same manner as described for the updating unit 150a above in connection with figure 5 and elsewhere, and may for one or more compartments comprise evaluating each input of a node based on a score function). Furthermore, the compartment output 940 of each compartment is utilized to adjust the output 134a, 134b, ..., 134x (e.g., 134a) of the node 130a, 130b, ..., 130x (e.g., 130a) the compartment 900 is comprised in based on a transfer function. Examples of transfer functions that can be utilized are one or more of a time constant, such as an RC filter, a resistor, a spike generator, and an active element, such as a transistor or an op-amp. The compartments 900a, 900b, ..., 900x may comprise sub-compartments 900aa, 900ab, ..., 900ba, 900bb, ..., 900xx. Thus, each compartment 900a, 900b, ..., 900x may have sub-compartments 900aa, 900ab, ..., 900ba, 900bb, ..., sub-sub-compartments etc., which function in the same manner as the compartments, i.e., the compartments are cascaded. The number of compartments (and sub-compartments) for a specific node is based on the types of inputs, such as inhibitory input, sensor input and excitatory input, to that specific node. Furthermore, the compartments 900 of a node are arranged so that each compartment 900 has a majority of one of the types of input (e.g., inhibitory input, sensor input or excitatory input). Thus, none of the types of input (e.g., inhibitory input, sensor input or excitatory input) is allowed to become too dominant. In some embodiments, still referring to figure 6, the updating unit 995 of each compartment 900 comprises, for each compartment weight 920a, 920b, ..., 920x, a probability value PCa, ..., PCy for increasing the weight (and possibly a probability value PCad, ..., PCyd for decreasing the weight, which in some embodiments is 1-PCa, ..., 1-PCy, i.e., PCad=1-PCa, PCbd=1-PCb etc.).
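The compartment structure of figure 6 can be summarized in code as follows. The discrete RC-style filter used as the transfer function is only one of the examples named above, and the compartment input and weight values are hypothetical.

```python
import numpy as np

def compartment_output(comp_inputs, comp_weights):
    """Compartment 900: the summer 935 combines the weighted compartment inputs 930."""
    return float(np.dot(comp_weights, comp_inputs))

def node_output(compartments, state=0.0, tau=0.9):
    """Adjust the node output 134a based on a transfer function; a discrete
    RC-style filter (a time constant) is assumed here."""
    total = sum(compartment_output(ci, cw) for ci, cw in compartments)
    return tau * state + (1.0 - tau) * total

# A hypothetical node with one mostly-excitatory and one mostly-inhibitory
# compartment, so that no single input type becomes too dominant.
excitatory_comp = (np.array([0.5, 0.8, 0.1]), np.array([0.3, 0.4, 0.2]))
inhibitory_comp = (np.array([-0.6, -0.2, 0.1]), np.array([0.5, 0.1, 0.1]))
print(node_output([excitatory_comp, inhibitory_comp]))
```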
In these embodiments, during the learning mode, the data processing system 100 is configured to provide a third set point for a sum of all compartment weights 920a, 920b, ..., 920x associated with the compartment inputs 910a, 910b, ..., 910x to a compartment 900, configured to calculate the sum of all compartment weights 920a, 920b, ..., 920x associated with the compartment inputs 910a, 910b, ..., 910x to the compartment 900, and configured to compare the calculated sum to the third set point. If the calculated sum is greater than the third set point, the data processing system 100 is configured to decrease the probability values PCa, ..., PCy associated with the compartment weights 920a, 920b, ..., 920x for (associated with) the compartment inputs 910a, 910b, ..., 910x to the compartment 900, and if the calculated sum is smaller than the third set point, it is configured to increase the probability values PCa, ..., PCy associated with the weights 920a, 920b, ..., 920x for (associated with) the compartment inputs 910a, 910b, ..., 910x to the compartment 900. The third set point is based on a type of input, such as system input (sensor input), input from a node of the first group 160 of the plurality of nodes (excitatory input) or input from a node of the second group 162 of the plurality of nodes (inhibitory input).
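The set-point regulation described above can be sketched as a simple homeostatic rule. The step size and the clipping of the probabilities to [0, 1] are assumptions, and the per-type set points shown are hypothetical values.

```python
import numpy as np

def regulate_probabilities(comp_weights, p_inc, set_point, delta=0.01):
    """Compare the sum of a compartment's weights 920 to the third set point and
    adjust the increase-probabilities PCa, ..., PCy accordingly."""
    total = comp_weights.sum()
    if total > set_point:    # sum too large: decrease the probability values
        p_inc = p_inc - delta
    elif total < set_point:  # sum too small: increase the probability values
        p_inc = p_inc + delta
    return np.clip(p_inc, 0.0, 1.0)

# The third set point is based on the type of input; hypothetical values per type.
SET_POINTS = {"sensor": 1.0, "excitatory": 2.0, "inhibitory": 1.5}
p = regulate_probabilities(np.array([0.7, 0.9, 0.6]),
                           np.array([0.5, 0.5, 0.5]),
                           SET_POINTS["excitatory"])
print(p)  # [0.49 0.49 0.49], since the weight sum 2.2 exceeds the set point 2.0
```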
The person skilled in the art realizes that the present disclosure is not limited to the preferred embodiments described above. The person skilled in the art further realizes that modifications and variations are possible within the scope of the appended claims. For example, signals from other sensors, such as aroma sensors or flavor sensors, may be processed by the data processing system. Moreover, the data processing system described may equally well be utilized for unsegmented, connected handwriting recognition, speech recognition, speaker recognition and anomaly detection in network traffic or intrusion detection systems (IDSs). Additionally, variations to the disclosed embodiments can be understood and effected by the skilled person in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims.

Claims (21)

Claims
1. A data processing system (100), configured to have one or more system input(s) (110a, 110b, ..., 110z) comprising data to be processed and a system output (120), comprising: a network, NW, (130) comprising a plurality of nodes (130a, 130b, ..., 130x), each node configured to have a plurality of inputs (132a, 132b, ..., 132y), each node (130a, 130b, ..., 130x) comprising a weight (Wa, ..., Wy) for each input (132a, 132b, ..., 132y), and each node configured to produce an output (134a, 134b, ..., 134x); and one or more updating units (150) configured to update the weights (Wa, ..., Wy) of each node based on correlation of each respective input (132a, ..., 132c) of the node (130a) with the corresponding output (134a) during a learning mode; one or more processing units (140x) configured to receive a processing unit input and configured to produce a processing unit output by changing the sign of the received processing unit input; and wherein the system output (120) comprises the outputs (134a, 134b, ..., 134x) of each node (130a, 130b, ..., 130x), wherein nodes (130a, 130b) of a first group (160) of the plurality of nodes are configured to excite one or more other nodes (..., 130x) of the plurality of nodes (130a, 130b, ..., 130x) by providing the output (134a, 134b) of each of the nodes (130a, 130b) of the first group (160) of nodes as input (132d, ..., 132y) to the one or more other nodes (..., 130x), wherein nodes (130x) of a second group (162) of the plurality of nodes are configured to inhibit one or more other nodes (130a, 130b, ...) of the plurality of nodes (130a, 130b, ..., 130x) by providing the output (134x) of each of the nodes (130x) of the second group (162) as a processing unit input to a respective processing unit (140x), each respective processing unit (140x) being configured to provide the processing unit output as input (132b, 132e, ...) to the one or more other nodes (130a, 130b, ...), and wherein each node of the plurality of nodes (130a, 130b, ..., 130x) belongs to one of the first and second groups (160, 162) of nodes.
2. The data processing system of claim 1, wherein the system input(s) comprises sensor data of a plurality of contexts/tasks.
3. The data processing system of any of claims 1-2, wherein the updating unit (150) comprises, for each weight (Wa, ..., Wy), a probability value (Pa, ..., Py) for increasing the weight, and wherein, during the learning mode, the data processing system is configured to limit the ability of a node (130a) to inhibit or excite the one or more other nodes (130b, ..., 130x) by providing a first set point for a sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x), comparing the first set point to the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x), if the first set point is smaller than the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x), decreasing the probability values (Pd, Py) associated with the weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x), and if the first set point is greater than the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x), increasing the probability values (Pd, Py) associated with the weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x).
4. The data processing system of any of claims 1-3, wherein, during the learning mode, the data processing system is configured to limit the ability of a system input (110z) to inhibit or excite one or more nodes (130a, ..., 130x) by providing the first set point for a sum of all weights (Wg, Wx) associated with the inputs (132g, 132x) to the one or more nodes (130a, ..., 130x), comparing the first set point to the sum of all weights (Wg, Wx) associated with the inputs (132g, 132x) to the one or more nodes (130a, ..., 130x), if the first set point is smaller than the sum of all weights (Wg, Wx) associated with the inputs (132g, 132x) to the one or more nodes (130a, ..., 130x), decreasing the probability values (Pg, Px) associated with the weights (Wg, Wx) associated with the inputs (132g, 132x) to the one or more nodes (130a, ..., 130x), and if the first set point is greater than the sum of all weights (Wg, Wx) associated with the inputs (132g, 132x) to the one or more nodes (130a, ..., 130x), increasing the probability values (Pg, Px) associated with the weights (Wg, Wx) associated with the inputs (132g, 132x) to the one or more nodes (130a, ..., 130x).
5. The data processing system of any of claims 3-4, wherein each of the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x) has a coordinate in a network space, wherein an amount of decreasing/increasing the weights (Wd, Wy) of the inputs (132d, 132y) to the one or more other nodes (130b, ..., 130x) is based on a distance between the coordinates of the inputs (132d, 132y) associated with the weights (Wd, Wy) in the network space.
6. The data processing system of any of claims 3-5, wherein the system is further configured to set a weight (Wa, ..., Wy) to zero if the weight (Wa, ..., Wy) does not increase over a pre-set period of time; and/or wherein the system is further configured to increase the probability value (Pa, ..., Py) of a weight (Wa, ..., Wy) having a zero value if the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x) does not exceed the first set point for a pre-set period of time.
7. The data processing system of any of claims 1-2, wherein, during the learning mode, the data processing system is configured to increase the relevance of the output (134a) of a node (130a) to the one or more other nodes (130b, ..., 130x) by providing a first set point for a sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x), comparing the first set point to the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x) over a first time period, if the first set point is smaller than the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x) over the entire length of the first time period, increasing the probability of changing the weights (Wa, Wb, Wc) of the inputs (132a, 132b, 132c) to the node (130a), and if the first set point is greater than the sum of all weights (Wd, Wy) associated with the inputs (132d, ..., 132y) to the one or more other nodes (130b, ..., 130x) over the entire length of the first time period, decreasing the probability of changing the weights (Wa, Wb, Wc) of the inputs (132a, 132b, 132c) to the node (130a).
8. The data processing system of any of claims 1-2, wherein the updating unit (150) comprises, for each weight (Wa, ..., Wy), a probability value (Pa, ..., Py) for increasing the weight, and wherein, during the learning mode, the data processing system is configured to provide a second set point for a sum of all weights (Wa, Wb, Wc) associated with the inputs (132a, 132b, 132c) to a node (130a), configured to calculate the sum of all weights (Wa, Wb, Wc) associated with the inputs (132a, 132b, 132c) to the node (130a), configured to compare the calculated sum to the second set point and, if the calculated sum is greater than the second set point, configured to decrease the probability values (Pa, Pb, Pc) associated with the weights (Wa, Wb, Wc) associated with the inputs (132a, 132b, 132c) to the node (130a) and, if the calculated sum is smaller than the second set point, configured to increase the probability values (Pa, Pb, Pc) associated with the weights (Wa, Wb, Wc) associated with the inputs (132a, 132b, 132c) to the node (130a).
9. The data processing system of any of claims 1-2, wherein each node (130a, 130b, ..., 130x) comprises a plurality of compartments (900), each compartment being configured to have a plurality of compartment inputs (910a, 910b, ..., 910x), each compartment (900) comprising a compartment weight (920a, 920b, ..., 920x) for each compartment input (910a, 910b, ..., 910x), and each compartment (900) being configured to produce a compartment output (940), and wherein each compartment (900) comprises an updating unit (995) configured to update the compartment weights (920a, 920b, ..., 920x) based on correlation during the learning mode and wherein the compartment output (940) of each compartment is utilized to adjust the output (134a, 134b, ..., 134x) of the node (130a, 130b, ..., 130x) the compartment is comprised in based on a transfer function.
10. The data processing system of claim 9, wherein the updating unit (995) of each compartment (900) comprises, for each compartment weight (920a, 920b, ..., 920x), a probability value (PCa, ..., PCy) for increasing the weight, and wherein, during the learning mode, the data processing system is configured to provide a third set point for a sum of all compartment weights (920a, 920b, ..., 920x) associated with the compartment inputs (910a, 910b, ..., 910x) to a compartment (900), configured to calculate the sum of all compartment weights (920a, 920b, ..., 920x) associated with the compartment inputs (910a, 910b, ..., 910x) to the compartment (900), configured to compare the calculated sum to the third set point and if the calculated sum is greater than the third set point, configured to decrease the probability values (PCa, ..., PCy) associated with the compartment weights (920a, 920b, ..., 920x) associated with the compartment inputs (910a, 910b, ..., 910x) to the compartment (900) and if the calculated sum is smaller than the third set point, configured to increase the probability values (PCa, ..., PCy) associated with the weights (920a, 920b, ..., 920x) associated with the compartment inputs (910a, 910b, ..., 910x) to the compartment (900) and wherein the third set point is based on a type of input, such as system input, input from a node of the first group (160) of the plurality of nodes or input from a node of the second group (162) of the plurality of nodes.
11. The data processing system of any of claims 1-2, wherein during the learning mode, the data processing system is configured to: detect whether the network (130) is sparsely connected by comparing an accumulated weight change for the system input(s) (110a, 110b, ..., 110z) over a second time period to a threshold value; and if the data processing system detects that the network (130) is sparsely connected, increase the output (134a, 134b, ..., 134x) of one or more of the plurality of nodes (130a, 130b, ..., 130x) by adding a predetermined waveform to the output (134a, 134b, ..., 134x) of one or more of the plurality of nodes (130a, 130b, ..., 130x) for the duration of a third time period.
12. The data processing system of any of claims 1-11, wherein each node comprises an updating unit (150), wherein each updating unit (150) is configured to update the weights (Wa, Wb, Wc) of the respective node (130a) based on correlation of each respective input (132a, ..., 132c) of the node (130a) with the output (134a) of that node (130a) and wherein each updating unit (150) is configured to apply a first function to the correlation if the associated node belongs to the first group (160) of the plurality of nodes and apply a second function, different from the first function, to the correlation if the associated node belongs to the second group (162) of the plurality of nodes in order to update the weights (Wa, Wb, Wc) during the learning mode.
13. The data processing system of any of claims 1-12, wherein the data processing system is configured to, after updating of the weights (Wa, ..., Wy) has been performed, calculate a population variance of the outputs (134a, 134b, ..., 134x) of the nodes (130a, 130b, ..., 130x) of the network, compare the calculated population variance to a power law, and minimize an error or a mean squared error between the population variance and the power law by adjusting parameters of the network.
14. The data processing system of any of claims 2-13, wherein the data processing system is configured to learn, from the sensor data, to identify one or more entities while in a learning mode and thereafter configured to identify the one or more entities while in a performance mode, and wherein the identified entity is one or more of a speaker, a spoken letter, syllable, phoneme, word or phrase present in the sensor data, or an object or a feature of an object present in the sensor data, or a new contact event, an end of a contact event, a gesture or an applied pressure present in the sensor data.
15. A computer-implemented or hardware-implemented method (300) for processing data, comprising:
a) receiving (310) one or more system input(s) (110a, 110b, ..., 110z) comprising data to be processed;
b) providing (320) a plurality of inputs (132a, 132b, ..., 132y), at least one of the plurality of inputs being a system input, to a network, NW, (130) comprising a plurality of first nodes (130a, 130b, ..., 130x);
c) receiving (330) an output (134a, 134b, ..., 134x) from each first node (130a, 130b, ..., 130x);
d) providing (340) a system output (120), comprising the output (134a, 134b, ..., 134x) of each first node (130a, 130b, ..., 130x);
e) exciting (350), by nodes (130a, 130b) of a first group (160) of the plurality of nodes, one or more other nodes (..., 130x) of the plurality of nodes (130a, 130b, ..., 130x) by providing the output (134a, 134b) of each of the nodes (130a, 130b) of the first group (160) of nodes as input (132d, ..., 132y) to the one or more other nodes (..., 130x);
f) inhibiting (360), by nodes (130x) of a second group (162) of the plurality of nodes, one or more other nodes (130a, 130b, ...) of the plurality of nodes (130a, 130b, ..., 130x) by providing the output (134x) of each of the nodes (130x) of the second group (162) as a processing unit input to a respective processing unit (140x), each respective processing unit (140x) being configured to provide the processing unit output as input (132b, 132e, ...) to the one or more other nodes (130a, 130b, ...);
g) optionally updating (370), by one or more updating units (150), weights (Wa, ..., Wy) based on correlation;
h) optionally repeating (380) a)-g) until a learning criterion is met; and
i) repeating (390) a)-f) until a stop criterion is met,
wherein each node of the plurality of nodes (130a, 130b, ..., 130x) belongs to one of the first and second groups (160, 162) of nodes.
16. The method of claim 15, further comprising: initializing (304) weights (Wa, ..., Wy) by setting the weights (Wa, ..., Wy) to zero; and adding (308) a predetermined waveform to the output (134a, 134b, ..., 134x) of one or more of the plurality of nodes (130a, 130b, ..., 130x) for the duration of a third time period, the third time period starting at the same time as the receiving (310) of one or more system input(s) (110a, 110b, ..., 110z) comprising data to be processed starts.
17. The method of claim 15, further comprising: initializing (306) weights (Wa, ..., Wy) by randomly allocating values between 0 and 1 to the weights (Wa, ..., Wy); and adding (308) a predetermined waveform to the output (134a, 134b, ..., 134x) of one or more of the plurality of nodes (130a, 130b, ..., 130x) for the duration of a third time period.
18. A computer program product comprising a non-transitory computer readable medium (400), having stored thereon a computer program comprising program instructions, the computer program being loadable into a data processing unit (420) and configured to cause execution of the method according to any of claims 15-17 when the computer program is run by the data processing unit (420).
SE2250397A 2022-02-23 2022-03-30 A data processing system comprising a network, a method, and a computer program product SE547197C2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020247031252A KR20240154584A (en) 2022-02-23 2023-02-21 Data processing system, method and computer program product including network
PCT/SE2023/050153 WO2023163637A1 (en) 2022-02-23 2023-02-21 A data processing system comprising a network, a method, and a computer program product
EP23760478.0A EP4483300A1 (en) 2022-02-23 2023-02-21 A data processing system comprising a network, a method, and a computer program product
US18/840,928 US20250165779A1 (en) 2022-02-23 2023-02-21 A data processing system comprising a network, a method, and a computer program product
JP2024549659A JP2025508808A (en) 2022-02-23 2023-02-21 Data processing system with network, method and computer program product thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US202263313076P 2022-02-23 2022-02-23

Publications (2)

Publication Number Publication Date
SE2250397A1 true SE2250397A1 (en) 2023-08-24
SE547197C2 SE547197C2 (en) 2025-05-27

Family

ID=88018604

Family Applications (1)

Application Number Title Priority Date Filing Date
SE2250397A SE547197C2 (en) 2022-02-23 2022-03-30 A data processing system comprising a network, a method, and a computer program product

Country Status (2)

Country Link
CN (1) CN118871929A (en)
SE (1) SE547197C2 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7904398B1 (en) * 2005-10-26 2011-03-08 Dominic John Repici Artificial synapse component using multiple distinct learning means with distinct predetermined learning acquisition times
US20080258767A1 (en) * 2007-04-19 2008-10-23 Snider Gregory S Computational nodes and computational-node networks that include dynamical-nanodevice connections
US7814038B1 (en) * 2007-12-06 2010-10-12 Dominic John Repici Feedback-tolerant method and device producing weight-adjustment factors for pre-synaptic neurons in artificial neural networks
JP2011242932A (en) * 2010-05-17 2011-12-01 Honda Motor Co Ltd Electronic circuit
US20150278680A1 (en) * 2014-03-26 2015-10-01 Qualcomm Incorporated Training, recognition, and generation in a spiking deep belief network (dbn)
US20180189631A1 (en) * 2016-12-30 2018-07-05 Intel Corporation Neural network with reconfigurable sparse connectivity and online learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
2015 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), "A neuromorphic neural spike clustering processor for deep-brain sensing and stimulation systems", Zhang Beinuo; Jiang Zhewei; Wang Qi; Seo Jae-Sun; Seok Mingoo, 2015-09-21, p. 91 - 97, doi:10.1109/ISLPED.2015.7273496 *
IEEE Transactions on Circuits and Systems I: Regular Papers, "sBSNN: Stochastic-Bits Enabled Binary Spiking Neural Network With On-Chip Learning for Energy Efficient Neuromorphic Computing at the Edge", Koo Minsuk; Srinivasan Gopalakrishnan; Shim Yong; Roy Kaushik, 2020-03-16, p. 2546 - 2555, doi:10.1109/TCSI.2020.2979826 *

Also Published As

Publication number Publication date
SE547197C2 (en) 2025-05-27
CN118871929A (en) 2024-10-29

Similar Documents

Publication Publication Date Title
US10671912B2 (en) Spatio-temporal spiking neural networks in neuromorphic hardware systems
CN110956256B (en) Method and device for realizing Bayes neural network by using memristor intrinsic noise
CN106663221B (en) The data classification biased by knowledge mapping
JP2021528745A (en) Anomaly detection using deep learning on time series data related to application information
CN112784976A (en) Image recognition system and method based on impulse neural network
Dessai Intelligent heart disease prediction system using probabilistic neural network
US11080592B2 (en) Neuromorphic architecture for feature learning using a spiking neural network
CN110706817A (en) Blood glucose sensing data discrimination method and device
US20250165779A1 (en) A data processing system comprising a network, a method, and a computer program product
CN117150402A (en) Power data anomaly detection method and model based on generation type countermeasure network
SE2250397A1 (en) A data processing system comprising a network, a method, and a computer program product
US20240385987A1 (en) A computer-implemented or hardware-implemented method, a computer program product, an apparatus, a transfer function unit and a system for identification or separation of entities
CN115699018A (en) Computer-implemented or hardware-implemented entity recognition method, computer program product and device for entity recognition
CN120387483A (en) Underwater robot fault diagnosis method based on online fine-tunable hybrid neural network
US20250148263A1 (en) Computer-implemented or hardware-implemented method of entity identification, a computer program product and an apparatus for entity identification
Fuchs et al. Processing short-term and long-term information with a combination of polynomial approximation techniques and time-delay neural networks
Gil et al. Supervised SOM based architecture versus multilayer perceptron and RBF networks
Wang Algorithm-Hardware Co-design for Ultra-Low-Power Machine Learning and Neuromorphic Computing
US20250200346A1 (en) Data processing device of spiking neural network and operating method thereof
US20230351165A1 (en) Method for operating neural network
Lim A new method of reducing network complexity in probabilistic neural network for target identification
Amin et al. Spike train learning algorithm, applications, and analysis
CN120705747A (en) Abnormal timing detection method and related equipment
Almarzouqi et al. Heart Attack Risk Prediction Using Spiking Neural Networks with Poisson-Based Temporal Encoding
KR20250152293A (en) Neural network platform and operating method of neural network platform