
WO2024218836A1 - Associative memory device - Google Patents


Info

Publication number
WO2024218836A1
WO2024218836A1; PCT/JP2023/015379; JP2023015379W
Authority
WO
WIPO (PCT)
Prior art keywords
potts
unit
calculation
cost function
associative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2023/015379
Other languages
French (fr)
Japanese (ja)
Inventor
謙介 稲葉 (Kensuke Inaba)
康博 山田 (Yasuhiro Yamada)
弘樹 武居 (Hiroki Takesue)
卓弘 稲垣 (Takahiro Inagaki)
利守 本庄 (Toshimori Honjo)
拓也 生田 (Takuya Ikuta)
佑哉 米津 (Yuya Yonezu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NTT Inc
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to PCT/JP2023/015379
Publication of WO2024218836A1
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 Subject matter not provided for in other groups of this subclass

Definitions

  • The present invention relates to an associative memory device, and more specifically to an associative memory device based on a neural network.
  • Humans can recall the face of an acquaintance from the acquaintance's name, and can visualize the overall features of an acquaintance's face even when that person is wearing a mask. When humans see an object, they can recall the entire object even if part of it is hidden, or recall another object related to it. This is one of the intellectual processes of the human brain, called association.
  • This association process can be thought of as an associative memory system that recalls, from stored data, the data most relevant to the input data.
  • A neural network expresses basic building blocks called neurons with a mathematical model and connects them together into a network; such a network can model associative memory. As an example suited to associative memory, it is known that the process can be modeled simply using a Hopfield-type neural network or the like.
  • FIG. 9 is a diagram explaining a Hopfield-type neural network.
  • (a) of FIG. 9 conceptually shows the configuration of a Hopfield-type neural network.
  • In the neural network 50, each neuron 51 is connected to all other neurons, making it a fully interconnected network.
  • This network maps the spins of particles in physics to the states of neurons, and the interactions between spins to the connections between neurons, and uses the defining equations of physics to simulate a human neural network and realize associative memory processing.
  • Associative memory using a Hopfield-type neural network is known to be described as a minimization problem of the Ising Hamiltonian, as described below (Non-Patent Document 1).
  • When spurious attractors increase, the probability of making incorrect associations increases, and the memory capacity decreases.
  • The memory capacity is limited to a very small value of about 0.12N, where N is the number of neurons (Non-Patent Document 1).
  • Various methods have been proposed to improve the capacity, but even with these improvements, the capacity has only increased to about 5N at most (Non-Patent Document 2).
  • A method has been proposed that improves the Hopfield network and expands the capacity to 2^(N/2) (Non-Patent Document 3, Non-Patent Document 4).
  • A method has also been proposed that increases the capacity to about 2^N using quantum mechanics and costly quantum computation, but it is difficult to realize with current technology (Non-Patent Document 5).
  • The present invention was made in view of these problems, and its purpose is to provide a configuration for an associative memory device that greatly expands memory capacity.
  • One embodiment of the present invention is an associative memory device that includes a calculation unit that executes a feedback calculation to minimize the Potts Hamiltonian H_Potts, and a memory holding unit that stores in advance the information ξ to be used for association, and that is configured to output, as the association result, the stored information closest to the input information x, based on the result of the feedback calculation to minimize the Potts Hamiltonian H_Potts.
  • Another embodiment of the present invention is an associative memory device comprising an input/output unit with access to information ξ configured as a learning input, and a calculation unit that executes a feedback calculation to minimize the Potts Hamiltonian H_Potts, and that is configured to output, as the association result, the information within ξ that is closest to the information x input to the input/output unit, based on the result of the feedback calculation.
  • The present invention provides a configuration for an associative memory device that greatly expands memory capacity.
  • FIG. 1 is a conceptual diagram of a neural network corresponding to the associative memory device of the present disclosure.
  • FIG. 2 is a diagram showing more specific functions and a configuration of the associative memory device of the present disclosure.
  • FIG. 3 illustrates an algorithm for performing association in the associative memory device of the present disclosure.
  • FIG. 4 is a graph showing an example of numerical calculation of the accuracy rate of associative memory, compared with the prior art.
  • FIG. 5 is a diagram showing the configuration of the associative memory device of the second embodiment.
  • FIG. 6 is a diagram showing an algorithm for performing association in the associative memory device of the second embodiment.
  • FIG. 7 is a diagram showing an example configuration of an associative memory device including a physical coherent Ising machine.
  • FIG. 8 is a diagram showing the configuration of the associative memory device of the third embodiment.
  • FIG. 9 is a diagram explaining a Hopfield-type neural network.
  • The associative memory device of the present disclosure is based on a new neural network configuration that minimizes the Potts Hamiltonian, and can easily be realized by a general-purpose computer or the like.
  • The currently realized memory capacity of 2^(N/2) is significantly improved, expanding to a capacity of about 2^N. At first glance this may seem a minor change, but even when the number of bits N of the stored information is 100, it amounts to an improvement of about 100 million times, and for even larger N the improvement is an astronomical multiple.
  • The Hopfield-type associative memory of the prior art is described as a minimization problem of the Ising Hamiltonian H given below.
  • The Ising model is originally a lattice model for statistically analyzing the interactions of spins arranged at the sites of the lattice points of a magnetic material. This Ising model has been applied to the modeling of neural networks, and is used not only for associative memory but also, for example, for solving combinatorial optimization problems.
  • The Ising Hamiltonian H in equation (1) represents the energy of the Ising model system.
  • An Ising spin is a binary spin that takes one of the two values ±1.
  • "associating" means outputting a recalled result for an input that corresponds to the input data.
  • a Hopfield neural network 50 based on the Ising model is represented as having neurons 51 interconnected by lines that correspond to coupling coefficient constants.
  • a state corresponding to the input data is given as the initial state to the neural network 50, and the final equilibrium state in which the Ising Hamiltonian H is minimized corresponds to the recalled result.
  • The minimum-energy states of the Ising Hamiltonian H in equation (1) may take states different from the stored information ξ as the number M of stored items increases. These are the spurious attractors shown in FIG. 9(b).
  • As a result, the memory capacity (the number M of items that can be stored) is limited to a very small value.
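  • For reference, the prior-art scheme just described can be sketched in a few lines. The sketch below assumes ±1 patterns and the standard Hebbian form of equations (1) and (2); the function and variable names are illustrative and do not come from the patent.

```python
import numpy as np

def hopfield_recall(patterns, cue, n_steps=50):
    """Prior-art Hopfield recall (eqs. (1)-(3)): Hebbian couplings and
    sign updates that descend the Ising energy H. Synchronous updates
    are used for brevity; the classic model updates asynchronously."""
    M, N = patterns.shape
    J = patterns.T @ patterns / N      # eq. (2): J_ij = (1/N) sum_m xi_i^m xi_j^m
    np.fill_diagonal(J, 0.0)           # no self-coupling
    sigma = np.sign(cue).astype(float)
    sigma[sigma == 0] = 1.0
    for _ in range(n_steps):
        sigma = np.sign(J @ sigma)     # descend the Ising energy H of eq. (1)
        sigma[sigma == 0] = 1.0
    return sigma
```

    Cued with a corrupted copy of a stored pattern, this network typically settles on the stored pattern while M/N stays small; as M grows, spurious attractors appear, which is exactly the capacity problem described above.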
  • The associative memory device disclosed herein is based on a neural network configuration described by Potts-type interactions, rather than on the Hopfield-type neural network of the prior art. Below, embodiments of the associative memory device of the present disclosure are described in detail with reference to the drawings.
  • FIG. 1 is a conceptual diagram showing the associative memory device of the present disclosure abstractly as a corresponding neural network.
  • The associative memory device 100 of the present disclosure is physically implemented by arithmetic processing on a general computer equipped with a memory, a central processing unit (CPU), and the like.
  • A neural network described by Potts-type interactions is modeled by mathematical expressions, and the associative memory device operates on the basis of this new neural network model.
  • The associative memory device 100 of FIG. 1 is represented abstractly for comparison with the Hopfield-type neural network of FIG. 9(a); the more specific operation of the neural network is expressed by the series of mathematical expressions described later.
  • The N-bit multi-layer neuron 101 represents the time evolution of an N-bit neuron 111, as described later.
  • The time-evolving neuron configuration in FIG. 1 contrasts with the fully interconnected configuration of the Hopfield neural network shown in FIG. 9(a).
  • The interaction calculation unit 103 represents the interaction between the memory holding bits 112 and the neurons 111, and is described by the mathematical expressions given below.
  • The associative memory device 100 in FIG. 1 visualizes the action of a neural network described by Potts-type interactions and of the mathematical expressions that define it. Next, the neural network described by Potts-type interactions on which the associative memory device 100 of the present disclosure is based is formulated.
  • The δ inside the sum in equation (4) is the Kronecker delta function, which, as shown in definition (5), is 1 only when the two vectors x and y in the parentheses are equal, and 0 otherwise.
  • The ξ^m in the parentheses of equation (4) represents the stored information ξ as an N-dimensional vector.
  • The Hamiltonian H_Potts of equation (4) can be written out using each component of the vectors; a minimal computational sketch is also given below.
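  • As a concrete illustration of equations (4) and (5), the following minimal sketch evaluates the Potts Hamiltonian for a candidate state. The sign and normalization are assumptions, since equation (4) itself is not reproduced in this text.

```python
import numpy as np

def h_potts(patterns, x):
    """Potts Hamiltonian as described around eqs. (4)-(5): each stored
    pattern contributes through a Kronecker delta that is 1 only when
    x equals that pattern (assumed form: H_Potts = -sum_m delta(x, xi^m))."""
    matches = np.all(patterns == x, axis=1)    # delta(x, xi^m) per pattern, eq. (5)
    return -float(np.count_nonzero(matches))   # lowest when x is a stored pattern
```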
  • In equation (8), β is a hyperparameter with a large value; as β → ∞, the cost function F becomes proportional to the Potts Hamiltonian.
  • Minimizing the cost function F is therefore equivalent to minimizing the Potts Hamiltonian H_Potts defined in equation (4), and conversely, minimizing H_Potts is equivalent to minimizing F.
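  • The explicit form of equation (8) is not reproduced in this text, so the following sketch uses a log-sum-exp stand-in that has the limiting behavior of equation (9); it is an assumption for illustration, not the patent's F.

```python
import numpy as np

def cost_f(patterns, x, beta=5.0):
    """Illustrative softened cost with hyperparameter beta. The
    per-pattern match score (1 + x_i * xi_i)/2, averaged over bits,
    reaches its maximum of 1 exactly when x equals a stored pattern,
    so as beta -> inf this F tends to -max_m score_m, mirroring the
    proportionality to H_Potts stated in equation (9)."""
    patterns = np.asarray(patterns, dtype=float)
    score = ((patterns * x + 1.0) / 2.0).mean(axis=1)    # in [0, 1]
    return -np.log(np.sum(np.exp(beta * score))) / beta  # smooth minimum-style cost
```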
  • Here, in the cost function F of equation (8), the spin σ (= ±1) is replaced by a variable x_i that takes real values from -1 to 1. This replacement is a measure for the convenience of numerical calculation: by considering a continuous variable x_i, the value of x_i can change gradually with the time evolution, enabling good "association."
  • For equation (10), the cost function F is further rewritten as equation (11). This rewriting is a device for achieving convergence in the minimization process for the cost function F described later, and is necessary for replacing the spins with the real numbers x_i.
  • The Potts Hamiltonian H_Potts is reproduced by taking β to the limit of infinity in the cost function F of equation (10).
  • In practice, for the convenience of numerical calculation, β is stopped at a moderately large finite value (β > 1) and the form of equation (11) is minimized; β can be set to 5, for example.
  • This minimization can be achieved by updating x_k according to equation (12).
  • The update equation for x_k in equation (12) describes the time evolution of the neuron.
  • In equation (12), α is a hyperparameter set to an appropriately small value (0 < α < 1); α can be set to 0.5, for example.
  • The differential term in the second term on the right-hand side of equation (12) is expressed by equation (13).
  • The interaction calculation unit 103 performs the minimization given by equations (10) and (11) using the memory holding unit 102 and the input information 104. Specifically, it performs the updates of equations (12) and (13), and after repeating the update of the vector x a fixed number of times, outputs x_new as the final output 105. Through this series of processes, the stored ξ^μ closest to the initial input x is associated; a sketch of the whole loop is given below. Next, a more specific practicable configuration and algorithm of the associative memory device 100 of the present disclosure are described.
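  • Putting the pieces together, a minimal sketch of the feedback calculation follows, reusing the illustrative cost_f above; a finite-difference gradient stands in for the analytic derivative of equation (13), which is not reproduced here.

```python
import numpy as np

def associate(patterns, x0, alpha=0.5, beta=5.0, n_iter=200):
    """Gradient-style updates of the continuous state x in [-1, 1],
    in the spirit of equation (12), iterated a fixed number of times."""
    x = np.clip(np.asarray(x0, dtype=float), -1.0, 1.0)
    eps = 1e-4
    for _ in range(n_iter):
        grad = np.empty_like(x)
        for k in range(x.size):                # dF/dx_k by central differences
            xp = x.copy(); xp[k] += eps
            xm = x.copy(); xm[k] -= eps
            grad[k] = (cost_f(patterns, xp, beta)
                       - cost_f(patterns, xm, beta)) / (2 * eps)
        x = np.clip(x - alpha * grad, -1.0, 1.0)   # update step with rate alpha
    out = np.sign(x)                                # x_new: the recalled pattern
    out[out == 0] = 1.0
    return out
```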
  • FIG. 2 is a diagram showing more specific functions and a configuration of the associative memory device of the present disclosure.
  • FIG. 2(a) shows the associative memory device 100 represented by functional blocks, and corresponds to the abstract configuration diagram represented as a neural network in FIG. 1.
  • The reference numerals of the blocks in FIG. 2(a) correspond to those in FIG. 1.
  • The neuron 101 represents the multi-layer neuron described in conjunction with FIG. 1, and the novel neural network based on the Potts Hamiltonian H_Potts defined by equation (4) is shown as one functional block.
  • The associative memory device of the present disclosure can thus be implemented as one that includes calculation units 101, 103 that execute a feedback calculation to minimize the Potts Hamiltonian H_Potts, and a memory holding unit 102 that stores in advance the information ξ to be used for association, and that is configured to output, as the association result, the stored information 105 closest to the input information x, based on the result of the feedback calculation.
  • The computer 100-1 includes a CPU 120 and a memory 121, and has an interface unit (I/O) 123 for inputting and outputting information to and from the outside.
  • The memory 121 can be any of various storage devices capable of temporary or semi-permanent storage, and may include multiple memories of different types.
  • An arithmetic program 122 for implementing associative memory according to the algorithm described below is loaded into the memory 121.
  • The program 122 may be stored in a memory within the computer, or may be provided from outside the computer 100-1, for example via a network. The information in the memory holding unit 102 of FIG. 1 and FIG. 2(a) may also be stored in storage means outside the computer 100-1, separate from the computer's memory 121.
  • The neuron 101 and the interaction calculation unit 103 in the functional blocks of FIG. 2(a) correspond to calculation processing based on the program 122 in the CPU 120 of FIG. 2(b), and the memory holding unit 102 corresponds to the physical memory 121 of FIG. 2(b).
  • The processing of the algorithm described below is not limited to the configuration of the computer 100-1 described above; part of the calculation processing can be carried out by dedicated hardware, or by distributed processing using multiple computers.
  • FIG. 3 is a diagram showing an algorithm for performing association in the associative memory device of the present disclosure.
  • The flowchart 300 shows the processing expressed in equations (4) to (13) for the neural network based on the Potts Hamiltonian H_Potts in FIG. 1. The algorithm is described below with reference to the associative memory device 100 of FIG. 2(a) and the above equations.
  • The flowchart 300 starts with step S301, the initial input to the associative memory device 100, and ends with step S305, the output of the association result x. By going through all the steps of the flowchart 300, the association result 105 is output for the input data 104, and the association processing in the associative memory device 100 is performed.
  • Data x_i is input to the device 100, which corresponds to the time-evolving neuron 101 being set to its initial state in FIG. 1.
  • The four neurons lined up vertically on the left side of FIG. 1 can be regarded as the initial state.
  • The repetition of steps S303 and S304, updating x_k according to equation (12), corresponds to the time evolution of the neuron 111 in the conceptual diagram of the associative memory device 100 in FIG. 1.
  • The repeated computations of steps S303 and S304 execute the feedback calculation that minimizes the Potts Hamiltonian H_Potts.
  • As described above, the associative memory device 100 of the present disclosure minimizes a cost function represented by an interaction operation between a neural network modeled on the Potts Hamiltonian H_Potts and the memory bits in the memory holding unit.
  • This minimization of the cost function is performed by a repeated operation described as the time evolution of neurons, whereby association is performed.
  • This is a relatively complex operation compared with minimizing the Ising Hamiltonian H, which is expressed only by coupling constants and is modeled by the fully interconnected neurons of the prior-art Hopfield neural network.
  • The algorithm shown in FIG. 3 can be implemented on an ordinary computer 100-1 as shown in FIG. 2(b), provided a CPU with a certain level of computing power is used. A configuration for reducing the required CPU computing power is described later as embodiment 2.
  • The neural-network-based associative memory device 100 modeled on the Potts Hamiltonian H_Potts of the present disclosure has been shown numerically to be free of spurious attractors.
  • FIG. 4 is a graph showing an example of numerical calculation of the accuracy rate of associative memory, comparing the conventional technology with the associative memory device disclosed herein.
  • The horizontal axis of the graph shows the value M/N, where M is the number of stored items and N is the number of bits of information, and the vertical axis shows the accuracy rate.
  • The accuracy rate was determined as the fraction of cases in which one of the stored items was recalled for a randomly given input.
  • The ▽ (inverted triangle) plot shows the case of a conventional associative memory device using a Hopfield neural network that minimizes the Ising Hamiltonian. The accuracy rate can be seen to drop when M/N is around 0.12. This drop is due to the spurious attractors already mentioned with reference to FIG. 9(b).
  • The ○ (circle) plot shows the accuracy rate of the associative memory device 100 of the present disclosure, which is based on a neural network configuration described by Potts-type interactions. There is no drop in the accuracy rate at all, regardless of the value of M/N.
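  • The experiment behind FIG. 4 can be reproduced in outline with the sketches above. In the following harness the success criterion follows the description of FIG. 4 (the device settles on one of the stored patterns for a randomly given input); N, M, and the number of trials are illustrative choices.

```python
import numpy as np

def accuracy_rate(recall_fn, N=64, M=8, trials=100, seed=0):
    """Fraction of random inputs for which recall_fn returns one of the
    M stored N-bit patterns."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        patterns = rng.choice([-1.0, 1.0], size=(M, N))
        cue = rng.choice([-1.0, 1.0], size=N)
        out = recall_fn(patterns, cue)
        hits += bool(np.any(np.all(patterns == out, axis=1)))
    return hits / trials

# Sweeping M traces accuracy versus M/N for both devices, e.g.:
#   accuracy_rate(hopfield_recall, M=8)   # prior art (inverted-triangle plot)
#   accuracy_rate(associate, M=8)         # Potts-based sketch (circle plot)
```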
  • FIG. 5 is a diagram showing the functional blocks and configuration of the associative memory device of embodiment 2.
  • The associative memory device 200 includes a neural network 210 and a weight update unit 220.
  • The neural network 210 may be, for example, a Hopfield-type neural network that performs association by minimizing the Ising Hamiltonian H; in other words, it corresponds to the associative memory device of the prior art.
  • The neural network 210 forms a first calculation loop that minimizes the Ising Hamiltonian H, as described below.
  • The weight update unit 220 minimizes a cost function using the Potts Hamiltonian H_Potts, and is implemented in the same manner as the calculation in the algorithm of the associative memory device of embodiment 1. As described later, the weight update unit 220 forms a second calculation loop that minimizes the Potts Hamiltonian H_Potts.
  • The weighting unit 212 and calculation unit 213 of the neural network 210, and the calculation unit 222 of the weight update unit 220, all perform calculations that can be executed by a single CPU. Therefore, the associative memory device 200 of the embodiment of FIG. 5 can also be implemented by a computer 100-1 as shown in FIG. 2(b). Alternatively, the calculations of the weighting unit 212 and calculation unit 213 of the neural network 210 may be implemented with elements that can perform specific calculations at high speed, such as an FPGA; in this case the calculation unit 222 of the weight update unit 220 can still be implemented by a CPU.
  • The neural network 210 may also be implemented by a calculation device that uses optical pulses, as described later.
  • The input 104, output 105, and learning input 106 of the associative memory device 200 of the second embodiment are the same as those of the first embodiment.
  • The input information 104 to the input/output unit 211 consists of real numbers between -1 and 1 (in practice, N-bit input data), and association is performed on the input information 104.
  • The association result 105 is output for the input data 104, and the association processing in the associative memory device 200 is thereby performed.
  • FIG. 6 is a diagram showing an algorithm for performing association in the associative memory device of embodiment 2.
  • The flowchart 400 shows the processing performed in the associative memory device 200 of FIG. 5. The computation algorithm of the associative memory device 200 is explained below with reference to the flowchart 400.
  • In the initial step S401, the initial input information x_i is input to the input/output unit 211.
  • On the first pass, the convergence determination step S403 is bypassed; the determination performed in S403 is described later.
  • In S404, the weighting unit 212 multiplies the input x_i by the weights J_ij and B_i to obtain y_i, as shown in equation (14).
  • x_0 in equation (14) is a special input that is always 1, and its weight B_i is likewise a special weight called the magnetic-field term.
  • y_i in equation (14) is the sum of the coupling-coefficient (interaction) term in J_ij and the magnetic-field term, and the form of y_i in equation (14) is that of the Ising Hamiltonian H_Ising. That is, the y_i weighted by equation (14) is expressed by a Hopfield-type neural network through equations (1) and (2).
  • In S405, the weighted input y_i is further passed through an activation function in the calculation unit 213, as shown in equation (15).
  • Equation (15) corresponds to updating the current x_i to a new x_i.
  • S405 means that "association" is performed in the Hopfield neural network 210. Equation (15) may be replaced with another activation function, and a neural network other than the Hopfield type may be used. The step S403 is then performed again on the updated x_i.
  • In S403, a convergence test is performed on the updated x_i.
  • The condition for this convergence test is that the updated x_new no longer differs from the x_old before the update, that is, x_new - x_old = 0. If the convergence test result is No (false), y_i is computed again in S404 and x_i is updated again in S405 (the first calculation loop). If the convergence test result is Yes (true), the process proceeds to S406, where processing is performed by the calculation unit 222 of the weight update unit 220. The above steps S402 to S405 thus constitute the processing in the neural network 210 shown in FIG. 6, and can be called the first calculation loop; a minimal sketch of this loop is given below.
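  • A minimal sketch of the first calculation loop, assuming tanh as the activation of equation (15) (the text allows other activations) and taking the weights J and B as given by the weight update unit:

```python
import numpy as np

def first_loop(J, B, x, tol=1e-6, max_iter=1000):
    """First calculation loop (S403-S405): weighting per eq. (14),
    y_i = sum_j J_ij x_j + B_i, then an activation update standing in
    for eq. (15), repeated until x stops changing (the S403 test)."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        y = J @ x + B                          # eq. (14): coupling + magnetic-field term
        x_new = np.tanh(y)                     # eq. (15) stand-in activation
        if np.max(np.abs(x_new - x)) < tol:    # S403: x_new - x_old ~ 0
            return x_new
        x = x_new
    return x
```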
  • In S406, the last updated x_i, on which the convergence determination of S403 was performed, is output to the calculation unit 222 of the weight update unit 220 as a provisional association output (provisional x output).
  • In S407, the calculation unit 222 uses the x updated by the neural network 210 to update the weights J_ij and B_i based on the information ξ stored in the memory holding unit 221.
  • The stored information ξ is expressed as M N-dimensional vectors, as in definition (6).
  • The Ising Hamiltonian H_Ising with the weights updated by equations (16) and (17) can be rewritten as shown in the following equation.
  • In S408, the calculation unit 222 determines whether the weights J_ij and B_i updated in S407 are the same as the J_ij and B_i before the update, that is, whether the weights have converged.
  • The convergence condition is given by the following equation.
  • If the convergence determination result is No (false), the process returns to the neural network 210, where y_i is calculated again in S404 and x_i is updated again in S405; that is, the loop operation is repeated with the new weights. If the convergence determination result is Yes (true), the process proceeds to S409, and the x_i last used in the processing of the weight update unit 220 is output as the final output, that is, as the final associated result 105. The above steps S407 to S408 thus constitute the processing in the weight update unit 220 shown in FIG. 6, and can be called the second calculation loop.
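  • The two-loop control flow of flowchart 400 can be sketched as follows, reusing first_loop above. Equations (16) and (17) are not reproduced in this text, so update_weights is left as a placeholder for that Potts-based rule; it must return new (J, B) computed from the provisional x and the stored information ξ.

```python
import numpy as np

def associate_two_loop(x0, J0, B0, update_weights, max_outer=100):
    """Outer (second) loop of flowchart 400 wrapped around the inner
    Hopfield loop; update_weights is a placeholder for eqs. (16)-(17)."""
    x = np.asarray(x0, dtype=float)
    J, B = J0, B0
    for _ in range(max_outer):
        x = first_loop(J, B, x)              # first calculation loop (S403-S405)
        J_new, B_new = update_weights(x)     # S407: weight update from x and xi
        if np.allclose(J_new, J) and np.allclose(B_new, B):  # S408 convergence test
            return x                         # S409: final association result
        J, B = J_new, B_new
    return x
```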
  • The updating of the weights J_ij and B_i in S407 and the convergence test in S408 described above enclose the first calculation loop (the convergence test of x_i in S403) and form a larger second calculation loop.
  • The associative memory device 200 alternates between processing in the neural network 210 and processing in the weight update unit 220, associates the stored data ξ closest to the initial input information 104, and outputs it as the result 105.
  • The weights J_ij and B_i are updated in the weight update unit 220 by a calculation process based on the Potts Hamiltonian. As is clear from equations (16) and (17), the weights J_ij and B_i are updated by an interaction calculation between the x_i updated by the neural network 210 and the information ξ stored in the memory holding unit 221.
  • In this respect, the associative memory device 200 of the second embodiment operates in the same manner as the interaction calculation unit 103 in the conceptual diagram of FIG. 1.
  • The difference from the associative memory device 100 of the first embodiment in FIG. 1 is that the N-bit multi-layer neuron 101 is replaced in FIG. 5 by a Hopfield-type neural network.
  • The reason the associative memory device 200 uses the neural network 210 is to replace part of the calculation processing of the association with processing of the Ising Hamiltonian H_Ising of the conventional technology, which has a low calculation load.
  • The present invention also has the aspect of a method implemented in an associative memory device that sequentially executes the steps of the flowcharts 300 and 400 of FIG. 3 and FIG. 6.
  • Consider an associative memory device that includes a calculation unit that executes a feedback calculation to minimize the Potts Hamiltonian H_Potts, and a memory holding unit that stores in advance the information ξ to be used for association.
  • The method of implementing associative memory can then be implemented, corresponding to the flowchart 300, as a method that includes the steps of receiving an initial input x_i, successively updating the cost function representing the Potts Hamiltonian H_Potts, determining whether the cost function has been minimized, and outputting the association result when minimization has been achieved.
  • The flowchart 400 of FIG. 6 is composed of a first calculation loop for x_i based on the Ising Hamiltonian H_Ising, and a second calculation loop for the weights J_ij and B_i based on the Potts Hamiltonian H_Potts. If the flowchart 400 of FIG. 6 is laid out in a straight line from the start (S401) to the end (S409), the steps corresponding to the first calculation loop and the steps corresponding to the second calculation loop appear alternately.
  • The total number of steps may be larger than in the flowchart 300, but the computational load of S403 to S405, the processing of the Ising Hamiltonian H_Ising, does not include an interaction calculation with the information ξ stored in the memory holding unit 221, and is therefore light. The computational load until the association output is obtained can thus be lighter than that of the associative memory device 100 of the first embodiment, and even when the associative memory device 200 is implemented on a computer with little computing power, association can be performed in a shorter time.
  • The weighting unit 212 and calculation unit 213 of the neural network 210 of the associative memory device 200 in FIG. 5, and the calculation unit 222 of the weight update unit 220, can all be implemented by a single CPU.
  • The memory holding unit 221 can be a general memory, so the associative memory device 200 can be implemented by a general computer.
  • Associative memory can be realized by executing, on a computer, a program for the calculation processing of the algorithm shown in FIG. 6.
  • The neural network 210 is a prior-art Hopfield-type neural network and is equivalent to the minimization problem of the Ising model, so it can also be implemented in forms other than a computer-only configuration.
  • For example, the neural network 210 can be implemented by a computing device known as a coherent Ising machine.
  • The configuration of the coherent Ising machine is not limited in any way.
  • As a computing device for a coherent Ising machine, a device that injects pump light pulses into a phase-sensitive amplifier in a ring resonator made of optical fiber and associates the phase of each pulse with a spin of the Ising model can be used (Non-Patent Document 6).
  • The coherent Ising machine may include a dedicated FPGA, a GPU, or a combination of these for calculating the coupling coefficients.
  • The update calculation of the coupling coefficients of the Ising model is performed by a coupling coefficient setting unit 7 and a calculation result holding unit 6, and the calculator 4, the coupling coefficient setting unit 7, and the calculation result holding unit 6 can be implemented by the CPU and memory of a computer.
  • An Ising model calculation device can thus be realized by physical hardware including the optical circuit elements described above, and the neural network 210 of embodiment 2 can be replaced by such an Ising model calculation device.
  • The CPU and memory implementing the coupling coefficient setting unit 7 and calculation result holding unit 6 can also perform the processing of the calculation unit 222 and memory holding unit 221 of the weight update unit 220 of the associative memory device 200 in FIG. 5. Therefore, the associative memory device 200 of FIG. 5 can be implemented with the same configuration as the physical coherent Ising machine of FIG. 7.
  • In short, the neural network 210 of embodiment 2 can be implemented using various physical configurations that enable calculations based on the Ising model of the prior-art Hopfield neural network.
  • The associative memory device 100 of the first embodiment and the associative memory device 200 of the second embodiment described above both include a memory holding unit 102, 221 in which the stored information ξ is held.
  • The memory holding unit 102, 221 may be a storage device such as a computer memory or a hard disk drive.
  • The stored information ξ is information that has been input to the memory holding unit as a learning input and held there, that is, information ξ configured as a learning input. In either case, it suffices that the calculation unit (CPU) can access the stored information ξ so that the interaction calculation between the input information x and the stored information ξ in the algorithms of FIG. 3 and FIG. 6 is possible.
  • FIG. 8 is a diagram showing the functional blocks and configuration of the associative memory device of the third embodiment.
  • The associative memory device 600 is obtained by omitting the memory holding unit 102 from the associative memory device 100 of the first embodiment shown in FIG. 2(a). An input/output unit 107 is shown as the interface for exchanging information with the outside of the device.
  • The memory holding unit 102 containing the stored information ξ is provided in a device 610 separate from the associative memory device 600.
  • The associative memory device 600 is connected through its input/output unit 107 and a communication path 601 to the memory holding unit 102 of the separate device 610. The associative memory device 600 is therefore configured to be able to access the stored information ξ prepared in the memory holding unit 102 of the separate device 610.
  • The communication path 601 may be wired or wireless, and there is no limitation on the type of communication interface or protocol. The other device 610 may therefore be another computer, or may have any function, such as a server, a storage device, or another electronic device, so long as it is distinct from the computer that implements the associative memory device 600. It may also be a device at another location reached via a network. For example, the associative memory device 600 may be implemented on a first server computer, and association may be performed in cooperation with a second server computer at a remote location that has the memory holding unit 102.
  • The learning input 106-1 may be given to the other device 610, or the learning input 106-2 may be given to the associative memory device 600 and stored in the memory holding unit 102 via the communication path 601.
  • The input information 104 to the associative memory device and the association result output 105 are input or output via a speaker/microphone, a keypad, a display/touchpad, or the like.
  • The input information 104 and the association result output 105 may also be input or output from yet another input/output device via another communication path not shown.
  • Using the memory holding unit 102 in the other device 610, the associative memory device 600 executes the arithmetic processing related to the Potts Hamiltonian H_Potts and the feedback calculation for minimizing it; that is, it performs the function of the interaction calculation unit 103 shown in FIG. 1. Based on the result of the feedback calculation, the associative memory device 600 outputs, as the association result 105, the information closest to the information x input to the input/output unit 107 among the stored information ξ.
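  • In this arrangement the calculation unit needs only read access to ξ; the minimization itself is unchanged. The following is a minimal stand-in for the remote memory holding unit, reusing the associate sketch from embodiment 1; the class and method names are illustrative, and any transport over communication path 601 would do.

```python
import numpy as np

class RemoteMemory:
    """Stand-in for the memory holding unit 102 housed in the separate
    device 610; here a local array behind a fetch method, for illustration."""
    def __init__(self, patterns):
        self._patterns = np.asarray(patterns, dtype=float)

    def fetch(self):
        # In embodiment 3 this call would traverse communication path 601.
        return self._patterns

def associate_remote(memory, cue, **kwargs):
    """Run the embodiment-1 feedback calculation against remotely held xi."""
    return associate(memory.fetch(), cue, **kwargs)
```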
  • Although the associative memory device 600 does not itself hold the stored information ξ, it should be noted that when the associative memory device 600 is implemented by a computer, a physical memory, for example the memory 121 of FIG. 2(b), is naturally provided in order to execute the calculation program.
  • The associative memory device 200 of the second embodiment can also be configured with the memory holding unit 221 located in a separate device, exactly as in FIG. 8. There is no change in the algorithm of the feedback calculation processing for minimizing the Potts Hamiltonian H_Potts in FIG. 3 and FIG. 6.
  • As described above, the storage capacity of the associative memory device can be significantly increased.
  • The calculation that minimizes the Potts Hamiltonian can be implemented by arithmetic processing on a general computer.
  • The present invention can be used in devices that implement associative memory using computers and other devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Memory System (AREA)

Abstract

This novel associative memory device models, by mathematical expressions, a neural network described with Potts-type interactions, and operates on the basis of this novel neural network model. An associative memory device 100 comprises an N-bit multilayer neuron 101 (x_i, where i = 1, ..., N), an N×M-bit memory holding unit 102, and an interaction computation unit 103. The N-bit multilayer neuron 101 represents the time evolution of an N-bit neuron 111. The calculation load of the overall associative memory process can also be reduced by combining arithmetic processing of associative memory based on a prior-art Hopfield-type neural network with the computation for minimizing the Potts Hamiltonian.

Description

Associative Memory Device

 The present invention relates to an associative memory device, and more specifically to an associative memory device based on a neural network.

 Humans can recall the face of an acquaintance from the acquaintance's name, and can visualize the overall features of an acquaintance's face even when that person is wearing a mask. When humans see an object, they can recall the entire object even if part of it is hidden, or recall another object related to it. This is one of the intellectual processes of the human brain, called association. This association process can be thought of as an associative memory system that recalls, from stored data, the data most relevant to the input data. A neural network expresses basic building blocks called neurons with a mathematical model and connects them together into a network; such a network can model associative memory. As an example suited to associative memory, it is known that the process can be modeled simply using a Hopfield-type neural network or the like.

 FIG. 9 is a diagram explaining a Hopfield-type neural network. (a) of FIG. 9 conceptually shows the configuration of a Hopfield-type neural network. In the neural network 50, each neuron 51 is connected to all other neurons, making it a fully interconnected network. This network maps the spins of particles in physics to the states of neurons, and the interactions between spins to the connections between neurons, and uses the defining equations of physics to simulate a human neural network and realize associative memory processing. Associative memory using a Hopfield-type neural network is known to be described as a minimization problem of the Ising Hamiltonian, as described below (Non-Patent Document 1).

Specification of Japanese Patent No. 625507

Non-Patent Document 1: J. J. Hopfield, Proceedings of the National Academy of Sciences of the United States of America, Vol. 79, No. 8, pp. 2554-2558 (1982)
Non-Patent Document 2: H. K. Sulehria and Y. Zhang, Information Technology Journal, Vol. 7, No. 4, pp. 684-688 (2008)
Non-Patent Document 3: Krotov, D. & Hopfield, J., "Dense associative memory for pattern recognition," Neural Information Processing Systems 29, pp. 1172-1180 (2016)
Non-Patent Document 4: Ramsauer, H. et al., "Hopfield networks is all you need," arXiv:2008.02217
Non-Patent Document 5: D. Ventura and T. Martinez, Proceedings of the International Joint Conference on Neural Networks, May 4-9, IEEE, Alaska, USA, pp. 509-513
Non-Patent Document 6: T. Inagaki, Y. Haribara, et al., "A coherent Ising machine for 2000-node optimization problems," Science 354, 603-606 (2016)

 However, the prior-art associative memory using a Hopfield neural network has the problem that it recalls incorrect patterns (spurious memories), reducing the capacity that can be stored. As described later, in a Hopfield neural network the connections between neurons are determined so that the energy function takes a local minimum at a given solution. In reality, as shown in FIG. 9(b), the value of the energy function over the spin states has, around the minimum 52, many extrema corresponding to false solutions, namely the spurious attractors 53a and 53b.

 When spurious attractors increase, the probability of making incorrect associations increases, and the memory capacity decreases. In the prior-art associative memory, the storable capacity is limited to a very small value of about 0.12N, where N is the number of neurons (Non-Patent Document 1). Various methods have been proposed to improve the capacity, but even with these improvements, the capacity has only increased to about 5N at most (Non-Patent Document 2). In the latest research, a method has been proposed that improves the Hopfield network and expands the capacity to 2^(N/2) (Non-Patent Document 3, Non-Patent Document 4). A method has also been proposed that increases the capacity to about 2^N using quantum mechanics and costly quantum computation, but it is difficult to realize with current technology (Non-Patent Document 5).

 The present invention was made in view of these problems, and its purpose is to provide a configuration for an associative memory device that greatly expands memory capacity.

 One embodiment of the present invention is an associative memory device that includes a calculation unit that executes a feedback calculation to minimize the Potts Hamiltonian H_Potts, and a memory holding unit that stores in advance the information ξ to be used for association, and that is configured to output, as the association result, the stored information closest to the input information x, based on the result of the feedback calculation to minimize the Potts Hamiltonian H_Potts.

 Another embodiment of the present invention is an associative memory device comprising an input/output unit with access to information ξ configured as a learning input, and a calculation unit that executes a feedback calculation to minimize the Potts Hamiltonian H_Potts, configured to output, as the association result, the information within ξ that is closest to the information x input to the input/output unit, based on the result of the feedback calculation.

 The present invention provides a configuration for an associative memory device that greatly expands memory capacity.

FIG. 1 is a conceptual diagram of a neural network corresponding to the associative memory device of the present disclosure.
FIG. 2 is a diagram showing more specific functions and a configuration of the associative memory device of the present disclosure.
FIG. 3 illustrates an algorithm for performing association in the associative memory device of the present disclosure.
FIG. 4 is a graph showing an example of numerical calculation of the accuracy rate of associative memory, compared with the prior art.
FIG. 5 is a diagram showing the configuration of the associative memory device of the second embodiment.
FIG. 6 is a diagram showing an algorithm for performing association in the associative memory device of the second embodiment.
FIG. 7 is a diagram showing an example configuration of an associative memory device including a physical coherent Ising machine.
FIG. 8 is a diagram showing the configuration of the associative memory device of the third embodiment.
FIG. 9 is a diagram explaining a Hopfield-type neural network.

 The associative memory device of the present disclosure is based on a new neural network configuration that minimizes the Potts Hamiltonian, and can easily be realized by a general-purpose computer or the like. The currently realized memory capacity of 2^(N/2) is significantly improved, expanding to a capacity of about 2^N. At first glance this may seem a minor change, but even when the number of bits N of the stored information is 100, it amounts to an improvement of about 100 million times, and for even larger N the improvement is an astronomical multiple.

 Below, the problems of the prior-art associative memory device based on a Hopfield neural network that minimizes the Ising Hamiltonian are first discussed in more detail. The configuration of the associative memory device of the present disclosure and the concept of the neural network on which it is based are then explained.

 The Hopfield-type associative memory of the prior art is described as a minimization problem of the Ising Hamiltonian H given below. The Ising model is originally a lattice model for statistically analyzing the interactions of spins arranged at the sites of the lattice points of a magnetic material. This Ising model has been applied to the modeling of neural networks, and is used not only for associative memory but also, for example, for solving combinatorial optimization problems.

[Equation (1), rendered as an image in the original: the Ising Hamiltonian H, presumably in the standard Hopfield form H = -(1/2) Σ_ij J_ij σ_i σ_j.]

 The Ising Hamiltonian H in equation (1) represents the energy of the Ising model system. The coupling constant J_ij is given by equation (2), and σ_i is the i-th (i = 1...N) Ising spin. An Ising spin is a binary spin that takes one of the two values ±1.

[Equation (2), rendered as an image in the original: the Hebbian rule for the coupling constants, presumably in the standard form J_ij = (1/N) Σ_m ξ_i^m ξ_j^m.]

 The i-th (i = 1...N) bit information ξ of the stored information shown in definition (3), which appears in equation (2), is expressed as ±1.

 In an associative memory device, "associating" means outputting a recalled result corresponding to the input data. As shown in FIG. 9(a), a Hopfield neural network 50 based on the Ising model is represented as neurons 51 interconnected by lines that correspond to the coupling constants. A state corresponding to the input data is given to the neural network 50 as the initial state, and the final equilibrium state, in which the Ising Hamiltonian H is minimized, corresponds to the recalled result.

 The minimum-energy states of the Ising Hamiltonian H in equation (1) may take states different from the stored information ξ as the number M of stored items increases. These are the spurious attractors shown in FIG. 9(b). In an associative memory device, as convergence to spurious attractors increases, the probability of making incorrect associations increases, reducing the memory capacity. As noted above, the memory capacity (the number M of items that can be stored) has thus been limited to a very small value.

 The associative memory device of the present disclosure is based on a neural network configuration described by Potts-type interactions, rather than on the Hopfield-type neural network of the prior art. Embodiments of the associative memory device of the present disclosure are described in detail below with reference to the drawings.

[Embodiment 1]
 FIG. 1 is a conceptual diagram showing the associative memory device of the present disclosure abstractly as a corresponding neural network. As described later, the associative memory device 100 of the present disclosure is physically implemented by arithmetic processing on a general computer equipped with a memory, a central processing unit (CPU), and the like. A neural network described by Potts-type interactions is modeled by mathematical expressions, and the associative memory device operates on the basis of this new neural network model. The associative memory device 100 of FIG. 1 is represented abstractly for comparison with the Hopfield-type neural network of FIG. 9(a); the more specific operation of the neural network is expressed by the series of mathematical expressions described later.

 Referring to FIG. 1, the associative memory device 100 includes an N-bit multi-layer neuron 101 (x_i: i = 1, ..., N), an N×M-bit memory holding unit 102, and an interaction calculation unit 103. The N-bit multi-layer neuron 101 represents the time evolution of an N-bit neuron 111, as described later. The time-evolving neuron configuration of FIG. 1 contrasts with the fully interconnected configuration of the Hopfield neural network shown in FIG. 9(a). Each element of the memory holding unit 102 corresponds to the i-th (i = 1...N) bit information ξ of the m-th (m = 1...M) stored item defined in equation (3), and represents a memory holding bit 112.

 The interaction calculation unit 103 represents the interaction between the memory holding bits 112 and the neurons 111, and is described by the mathematical expressions given below. The associative memory device 100 of FIG. 1 visualizes the action of a neural network described by Potts-type interactions and of the mathematical expressions that define it. Next, the neural network described by Potts-type interactions on which the associative memory device 100 of the present disclosure is based is formulated.

 The following describes the Potts Hamiltonian H_Potts, the cost function F, and the related definitions that are minimized in the neural network on which the associative memory device 100 shown in FIG. 1 is based. The Potts Hamiltonian H_Potts is expressed by equation (4).

[Equation (4), rendered as an image in the original: the Potts Hamiltonian H_Potts, a sum over the M stored patterns of Kronecker deltas δ(x, ξ^m), as described in the following paragraph.]

 The δ inside the sum in equation (4) is the Kronecker delta function, which, as shown in definition (5), is 1 only when the two vectors x and y in the parentheses are equal, and 0 otherwise. The ξ^m in the parentheses of equation (4) represents the stored information ξ as an N-dimensional vector. The Hamiltonian H_Potts of equation (4) can be written out using the components of the vectors as follows.

[Equations (5)-(7), rendered as an image in the original: the definition of the Kronecker delta, the stored vectors ξ^m, and the component-wise form of H_Potts.]

 Here, consider a cost function F of the following form.

[Equation (8), rendered as an image in the original: the cost function F with hyperparameter β.]

 In equation (8), β is a hyperparameter with a large value; when β → ∞, the cost function F has the following proportional relationship.

[Equation (9), rendered as an image in the original: F ∝ H_Potts in the β → ∞ limit.]

 As is clear from equation (9), minimizing the cost function F is equivalent to minimizing the Potts Hamiltonian H_Potts defined in equation (4): minimizing F minimizes H_Potts, and minimizing H_Potts minimizes F.

 Here, in the cost function F of equation (8), the spin σ (= ±1) is replaced by a variable x_i that takes real values from -1 to 1, as shown below. This replacement is a measure for the convenience of numerical calculation: by considering a continuous variable x_i, the value of x_i can change gradually with the time evolution, enabling good "association."

[Equation (10), rendered as an image in the original: the cost function F with the spins replaced by the continuous variables x_i.]

 For equation (10), the cost function F is further rewritten as follows. This rewriting is a device for achieving convergence in the minimization process for the cost function F described later, and is necessary for replacing the spins with the real numbers x_i.

[Equation (11), rendered as an image in the original: the rewritten cost function F.]

 If, in the rewritten equation (11), the substituted xi is changed back to σi, then xixi → σiσi = 1, and equation (11) coincides with the original definition of the cost function F in equation (8) except for a constant term inside the parentheses.

 As the discussion from equation (8) to equation (11) makes clear, taking β to the infinite limit in the cost function F of equation (10) reproduces the Potts Hamiltonian HPotts. In practice, for the convenience of numerical computation, β is fixed at a reasonably large finite value (β > 1) and the Potts Hamiltonian HPotts of equation (11) is minimized; β can be set to 5, for example. This minimization is achieved by updating xk according to the following equation.

[Equation (12) — rendered as an image in the original]

 The update rule for xk in equation (12) describes the time evolution of a neuron. In equation (12), α is a hyperparameter set to an appropriately small value (0 < α < 1); α can be set to 0.5, for example. The differential term in the second term on the right-hand side of equation (12) is expressed by the following equation.

[Equation (13) — rendered as an image in the original]
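 The closed forms of equations (12) and (13) likewise appear only as images; a gradient-descent reading consistent with the surrounding text — an assumption, not a verbatim reconstruction — is

```latex
x_k^{\mathrm{new}} = x_k - \alpha\,\frac{\partial F}{\partial x_k}  % (12)
```

with ∂F/∂xk the derivative of the rewritten cost function of equation (11), which involves the current state x and the stored bits ξkm (equation (13)). The code sketch given with flow 300 below uses exactly this update.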

 The process of minimizing the Potts Hamiltonian HPotts using equations (8) to (13) above corresponds to the interaction calculation unit 103 in the conceptual diagram of the associative memory device 100 shown in FIG. 1.

 In equation (13), xk is the input information 104 to the multi-layer neuron 101 in the conceptual diagram of the associative memory device 100 in FIG. 1, and is a real number between -1 and 1. In practice it is N-bit input data, and association is performed on the input information 104. Since FIG. 1 is an abstract model, four neuron circles are drawn, an example with N = 4. In equation (13), ξ with subscripts m and k is the information stored in the memory holding unit 102, namely the k-th (k = 1...N) bit of the m-th (m = 1...M) stored item, as in definition (6) above.

 The interaction calculation unit 103 performs the minimization given by equations (10) and (11) using the memory holding unit 102 and the input information 104. Specifically, it carries out the updates of equations (12) and (13), repeats the update of the vector x a fixed number of times, and then outputs the vector xnew as the final output 105. Through this sequence of processing, the stored ξμ closest to the initial input x is recalled. Next, a more concretely implementable configuration and algorithm of the associative memory device 100 of the present disclosure are described.

 FIG. 2 shows more specific functions and a configuration of the associative memory device of the present disclosure. FIG. 2(a) shows the associative memory device 100 represented by functional blocks, and corresponds to the abstract configuration of FIG. 1 expressed as a neural network. The reference numerals of the blocks in FIG. 2(a) correspond to those of FIG. 1. In FIG. 2(a), the neuron 101 represents the multi-layer neuron described with FIG. 1; the novel neural network expressed on the basis of the Potts Hamiltonian HPotts defined by equation (4) is shown as a single functional block.

 The associative memory device of the present disclosure can therefore be implemented as one that includes calculation units 101, 103 that execute a feedback calculation minimizing the Potts Hamiltonian HPotts, and a memory holding unit 102 that stores in advance the information ξ used for association, and that is configured to output, as the association result, the stored information 105 closest to the input information x 104, based on the result of the feedback calculation minimizing the Potts Hamiltonian HPotts.

 FIG. 2(b) shows a specific configuration implementing the associative memory device 100: the device is realized by arithmetic processing on a computer 100-1. The computer 100-1 includes a CPU 120 and a memory 121, and further has an interface unit (I/O) 123 for inputting and outputting information to and from the outside. The memory 121 can be any of various storage devices capable of temporary or semi-permanent storage, and may include multiple memories of different types. When the CPU 120 operates, a calculation program 122 implementing associative memory according to the algorithm described below is loaded into the memory 121. The program 122 may have been stored in the memory within the computer, or may be provided from outside the computer 100-1, for example via a network. The information stored in the memory holding unit 102 of FIG. 1 and FIG. 2(a) may also be held in storage means external to the computer 100-1, distinct from the computer's memory.

 The neuron 101 and the interaction calculation unit 103 in the functional blocks of FIG. 2(a) correspond to arithmetic processing by the CPU 120 of FIG. 2(b) based on the program 122, and the memory holding unit 102 corresponds to the physical memory 121 of FIG. 2(b). As long as the processing of the algorithm described next can be carried out, the configuration is not limited to the computer 100-1 described above; part of the arithmetic processing can be implemented in dedicated hardware, or carried out as distributed processing over multiple computers.

 FIG. 3 shows an algorithm for performing association in the associative memory device of the present disclosure. The flowchart 300 represents the processing of equations (4) to (13) for the neural network based on the Potts Hamiltonian HPotts of FIG. 1. The algorithm is described below with reference to the associative memory device 100 of FIG. 2(a) and the equations above. The flow 300 starts with step S301, the initial input to the associative memory device 100, and ends with step S305, outputting the association result x. By passing through all steps of the flow 300, the association result 105 is output for the input data 104, and the association processing in the associative memory device 100 is carried out.

 In S302, the data xi is input to the device 100; this corresponds to setting the time-evolving neurons 101 of FIG. 1 to their initial state. That is, the four vertically aligned neurons at the left end of FIG. 1 can be regarded as the initial state.

 S303 and S304 are repeated in a loop until the convergence condition is satisfied. Here, to simplify the explanation, the input data is assumed not to have converged in the initial state. xk is computed and updated using equations (12) and (13). With the xk (xnew) updated in S304, the convergence test of S303 is performed again.

 In S303, convergence is judged to have occurred when the value of dF/dxk in equation (13) becomes 0. If the judgment is No (false), xk is updated again in S304. Various other criteria for judging convergence are possible.

 If the judgment is Yes (true), the latest xk is output in S305 as the association result.
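
 As a concrete illustration of flow 300, the following is a minimal NumPy sketch. Equations (8)-(13) appear only as images in the original, so the surrogate cost used here, F(x) = −Σm exp(β(ξm·x/N − 1)) — chosen because it tends to the Potts Hamiltonian as β → ∞ — and the names `associate`, `x0`, `xi` are assumptions for illustration, not the patented formulas.

```python
import numpy as np

def associate(x0, xi, alpha=0.5, beta=10.0, max_iter=500, tol=1e-8):
    """Flow 300 sketch: recall the stored pattern closest to x0.

    x0 : (N,) initial input in [-1, 1]      (input information 104)
    xi : (M, N) stored patterns of +/-1     (memory holding unit 102)
    """
    x = np.asarray(x0, dtype=float).copy()
    M, N = xi.shape
    for _ in range(max_iter):                          # S303/S304 loop
        overlap = xi @ x / N                           # per-pattern overlaps in [-1, 1]
        w = np.exp(beta * (overlap - 1.0))             # ~1 only for a matching pattern
        grad = -(beta / N) * (xi.T @ w)                # dF/dx_k, eq. (13) analogue
        x_new = np.clip(x - alpha * grad, -1.0, 1.0)   # S304: eq. (12) analogue
        if np.max(np.abs(x_new - x)) < tol:            # S303: no further change
            x = x_new
            break
        x = x_new
    return np.sign(x)                                  # S305: association result

# Usage: recall a stored item from a corrupted version of it.
rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(8, 64))                       # M=8 items, N=64 bits
noisy = xi[3] * rng.choice([1.0, -1.0], size=64, p=[0.9, 0.1])   # ~10% bits flipped
print(np.array_equal(associate(noisy, xi), xi[3]))               # expected: True
```

 Note that the convergence test of S303 (dF/dx = 0) is realized here as a no-change test on x, because the clipping to [-1, 1] leaves a residual gradient at the recalled pattern.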

 In the computation algorithm of flow 300 above, updating xk according to equation (12) through the repetition of S303 and S304 corresponds to the time evolution of the neurons 111 in the conceptual diagram of the associative memory device 100 in FIG. 1. This repeated computation of S303 and S304 executes the feedback calculation that minimizes the Potts Hamiltonian HPotts.

 In the explanation above, the initial input data 104 was assumed not to have converged at the stage of S302; however, if the input data is extremely similar to the association result, it may converge immediately without any update in S304.

 As described above, the associative memory device 100 of the present disclosure minimizes a cost function expressed by the interaction computation between a neural network modeled on the Potts Hamiltonian HPotts and the memory bits in the memory holding unit. This minimization of the cost function is carried out by the repeated computation described as the time evolution of the neurons, thereby performing association. Compared with the prior-art Hopfield neural network, which is modeled by fully interconnected neurons and minimizes an Ising Hamiltonian H expressed by coupling constants alone, a relatively complex computation is performed.

 The algorithm shown in FIG. 3 can be carried out satisfactorily on an ordinary computer 100-1 as shown in FIG. 2(b), provided a CPU with a certain level of computing power is used. A configuration that reduces the required CPU computing power is described later as Embodiment 2.

 Numerical calculations show that the associative memory device 100 of the present disclosure, based on a neural network modeled on the Potts Hamiltonian HPotts, produces no spurious attractors.

 FIG. 4 is a graph comparing numerically calculated accuracy rates of associative recall for the prior art and for the associative memory device of the present disclosure. The horizontal axis shows the value of M/N, where M is the number of stored items and N is the number of bits per item; the vertical axis shows the accuracy rate. A trial was counted as correct when one of the stored items could be recalled from a randomly given input. The parameters used in the numerical calculation were α = 0.5 and β = 10. All cases were confirmed to converge sufficiently within 100 updates.

 The ▽ (inverted triangle) plot shows the case of the prior-art associative memory device using a Hopfield neural network that minimizes the Ising Hamiltonian. The accuracy rate can be seen to drop at M/N around 0.12. This drop is caused by spurious attractors, as already described with FIG. 9(b). In contrast, the ● (circle) plot shows the accuracy rate of the associative memory device 100 of the present disclosure, based on the neural network configuration described by Potts-type interactions. The accuracy rate does not drop at all, regardless of the value of M/N.

 As is clear from the comparison in FIG. 4, the fact that the accuracy rate of the associative memory device does not drop at all suggests that, with the neural network configuration based on the Potts Hamiltonian shown in FIG. 1, no spurious attractors arise. The energy function of the Potts Hamiltonian over spin states has a simple structure: every point where the Kronecker δ equals 1 is an energy minimum, and the energy is 0 everywhere else. The energy function of the Ising Hamiltonian over spin states, by contrast, has local minima of various depths in various places, as shown in FIG. 9(b), and spurious attractors appear.

 Setting up a cost function that minimizes the Potts Hamiltonian, as in equation (8), has not been done before. Furthermore, as examined in equations (8) through (11), by devising the operation in which β in the cost function F of equation (10) is taken to the infinite limit so as to reproduce the Potts Hamiltonian, the time evolution of the neurons could be described with practical computations.

[Embodiment 2]
 In the associative memory device of Embodiment 1, the interaction computation with the stored information via equations (11) to (13) for minimizing the Potts Hamiltonian demands greater computing power than the coupling-constant computation of equation (2) in the prior art. When the CPU's computing power is insufficient, the association processing can take longer than with the prior-art Hopfield neural network. This embodiment discloses a configuration that combines the associative-memory computation based on the prior-art Hopfield neural network with the computation minimizing the Potts Hamiltonian, reducing the computational load of the overall association process.

 FIG. 5 shows the functional blocks and configuration of the associative memory device of Embodiment 2. The associative memory device 200 includes a neural network 210 and a weight update unit 220. The neural network 210 can be, for example, a Hopfield-type neural network that performs association by minimizing the Ising Hamiltonian HIsing, i.e., the equivalent of a prior-art associative memory device. As described later, the neural network 210 forms a first computation loop that minimizes the Ising Hamiltonian HIsing.

 The weight update unit 220 minimizes a cost function using the Potts Hamiltonian HPotts, implemented in the same way as the computation in the algorithm of the associative memory device of Embodiment 1. As described later, the weight update unit 220 forms a second computation loop that minimizes the Potts Hamiltonian HPotts.

 The details of the algorithm are given later, but the weighting unit 212 and calculation unit 213 of the neural network 210, as well as the calculation unit 222 of the weight update unit 220, all perform computations that can be carried out by a single CPU. The associative memory device 200 of the embodiment of FIG. 5 can therefore also be implemented by a computer 100-1 as shown in FIG. 2(b). Alternatively, a neural network in which the computations of the weighting unit 212 and calculation unit 213 are implemented in devices capable of performing specific computations at high speed, such as FPGAs, can be used; in that case the calculation unit 222 of the weight update unit 220 can still be implemented by a CPU. The neural network 210 may also be implemented by a computing device using optical pulses, as described later.

 The input 104, output 105, and learning input 106 of the associative memory device 200 of Embodiment 2 are the same as in Embodiment 1. The input information 104 to the input/output unit 211 is a real number between -1 and 1; in practice it is N-bit input data, and association is performed on the input information 104. By passing through all steps of the flow 400 of FIG. 6, described below, the association result 105 is output for the input data 104, and the association processing in the associative memory device 200 is carried out.

 FIG. 6 shows the algorithm for performing association in the associative memory device of Embodiment 2. The flowchart 400 represents the processing carried out in the associative memory device 200 of FIG. 5. The computation algorithm of the associative memory device 200 is described below with reference to the flowchart 400.

 First, in S402, the initial input information xi is input to the input/output unit 211. For the initial input, to simplify the explanation, the convergence test of S403 is assumed to be bypassed; the test of S403 is described later.

 Next, in S404, the weighting unit 212 weights the input xi by multiplying it by the weights Jij and Bi, obtaining yi as in the following equation.

[Equation (14) — rendered as an image in the original]

 In equation (14), i and j run over i = 1, ..., N and j = 1, ..., N and index the bits of the input xi. The x0 in equation (14) is a special input that is always 1, and its weight Bi is likewise a special weight, also called the magnetic-field term. The yi of equation (14) is the sum of the Σ term with the coupling coefficients (interactions) Jij and the magnetic-field term, and this form of yi is that of the Ising Hamiltonian HIsing. That is, the yi weighted by equation (14) is expressed by a Hopfield-type neural network according to equations (1) and (2).
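
 A reconstruction of equation (14) consistent with this description (the original renders it only as an image, so the exact form is an assumption) is

```latex
y_i = \sum_{j=1}^{N} J_{ij}\, x_j + B_i x_0 , \qquad x_0 = 1  % (14)
```

i.e., yi is the local field obtained from an Ising energy of the form HIsing = −Σi<j Jij σi σj − Σi Bi σi.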

 Further, in S405, the weighted input yi is replaced by the calculation unit 213 using an activation function, as in the following equation.

[Equation (15) — rendered as an image in the original]

 Equation (15) corresponds to updating the current xi to a new xi, and S405 therefore means that "association" is being performed in the Hopfield-type neural network 210. Equation (15) may be replaced with a different activation function so as to use another neural network different from the Hopfield type. The step S403 is then performed again on the updated xi.

 In S403, a convergence test is performed on the updated xi. The condition is that the updated xnew no longer differs from the xold before the update, i.e. |xnew − xold| = 0. If the result is No (false), the computation of yi in S404 and the update of xi in S405 are repeated (the first computation loop). If the result is Yes (true), processing proceeds to S406 and then to the calculation unit 222 of the weight update unit 220. The steps S402 to S405 above are thus processing within the neural network 210, as shown in FIG. 6, and can be called the first computation loop processing.
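
 A minimal sketch of this first computation loop (S403-S405), with fixed weights J and B, is given below. The concrete activation function of equation (15) is not fixed in the text, so the use of tanh here and the name `hopfield_loop` are assumptions for illustration.

```python
import numpy as np

def hopfield_loop(x, J, B, max_iter=100, tol=1e-6):
    """First computation loop (S403-S405): relax x under fixed weights.

    x : (N,) current state, J : (N, N) couplings, B : (N,) magnetic-field term
    """
    for _ in range(max_iter):
        y = J @ x + B                          # S404: weighting, eq. (14)
        x_new = np.tanh(y)                     # S405: activation (assumed tanh), eq. (15)
        if np.max(np.abs(x_new - x)) < tol:    # S403: |x_new - x_old| = 0
            return x_new                       # converged -> temporary output (S406)
        x = x_new
    return x
```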

 In S406, the last updated xi for which the convergence test of S403 passed is output to the calculation unit 222 of the weight update unit 220 as a temporary association output (a provisional x output).

 Next, in S407, using the xi updated by the neural network 210, the calculation unit 222 updates the weights Jij and Bi on the basis of the information ξ stored in the memory holding unit 221. The stored information ξ is expressed as M N-dimensional vectors, as in definition (6).

[Equations (16) and (17) — rendered as an image in the original]

 Here i and j run over i = 1, ..., N and j = 1, ..., N and index the bits of the input xi. Likewise l runs over l = 1, ..., i−1, i+1, ..., j−1, j+1, ..., N, indexing the bits of the input xi but excluding i and j. The Ising Hamiltonian HIsing with the weights updated by equations (16) and (17) above can be rewritten as follows.

[Equation (18) — rendered as an image in the original]

 Replacing σi with xi in equation (18) yields the following equation.

[Equation (19) — rendered as an image in the original]

 Equation (19) shows that the Ising Hamiltonian HIsing is equal, up to a constant factor, to the Potts Hamiltonian expressed by equation (7). Consequently, the weighted Ising Hamiltonian HIsing of equation (14), with its weights updated by equations (16) and (17), is equal to the Potts Hamiltonian.

 Subsequently, in S408, the calculation unit 222 judges whether the weights Jij and Bi updated in S407 no longer differ from the Jij and Bi before the update, i.e. whether the weights have converged. The following expression is the convergence condition.

[Weight convergence condition — rendered as an image in the original]

 If the convergence test result is No (false), processing returns to the neural network 210, and the computation of yi in S404 and the update of xi in S405, i.e., a new loop computation, are repeated. If the result is Yes (true), the xi last used in the processing of the weight update unit 220 is taken as the final output and output in S409 as the final association result 105. The steps S407 to S408 above are thus processing within the weight update unit 220, as shown in FIG. 6, and can be called the second computation loop processing.
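
 As a sketch of how the two loops of flow 400 mesh, the following NumPy code alternates the Hopfield relaxation with the weight update. Equations (16) and (17) appear only as images in the original (the text says they involve products of stored bits over the remaining indices l); the normalized pattern weights pm used below, the tanh activation, and the names `update_weights`, `associate2` are therefore simplified assumptions for illustration, not the patented expressions.

```python
import numpy as np

def update_weights(x, xi):
    """Second loop, S407 sketch: recompute J and B from the stored patterns,
    each weighted by how well it matches the current state x (assumed form)."""
    M, N = xi.shape
    q = (1.0 + xi * x) / 2.0                   # per-bit match factors in (0, 1)
    logp = np.sum(np.log(q), axis=1)           # log of the equation-(7)-style products
    p = np.exp(logp - logp.max())
    p /= p.sum()                               # p_m: normalized pattern weights
    J = (2.0 / N) * (xi * p[:, None]).T @ xi   # ~ sum_m p_m xi_i^m xi_j^m
    np.fill_diagonal(J, 0.0)
    B = (1.0 / N) * (xi.T @ p)                 # magnetic-field term
    return J, B

def associate2(x0, xi, tol=1e-6, max_outer=50):
    """Flow 400 sketch: alternate the two loops until the weights converge."""
    x = 0.9 * np.asarray(x0, dtype=float)      # keep x inside (-1, 1) so q > 0
    J, B = update_weights(x, xi)
    for _ in range(max_outer):
        for _ in range(100):                   # first loop: S404/S405 under S403
            x_new = np.tanh(J @ x + B)         # eq. (14) weighting + eq. (15)
            if np.max(np.abs(x_new - x)) < tol:
                x = x_new
                break                          # S403: |x_new - x_old| ~ 0
            x = x_new
        J_new, B_new = update_weights(x, xi)   # second loop: S407
        dW = max(np.max(np.abs(J_new - J)), np.max(np.abs(B_new - B)))
        J, B = J_new, B_new
        if dW < tol:                           # S408: weights no longer change
            break
    return np.sign(x)                          # S409: final association result

# Usage, with patterns and a corrupted input as in the earlier sketch.
rng = np.random.default_rng(1)
xi = rng.choice([-1.0, 1.0], size=(8, 64))
noisy = xi[5] * rng.choice([1.0, -1.0], size=64, p=[0.9, 0.1])
print(np.array_equal(associate2(noisy, xi), xi[5]))    # expected: True
```

 The inner relaxation mirrors the `hopfield_loop` sketch given earlier; the outer loop corresponds to the weight update unit 220 recomputing Jij and Bi from the stored patterns around the current state.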

 The weight update of Jij and Bi in S407 and the convergence test in S408 described above enclose the first computation loop with the xi convergence test of S403, forming a larger second computation loop. Referring again to FIG. 5, the associative memory device 200 alternates between the processing of the neural network 210 and the processing of the weight update unit 220; the stored data ξ closest to the initial input information 104 is thereby recalled and output as the result 105.

 As shown in equation (19), the update of the weights Jij and Bi in the weight update unit 220 is arithmetic processing based on the Potts Hamiltonian. Moreover, as is clear from equations (16) and (17), the update of the weights Jij and Bi is an interaction computation carried out between the xi updated by the neural network 210 and the information ξ stored in the memory holding unit 221. Note that in this respect the associative memory device 200 of Embodiment 2 operates in the same way as the interaction calculation unit 103 in the conceptual diagram of FIG. 1. The difference from the associative memory device 100 of Embodiment 1 in FIG. 1 is that the N-bit multi-layer neuron 101 is replaced by the Hopfield neural network of FIG. 5. The neural network 210 is used in the associative memory device 200 in order to replace part of the arithmetic processing of the association process with the processing of the prior-art Ising Hamiltonian HIsing, which has a low computational load.

 The present invention also has an aspect as a method carried out in an associative memory device, sequentially performing the steps of the flowcharts 300 and 400 of FIG. 3 and FIG. 6. For example, consider an associative memory device comprising a calculation unit that executes a feedback calculation minimizing the Potts Hamiltonian HPotts and a memory holding unit that stores in advance the information ξ used for association. Corresponding to the flowchart 300, the method of performing associative memory can be implemented as a method comprising the steps of receiving an initial input xi, sequentially updating the cost function given by the Potts Hamiltonian HPotts, judging convergence, i.e., whether the cost function has been minimized, and outputting the association result once minimization has been achieved.

 Compared with the flow 300 of FIG. 3, which contains a single loop, the flow 400 of FIG. 6 consists of a first computation loop for xi based on the Ising Hamiltonian HIsing and a second computation loop for the weights Jij and Bi based on the Potts Hamiltonian HPotts. If the flow 400 of FIG. 6 were laid out in a straight time line from start (S401) to end (S409), steps belonging to the first computation loop and steps belonging to the second computation loop would appear alternately. The total number of steps is likely to be larger than in the flow 300, but the computational load of S403 to S405 of the Ising Hamiltonian HIsing processing is light, since it involves no interaction computation with the information ξ stored in the memory holding unit 221. The computational load required to obtain the associated output can therefore be made lighter than that of the associative memory device 100 of Embodiment 1. Even when the associative memory device 200 is implemented on a computer with little computing power, association can be performed in a shorter time.

 The associative memory device of the present disclosure can therefore also be implemented as one further comprising: an input/output unit 211 that accepts the input information x; a first computation loop unit 210 corresponding to a Hopfield neural network and configured to sequentially update the input information x by a first cost function based on the Ising Hamiltonian HIsing (S405), judge by the value of dF/dx whether the first cost function has been minimized (S403), and output a temporary association result (S406); and a second computation loop unit configured to sequentially update the values of the weights Jij and Bi, which represent the weighting of the input information x, by a second cost function based on the Potts Hamiltonian HPotts that defines the interaction with the information ξ held in the memory holding unit (S407), judge whether the second cost function has been minimized (S408), and, when it is judged not to have been minimized, supply the updated weights Jij and Bi to the first computation loop unit. The first computation loop unit is further configured to carry out a new loop computation minimizing the first cost function using the updated weights Jij and Bi, and, with the second cost function minimized in the second computation loop unit, the input/output unit is configured to output the latest temporary association result as the association result 105.

 As described above, the weighting unit 212 and calculation unit 213 of the neural network 210 of the associative memory device 200 of FIG. 5, as well as the calculation unit 222 of the weight update unit 220, can all be implemented by a single CPU. Since the memory holding unit 221 can be an ordinary memory, the associative memory device 200 can be implemented on an ordinary computer; associative memory is realized by executing a program for the arithmetic processing of the algorithm of FIG. 6 on the computer. The neural network 210 is a prior-art Hopfield neural network and is equivalent to an Ising-model minimization problem, so it can also be implemented in forms other than a computer-only configuration.

 For example, the neural network 210 can be implemented by a computing device known as a coherent Ising machine. The configuration of the coherent Ising machine is in no way limited. As one example of such a computing device, one can use a device in which pump light pulses are injected into a phase-sensitive amplifier inside a ring resonator composed of optical fiber, with the phase of each pulse mapped to a spin of the Ising model (Non-Patent Document 6). The coherent Ising machine may include a dedicated FPGA, a GPU, or a combination of these for computing the coupling coefficients.

 FIG. 7 shows an example configuration of an associative memory device including a physical coherent Ising machine. The associative memory device 500 is an Ising-model computing device and includes an optical-element part 501 containing a ring-shaped optical fiber and other components, and a computer part 502 that performs arithmetic processing and controls the optical elements. The optical-element part 501 includes a ring resonator composed of an optical fiber 1 and a phase-sensitive amplifier (PSA) 2. It further includes an external pulse-generating light source 5 that generates a pulse train, an optical-pulse measurement unit 3, which can be a homodyne detector that detects the phase state of the pulses, and a computing unit 4 that calculates the interactions applied to the optical pulses. The update computation of the coupling coefficients in the Ising model is performed by a coupling-coefficient setting unit 7 and a calculation-result holding unit 6; the computing unit 4, coupling-coefficient setting unit 7, and calculation-result holding unit 6 can be implemented by the CPU and memory of a computer.

 A detailed description of the operation of the associative memory device 500 is omitted here, but an Ising-model computing device can be realized by physical hardware including the optical circuit elements described above, and the neural network 210 of Embodiment 2 can be replaced by such an Ising-model computing device. The CPU and memory implementing the coupling-coefficient setting unit 7 and calculation-result holding unit 6 can also carry out the processing of the calculation unit 222 and memory holding unit 221 of the weight update unit 220 of the associative memory device 200 of FIG. 5. The associative memory device 200 of FIG. 5 can therefore be implemented with the configuration of the physical coherent Ising machine of FIG. 7 as it stands.

 In this way, the neural network 210 of Embodiment 2 can be implemented by various physical configurations capable of computation based on the Ising model of the prior-art Hopfield neural network.

[Embodiment 3]
 The associative memory device 100 of Embodiment 1 and the associative memory device 200 of Embodiment 2 described above each include, within the device, a memory holding unit 102, 221 in which the stored information ξ is held. Specifically, the memory holding unit 102, 221 can be a storage device such as a computer memory or a hard disk drive. The stored information ξ is information that has been input to the memory holding unit as a learning input and is held there, i.e., information ξ configured as a learning input. In either case, it suffices that the calculation unit (CPU) can access the stored information ξ so that the interaction computation between the input information x and the stored information ξ in the algorithms of FIG. 3 and FIG. 6 is possible.

 FIG. 8 shows the functional blocks and configuration of the associative memory device of Embodiment 3. The associative memory device 600 is obtained by omitting the memory holding unit 102 from the associative memory device 100 of Embodiment 1 shown in FIG. 2(a). In addition, an input/output unit 107 is shown explicitly as the interface that exchanges information with the outside of the device.

 The memory holding unit 102 holding the stored information ξ is provided in a device 610 separate from the associative memory device 600. The associative memory device 600 is connected to the memory holding unit 102 of the separate device 610 via a communication path 601 through its input/output unit 107. The associative memory device 600 is thus configured to be able to access the stored information ξ prepared in the memory holding unit 102 of the separate device 610.

 The communication path 601 may be wired or wireless, and the type of communication interface and protocol are in no way limited. The separate device 610 may therefore be another computer, or may have any function whatsoever — a server, a storage device, or another electronic device — as long as it is distinct from, for example, the computer implementing the associative memory device 600. It may also be a device located elsewhere, reached via a network. For example, the associative memory device 600 may be implemented on a first server computer and perform association in cooperation with a remote second server computer having the memory holding unit 102.

 In the configuration of Embodiment 3, the learning input 106-1 may be entered at the separate device 610, or the learning input 106-2 may be entered at the associative memory device 600 and held in the memory holding unit 102 via the communication path 601. In all of Embodiments 1 to 3, the input information 104 to the associative memory device and the associated result output 105 are input or output via a speaker/microphone, keypad, display/touchpad, or the like. The input information 104 and the associated result output 105 may also be input or output from yet another input/output device via a separate communication path, not shown.

 In FIG. 8, under the control of the associative memory device 600, the arithmetic processing related to the Potts Hamiltonian HPotts — the feedback calculation minimizing the Potts Hamiltonian HPotts — is executed using the memory holding unit 102 of the separate device 610; that is, the function of the interaction calculation unit 103 shown in FIG. 1 is carried out. Based on the result of the feedback calculation, the associative memory device 600 outputs, as the association result 105, the item of the stored information ξ closest to the information x input to the input/output unit 107.

 Note that although the associative memory device 600 does not itself hold the stored information ξ, when the associative memory device 600 is implemented on a computer, a physical memory — for example, the memory 121 of FIG. 2(b) — is of course provided in order to run the calculation program. For the associative memory device 200 of Embodiment 2 as well, the memory holding unit 221 can be placed in a separate device in exactly the same way as in FIG. 8. The algorithm of the feedback computation minimizing the Potts Hamiltonian HPotts in FIG. 3 and FIG. 6 is unchanged.

 As described in detail above, by performing the computation that minimizes the Potts Hamiltonian of equation (4) instead of the Ising Hamiltonian based on the prior-art Hopfield neural network, the storage capacity of an associative memory device can be greatly increased. The computation minimizing the Potts Hamiltonian can be implemented as arithmetic processing on an ordinary computer. By incorporating in part the configuration and processing corresponding to the prior-art Hopfield neural network, an associative memory device with a reduced computational load can also be obtained.

 The present invention can be used in devices that implement associative memory by computer or otherwise.

Claims (9)

1. An associative memory device comprising:
a calculation unit that executes a feedback calculation minimizing a Potts Hamiltonian HPotts; and
a memory holding unit that stores in advance information ξ used for association,
the associative memory device being configured to output, as an association result, the stored information closest to input information x, based on the result of the feedback calculation minimizing the Potts Hamiltonian HPotts.
2. The associative memory device according to claim 1, wherein the calculation unit corresponds to N-bit multi-layer neurons and is configured to:
calculate the value of a cost function F expressed by a Potts Hamiltonian that defines the interaction between the input information x and the information ξ held in the memory holding unit;
sequentially update the value of the cost function F;
determine from the value of dF/dx that the cost function F has been minimized; and
output the association result once the minimization has been achieved.
3. The associative memory device according to claim 1, further comprising:
an input/output unit that accepts the input information x;
a first computation loop unit corresponding to a Hopfield neural network and configured to sequentially update the input information x by a first cost function based on an Ising Hamiltonian HIsing, determine by the value of dF/dx whether the first cost function has been minimized, and output a temporary association result; and
a second computation loop unit operating as the calculation unit and configured to sequentially update the values of weights Jij and Bi, which represent the weighting of the input information x, by a second cost function based on the Potts Hamiltonian HPotts that defines the interaction with the information ξ held in the memory holding unit, determine whether the second cost function has been minimized, and, when it is determined not to have been minimized, supply the updated weights Jij and Bi to the first computation loop unit,
wherein the first computation loop unit is further configured to carry out a new loop computation minimizing the first cost function using the updated weights Jij and Bi, and
wherein, with the second cost function minimized in the second computation loop unit, the input/output unit is configured to output the latest temporary association result as the association result.
4. The associative memory device according to any one of claims 1 to 3, wherein the Potts Hamiltonian HPotts is expressed, with δ denoting the Kronecker delta function, by

[Equation — rendered as an image in the original]
5. The associative memory device according to claim 3, wherein the first computation loop unit is implemented by a physical Ising machine that is any one of:
a coherent Ising machine;
an FPGA, a GPU, or a combination thereof; or
an Ising-model computing device operating with optical elements including a ring resonator around which an optical pulse train circulates.
6. The associative memory device according to any one of claims 1 to 3, wherein the calculation unit is constituted by a central processing unit (CPU), the memory holding unit is constituted by a memory, and the minimization processing is carried out on a computer by a software program.

7. An associative memory device comprising:
an input/output unit with access to information ξ configured as a learning input; and
a calculation unit that executes a feedback calculation minimizing a Potts Hamiltonian HPotts,
the associative memory device being configured to output, as an association result, the item of the information ξ closest to information x input to the input/output unit, based on the result of the feedback calculation.
8. The associative memory device according to claim 7, wherein the calculation unit corresponds to N-bit multi-layer neurons and is configured to:
calculate the value of a cost function F expressed by a Potts Hamiltonian that defines the interaction between the input information x and the information ξ;
sequentially update the value of the cost function F;
determine from the value of dF/dx that the cost function F has been minimized; and
output the association result once the minimization has been achieved.
9. The associative memory device according to claim 7, further comprising:
a first computation loop unit corresponding to a Hopfield neural network and configured to sequentially update the input information x by a first cost function based on an Ising Hamiltonian HIsing, determine by the value of dF/dx whether the first cost function has been minimized, and output a temporary association result; and
a second computation loop unit operating as the calculation unit and configured to sequentially update the values of weights Jij and Bi for the input information x by a second cost function based on the Potts Hamiltonian HPotts that defines the interaction with the information ξ, determine whether the second cost function has been minimized, and, when it is determined not to have been minimized, supply the updated weights Jij and Bi to the first computation loop unit,
wherein the first computation loop unit is further configured to carry out a new loop computation minimizing the first cost function using the updated weights Jij and Bi, and
wherein, with the second cost function minimized in the second computation loop unit, the input/output unit is configured to output the latest temporary association result as the association result.
PCT/JP2023/015379 2023-04-17 2023-04-17 Associative memory device Pending WO2024218836A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/015379 WO2024218836A1 (en) 2023-04-17 2023-04-17 Associative memory device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/015379 WO2024218836A1 (en) 2023-04-17 2023-04-17 Associative memory device

Publications (1)

Publication Number Publication Date
WO2024218836A1

Family

ID=93152170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/015379 Pending WO2024218836A1 (en) 2023-04-17 2023-04-17 Associative memory device

Country Status (1)

Country Link
WO (1) WO2024218836A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09146908A (en) * 1995-11-17 1997-06-06 Atr Ningen Joho Tsushin Kenkyusho:Kk How to solve the problem
JP2006285489A (en) * 2005-03-31 2006-10-19 Kddi Corp Associative memory and its software

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09146908A (en) * 1995-11-17 1997-06-06 Atr Ningen Joho Tsushin Kenkyusho:Kk How to solve the problem
JP2006285489A (en) * 2005-03-31 2006-10-19 Kddi Corp Associative memory and its software

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ELIANA FIORELLI; IGOR LESANOVSKY; MARKUS MÜLLER: "Phase diagram of quantum generalized Potts-Hopfield neural networks", ARXIV.ORG, 21 September 2021 (2021-09-21), pages 1-15, XP091055922 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23933983

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2025514904

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2025514904

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE