WO2023158023A1 - Artificial neural network system based on capacitive coupling - Google Patents
Artificial neural network system based on capacitive coupling
- Publication number
- WO2023158023A1 (application PCT/KR2022/007808)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voltage
- output
- neural network
- vsl
- capacitive coupling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/065—Analogue means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Definitions
- the present invention relates to an artificial neural network system and, more particularly, to an artificial neural network hardware system that performs vector-matrix multiplication through capacitive coupling, based on a memcapacitive device in which a memory and a capacitor are combined.
- a deep neural network having one or more hidden layers as shown in FIG. 1 is mainly used.
- a neuromorphic system imitating a biological nervous system with hardware is being actively researched.
- a plurality of neurons simulated by hardware exist in an input layer, a hidden layer, and an output layer, and neurons in each layer are connected through hardware synapses.
- a digital computing system based on the von Neumann architecture sequentially calculates an input signal and a synaptic weight value.
- in a neuromorphic system, input signals in the form of vectors are applied in parallel to the input layer, and the vector-matrix multiplication with the synaptic weight matrix is performed simultaneously in the analog domain. That is, the neuromorphic system can reduce operation energy and time because it performs the vector-matrix multiplications of each layer of the artificial neural network in parallel.
- synapses are mainly implemented through memory devices, and neurons are implemented through CMOS circuits.
- a biological synapse is the junction between the axon of the presynaptic neuron and the dendrite of the postsynaptic neuron, and it transmits electrical signals through the secretion and uptake of neurotransmitters. The magnitude of the electrical signal transmitted to the postsynaptic neuron is adjusted according to the connection strength of the synapse.
- the connection strength of these synapses is called a synaptic weight, and biological synaptic weights are controlled by a learning process.
- a synapse weight is expressed as a conductance of the memory device.
- synapse operating characteristics can be implemented using memory devices such as SRAM (static random-access memory), RRAM (resistive random-access memory), PCM (phase-change memory), STT-MRAM (spin-transfer torque magnetoresistive random-access memory), FG (floating-gate) memory, and CTF (charge-trap flash) memory.
- FIG. 2 shows a neuromorphic system using a conventional conductance-based synaptic device.
- synaptic elements are connected to neuron circuits through separate column lines.
- an inference operation is performed through a conductance-based synapse array
- an input vector (x_1, ..., x_i, ..., x_n) in the form of voltages is applied to the synaptic weight matrix.
- each input voltage x_i is multiplied by the conductances of the corresponding synaptic element (x_i·G⁺_ij, x_i·G⁻_ij), and the resulting currents are summed through the column lines connected to the individual synaptic elements (I⁺_j = Σ_i x_i·G⁺_ij, I⁻_j = Σ_i x_i·G⁻_ij).
- the result of the vector-matrix multiplication summed on each column line is input to the neuron core after the difference operation (I⁺_j − I⁻_j) is performed in the neuron circuit.
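A minimal numerical sketch of this conventional current-summation scheme (the array sizes, conductance values, and the names g_pos/g_neg are illustrative assumptions, not from the patent):

```python
import numpy as np

def conductance_vmm(x, g_pos, g_neg):
    """Differential current-summation VMM: I_j = sum_i x_i * (G+_ij - G-_ij)."""
    i_pos = x @ g_pos          # currents summed along each positive column line
    i_neg = x @ g_neg          # currents summed along each negative column line
    return i_pos - i_neg       # difference operation done in the neuron circuit

x = np.array([0.2, 0.5, 0.1])                  # input voltages x_i (V)
g_pos = np.array([[1e-6, 2e-6],
                  [3e-6, 1e-6],
                  [2e-6, 2e-6]])               # conductances (S)
g_neg = np.full_like(g_pos, 1e-6)
print(conductance_vmm(x, g_pos, g_neg))        # summed column currents I_j (A)
```

Note that in this scheme current, and hence power, is drawn for every input even when the differential weight is zero, which is the drawback the capacitive approach below avoids.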
- a technology related to a neural network implemented in this way is disclosed in Korean Patent Registration No. 10-2126791.
- the conductance-based synapse array has the disadvantage that a certain level of power consumption continues to occur, because current flows through each column line even for a synaptic weight of '0'. In addition, a large number of fan-ins causes high power consumption. As the number of hidden layers of an artificial neural network increases, the number of synapses required by the network increases, making this disadvantage more significant.
- the present invention seeks to provide a capacitive coupling-based artificial neural network system capable of dramatically reducing energy consumption, which otherwise increases rapidly with the size of the neural network, and a method of operating the same.
- the capacitive coupling-based artificial neural network system includes: a synapse array in which a plurality of word lines and a plurality of bit lines cross each other, with memcapacitive elements combining a memory and a capacitor interposed between them; an Output Neuron provided on the output side of the synapse array; a WL decoder provided on the input side of the synapse array; and an Output Controller provided on the output side of the Output Neuron.
- VSL: voltage summed line
- the synapse array is formed so that two memcapacitive elements, disposed between two word lines selected from among the plurality of word lines and one bit line selected as a voltage summed line (VSL) from among the plurality of bit lines, constitute one synaptic element. The WL Decoder applies two inverted voltage displacements (±ΔV_WL), each added to a reference voltage, to the two selected word lines, and the synaptic element is provided to affect the voltage of the VSL with positive and negative weights through the capacitive coupling of the two memcapacitive elements. The Output Neuron has a neuron circuit including an ignition capacitor (C_int) and generates an output signal by discharging the ignition capacitor with the voltage of the VSL.
- another feature of the capacitive coupling-based artificial neural network system according to the present invention is that, while an external stimulus is input, the WL decoder generates a rising voltage and a falling voltage by adding the two voltage displacements (±ΔV_WL) to the reference voltage and simultaneously applies them as input signals to the two selected word lines, and that the intensity of the external stimulus is reflected as the intensity of the input signal by adjusting the duration of each voltage-displacement pulse.
- the Output Neuron further includes a string select transistor (SST) disposed between the VSL and the Output Controller to supply or block a control voltage from the Output Controller to the VSL, and the neuron circuit is connected to the VSL to control the discharge of the ignition capacitor with the voltage of the VSL.
- SST string select transistor
- another feature of the capacitive coupling-based artificial neural network system according to the present invention is that the input voltage computed with the weights of the synaptic elements is added to the control voltage to become the voltage of the VSL.
- another feature of the capacitive coupling-based artificial neural network system according to the present invention is that the neuron circuit includes an integrator that receives the voltage of the VSL at a gate and discharges the ignition capacitor, and a pulse generator that receives the output voltage of the integrator and generates the output signal.
- the integrator may include a charging transistor connected in parallel with the ignition capacitor between an operating voltage supply line and an output terminal; an operation control transistor and a stimulus receiving transistor connected in series between the output terminal and ground; and a reset transistor connecting the output terminal and ground, wherein the gate of the stimulus receiving transistor is connected to the VSL and the output terminal is connected to the pulse generator through a control line.
- another feature of the capacitive coupling-based artificial neural network system according to the present invention is that the memcapacitive element is formed by sequentially stacking a first dielectric layer, a charge trap layer, and a second dielectric layer on a semiconductor body, and is coupled to one of the plurality of word lines stacked on the second dielectric layer.
- a source and a drain are further formed on both sides of the first dielectric layer, or a second doped layer doped with a higher concentration than the semiconductor body is further formed under the semiconductor body.
- a second synapse array and at least one second Output Neuron are further connected between the output side of the Output Neuron and the Output Controller, and a Pulse Converter is further provided on the output side of the Output Neuron or the output side of the second Output Neuron.
- Another feature of the artificial neural network system based on capacitive coupling according to the present invention is that the input signal converted through the pulse converter is applied to the second synapse array.
- the inference operation of the artificial neural network system is characterized in that it is performed in three steps separated in time: initialization (T_phase0), integration (T_phase1), and evaluation (T_phase2).
- in the initialization step, the SST turn-on voltage is applied to the string select line (SSL) of the SST during T_phase0, a predetermined voltage is applied to the bit line of the selected string so that the VSL is precharged, and a turn-on voltage is applied to the gate of the charging transistor to charge the ignition capacitor to the operating voltage (V_DD); the integration step is performed during T_phase1 after the initialization step is completed.
- in the integration step, a rising voltage (WL+) and a falling voltage (WL−) are simultaneously applied to the two word lines, and the VSL voltage of the selected string is determined by adding, to the precharge voltage, the input voltage obtained as the result of the vector-matrix multiplication by the weights of the synaptic elements; with the EN signal applied as a turn-on voltage to the gate of the operation control transistor, the stimulus receiving transistor is turned on by the VSL voltage and the ignition capacitor is discharged; the evaluation step is performed after the integration step is completed.
- in the evaluation step, a RESET signal is input as a turn-on voltage to the gate of the reset transistor to discharge the ignition capacitor at a uniform rate; in the integration step or the evaluation step, as the voltage of the output terminal (the voltage of the ignition capacitor) drops below V_TH, the pulse generator generates the output signal.
- V_TH is the threshold voltage of the pulse generator.
- according to the present invention, two memcapacitive elements constitute one synaptic element, and two inverted voltage displacements (±ΔV_WL) are added to the reference voltage and applied to the two memcapacitive elements; through the capacitive coupling of the two memcapacitive elements, the result of the vector-matrix multiplication with positive and negative weights is added to the voltage of the VSL, and the voltage of the VSL discharges the ignition capacitor to generate the output signal.
- FIG. 1 is a conceptual diagram showing the structure of a deep neural network.
- FIG. 2 is a circuit diagram showing a neuromorphic system using a conventional conductance-based synaptic device.
- FIG. 3 is a circuit diagram showing the structure of an artificial neural network system based on capacitive coupling according to an embodiment of the present invention.
- FIG. 4 is a timing diagram of the applied voltages, showing the operation process during inference of the capacitive coupling-based artificial neural network system of FIG. 3.
- FIG. 5 is a classification diagram of memcapacitive devices to be used as synaptic devices in an embodiment of the present invention.
- FIGS. 6 to 9 are cross-sectional views showing the structure of a CTF-based memcapacitive device used as a synaptic device in one embodiment of the present invention, where (A) shows a structure formed on a lightly doped semiconductor body and (B) shows a structure in which a source and a drain are added to (A).
- FIG. 10 is an electrical characteristic diagram showing multilevel capacitance change according to body doping polarity of a CTF-based memcapacitive device used as a synaptic device in an embodiment of the present invention.
- FIG. 11 is a circuit diagram showing the structure of an artificial neural network system based on capacitive coupling according to another embodiment of the present invention.
- the capacitive coupling-based artificial neural network system includes: a synapse array 100 in which a plurality of word lines (111, 112, 121, 122) and a plurality of bit lines (130, 140) are arranged to cross each other, with memcapacitive elements (12, 14) combining a memory and a capacitor interposed between them; an Output Neuron 200 provided on the output side of the synapse array; a WL (word line) decoder 300 provided on the input side of the synapse array; and an Output Controller 400 provided on the output side of the Output Neuron.
- the synapse array 100 is formed so that two memcapacitive elements (12, 14), disposed between two word lines selected from among the plurality of word lines (e.g., 121 and 122) and one bit line selected as a voltage summed line (VSL) from among the plurality of bit lines (e.g., 140), constitute one synaptic element 10.
- the WL Decoder 300 is provided so that two inverted voltage displacements (±ΔV_WL), each added to the reference voltage, are applied to the two selected word lines 121 and 122. That is, the WL Decoder 300 divides one word line 120 into two word lines 121 and 122 and applies to them the reference voltage plus the inverted voltage displacements (±ΔV_WL).
- the synaptic element 10 is provided to affect the voltage of the VSL 140 with positive and negative weights through the capacitive coupling of the two memcapacitive elements 12 and 14.
- the Output Neuron 200 has a neuron circuit 210 including an ignition capacitor (C_int, 213) and may be provided to generate an output signal by discharging the ignition capacitor 213 with the voltage of the VSL 140.
- while an external stimulus is input, the WL Decoder 300 may generate a rising voltage (V_ref + ΔV_WL,i) and a falling voltage (V_ref − ΔV_WL,i) by adding the two voltage displacements to the reference voltage, and may simultaneously apply them as input signals to the two selected word lines 121 and 122 for a pulse duration T_WL,i (Equation 1).
- V_ref is 0 or an arbitrary value and refers to a reference voltage independent of the stimulus.
- the intensity of the external stimulus can be reflected as the intensity of the input signal by adjusting the duration of the two voltage-displacement pulses, that is, by adjusting T_WL,i in Equation 1.
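As a minimal sketch of this input encoding, assuming a linear mapping of stimulus intensity to pulse duration (the values of V_ref, ΔV_WL, and T_max are illustrative assumptions):

```python
def wl_pulses(x, v_ref=0.0, dv_wl=0.5, t_max=1.0e-6):
    """For each normalized input x_i in [0, 1], return the WL+ level,
    the WL- level, and the pulse duration T_WL,i encoding the intensity."""
    return [(v_ref + dv_wl, v_ref - dv_wl, x_i * t_max) for x_i in x]

for wl_plus, wl_minus, t_wl in wl_pulses([0.25, 1.0]):
    print(f"WL+ = {wl_plus:+.2f} V, WL- = {wl_minus:+.2f} V, T_WL = {t_wl:.2e} s")
```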
- the Output Neuron 200 may further include a plurality of SSTs (string select transistors) 220 for selecting a string 131 of synaptic elements connected along each bit line.
- each SST 220 is disposed between each VSL 130 and the Output Controller 400 and may be provided to supply, or to block, a control voltage (e.g., the precharge voltage described later) from the Output Controller 400 to the VSL 130.
- the input voltage computed with the weights of the synaptic elements is added to the control voltage to form the voltage of the VSL 130.
- the neuron circuit 210 may be connected to the VSL 130 to control the discharge of the ignition capacitor 213 with the voltage of the VSL 130 .
- the Output Controller 400 is connected to the Output Neuron 200 and may be provided to generate a BL (bit line) voltage and apply it to each VSL 130 through the plurality of SSTs 220, or, as shown in FIG. 3, to receive and process the OL (output line; 230, 240) signals output from the neuron circuit 210.
- BL bit line
- OL output line
- the Output Controller 400 may further include peripheral circuits such as SSL and BL drivers and program verification circuits, and may be provided to generate the BL voltages necessary for performing inference operations or for learning synaptic weights and to apply them to each VSL 130.
- the voltage of each VSL 130 can be obtained through a vector-matrix multiplication based on voltage summation in the C²-ANN (capacitive coupling artificial neural network). Referring to FIG. 3, this may be performed through the following process.
- the precharge voltage (V_precharge) is applied to the BL connected to one end of the SST.
- a turn-off voltage (for example, 0 V) is then applied to the SSL to turn off the SST 220 and place the VSL 130 in a floating state.
- WL + and WL - input signals are input to the synapse array 100 through the WL Decoder 300 .
- the input WL+ and WL− voltages are not limited to positive and negative values, respectively, and can be configured in various ways according to the use conditions. For example, when 0 V is used as the WL reference voltage (V_ref), V_WL,i and −V_WL,i are respectively applied to the i-th WL+ and WL− during the inference operation. At this time, the voltage of the VSL located at the j-th string 140 changes due to the capacitive coupling between the WLs and the synaptic elements, as shown in Equation 2 below:

  ΔV_VSL,j = Σ_i [(C⁺_ij − C⁻_ij) / C_total,j] · V_WL,i    (Equation 2)

- C_total,j is the total capacitance of the VSL of the j-th string 140, including the parasitic capacitance and the capacitances of all synapses connected to the VSL; C⁺_ij and C⁻_ij are the positive and negative synaptic weights located at the i-th WL and the j-th string, respectively, expressed as capacitances.
- an arbitrary voltage V_ref may also be used as the WL reference voltage. In this case, the voltages applied to the i-th WL+ and WL− during the vector-matrix multiplication are V_ref + V_WL,i and V_ref − V_WL,i, respectively, and the voltage of the VSL located in the j-th string 140 can be determined through Equation 3 below:

  V_VSL,j = V_precharge + Σ_i [(C⁺_ij − C⁻_ij) / C_total,j] · V_WL,i    (Equation 3)
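A minimal numerical sketch of Equations 2-3, computing the VSL voltage produced by the capacitive coupling of the differential memcapacitor pairs (the capacitance values, the parasitic term, and the function name vsl_voltage are illustrative assumptions):

```python
import numpy as np

def vsl_voltage(v_wl, c_pos, c_neg, v_precharge=0.5, c_parasitic=1e-15):
    """V_VSL,j = V_precharge + sum_i ((C+_ij - C-_ij) / C_total,j) * V_WL,i."""
    c_total = c_pos.sum(axis=0) + c_neg.sum(axis=0) + c_parasitic  # per string j
    dv = (v_wl @ (c_pos - c_neg)) / c_total                        # Equation 2
    return v_precharge + dv                                        # Equation 3

v_wl = np.array([0.3, -0.2, 0.5])              # input voltages V_WL,i (V)
c_pos = np.array([[2e-15, 1e-15],
                  [1e-15, 3e-15],
                  [2e-15, 2e-15]])             # C+_ij (F)
c_neg = np.full_like(c_pos, 1.5e-15)           # C-_ij (F)
print(vsl_voltage(v_wl, c_pos, c_neg))         # V_VSL for each string j
```

Because the weights act through charge redistribution on a floating node rather than through a resistive path, no static current flows during this operation.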
- the neuron circuit 210 may include an integrator 212 that discharges the ignition capacitor 213 by receiving the voltage of each VSL 140 at a gate, and a pulse generator 214 that receives the output voltage of the integrator (the voltage of the output terminal 215) and generates an output signal (the OL_j 240 signal).
- the integrator 212 includes: a charging transistor M2 connected in parallel with the ignition capacitor 213 between the operating voltage (V_DD) supply line 211 and the output terminal 215; an operation control transistor M3 and a stimulus receiving transistor M4 connected in series between the output terminal 215 and ground; and a reset transistor M5 connecting the output terminal 215 and ground. The gate of the stimulus receiving transistor M4 is connected to the VSL 140, and the output terminal 215 may be connected to the pulse generator 214 through a control line 217.
- V_DD: operating voltage
- M3 transistor for operation control
- M4 transistor for receiving stimulus
- the charging transistor M2 is a p-type MOSFET, while the SST (220, M1), the operation control transistor M3, the stimulus receiving transistor M4, and the reset transistor M5 are n-type MOSFETs.
- the MOSFET types can be exchanged with each other, or devices having the same or similar switching characteristics can be used instead.
- since the neuron circuit 210 in FIG. 3 is shown as an example, the invention is not limited to the structure or form presented.
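The following is a hedged behavioral sketch of this neuron circuit: the VSL voltage gates transistor M4 and discharges C_int during the integration phase, and the pulse generator fires once the output-terminal voltage falls below its threshold. The square-law transistor model and all device parameters are assumptions for illustration, not values from the patent:

```python
def neuron_discharge(v_vsl, t_phase1=1e-6, dt=1e-9, c_int=1e-13,
                     v_dd=1.0, v_th_m4=0.4, k=1e-3, v_th_pulse=0.5):
    """Return the output-terminal voltage after integration and whether
    the pulse generator would fire (V_out below its threshold V_TH)."""
    v_out = v_dd                              # C_int charged to V_DD (Rstb on)
    t = 0.0
    while t < t_phase1 and v_out > 0.0:
        if v_vsl > v_th_m4:                   # M4 conducts, gated by the VSL
            i_d = k * (v_vsl - v_th_m4) ** 2  # simple square-law MOSFET model
            v_out = max(v_out - i_d * dt / c_int, 0.0)
        t += dt
    return v_out, v_out < v_th_pulse

v_out, fired = neuron_discharge(v_vsl=0.7)
print(f"V_out = {v_out:.3f} V, pulse generated: {fired}")
```

A larger vector-matrix product gives a higher VSL voltage, a faster discharge, and hence an earlier output pulse, matching the operation described for FIG. 4.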
- any memcapacitive elements 12 and 14 having a combined memory-and-capacitor function may be used as the synaptic element 10 in one embodiment of the present invention.
- FIG. 5 shows a classification diagram of the memcapacitive devices 12 and 14 to be used in one embodiment of the present invention. According to this, memcapacitive devices can be largely classified into dielectric-polarization-based and charge-based types.
- a memcapacitive device based on dielectric polarization is a device whose capacitance changes due to dielectric polarization, and a typical example is a ferroelectric field effect transistor (FeFET).
- FeFET ferroelectric field effect transistor
- a memcapacitive device based on dielectric polarization adjusts the capacitance of the device by aligning the direction of dielectric polarization of a ferroelectric layer present in the device.
- charge-based memcapacitive devices are further classified into two types according to the properties of the charge storage node.
- for the first type, a charge-trap flash (CTF)-based device is a typical corresponding memcapacitive device.
- the CTF-based memcapacitive device can control the capacitance of the device by injecting electrons or holes into a charge trap layer present in the device.
- for the second type, a typical corresponding memcapacitive device is a floating gate (FG) type device.
- the FG type can be divided, according to the process method, into a single-poly FG element and a dual-poly FG element, as shown in FIG. 5.
- the FG type memcapacitive device can control the capacitance of the device by injecting electrons or holes into the FG existing in the device.
- since FIG. 5 shows examples of the memcapacitive elements to be used as the synaptic element 10 of the present invention, the invention is not limited to the types mentioned above.
- FIGS. 6 to 9 are cross-sectional views showing the structure of a CTF-based memcapacitive device used as a synaptic device in one embodiment of the present invention, where (A) shows a structure formed on a lightly doped semiconductor body and (B) shows a structure in which a source and a drain are added to (A).
- in FIG. 6(A), in the memcapacitive device 12 used in each of the above-described embodiments, a lightly doped semiconductor body is used as the bit line (VSL, 140), and a first dielectric layer 11, a charge trap layer 13, and a second dielectric layer 15 are sequentially formed on the semiconductor body 140, the device being coupled to one of the plurality of word lines 121 stacked on the second dielectric layer 15.
- FIG. 6(B) shows that the structure of FIG. 6(A) can be implemented in the form of a charge trap transistor (CTT) by adding a source 142a and a drain 142b on both sides of the first dielectric layer 11.
- the semiconductor body 140 being doped at a low concentration means that it is doped with impurities at a lower concentration than the source/drain of a MOSFET, and it may be denoted P− or N− depending on the polarity.
- a material such as SiON, SiO2, or HfOx may be used as the tunneling dielectric.
- a material such as SiNx may be used for the charge trap layer 13, and materials such as SiO2 and Al2O3 may be used as the blocking dielectric for the second dielectric layer 15.
- One of the plurality of word lines 121 may use a gate material of a memory device, that is, metal or highly doped (P + or N + ) silicon. Since the content described here is an example, materials usable for each configuration are not limited to the materials described above, and a plurality of materials may be used at the same time.
- unlike the embodiments described above, a second doped layer 140, doped at a higher concentration than the semiconductor body 141, may be further formed under the semiconductor body 141.
- this doped layer may be used as the bit line (VSL) 140.
- in other embodiments, the tunneling dielectric and the blocking dielectric are exchanged with each other, so that the latter forms the first dielectric layer 15 and the former the second dielectric layer 11; this is the only difference.
- capacitance of the device can be adjusted by injecting electrons or holes into the charge trap layer.
- Multi-level capacitance can be implemented with a single device by adjusting the amount of charge injected into the charge trap layer, and based on this, multi-level synaptic weights can be expressed.
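As a hedged illustration of expressing multi-level synaptic weights through multilevel capacitance, the sketch below maps a signed weight onto a differential (C⁺, C⁻) pair quantized to a fixed number of programmable states; the number of levels and the capacitance range are assumptions:

```python
def weight_to_capacitances(w, c_min=1e-15, c_max=3e-15, levels=16):
    """Map a signed weight w in [-1, 1] to a (C+, C-) pair, each quantized
    to one of `levels` capacitance states of a single memcapacitive device."""
    step = (c_max - c_min) / (levels - 1)
    quantize = lambda c: c_min + round((c - c_min) / step) * step
    mid = (c_min + c_max) / 2
    delta = w * (c_max - c_min) / 2
    return quantize(mid + delta), quantize(mid - delta)

c_plus, c_minus = weight_to_capacitances(0.5)
print(c_plus, c_minus, c_plus - c_minus)   # differential term C+ - C- ∝ w
```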
- FIG. 10 is an electrical characteristic diagram showing multilevel capacitance change according to body doping polarity of a CTF-based memcapacitive device used as a synaptic device in an embodiment of the present invention.
- depending on the doping polarity of the CTC body, the device is divided into P-body and N-body CTCs.
- each body polarity exhibits different capacitance characteristics depending on the applied gate voltage.
- in the P-body CTC the capacitance decreases as the gate voltage increases, as shown in FIG. 10(A), whereas in the N-body CTC the capacitance decreases as the gate voltage decreases, as shown in FIG. 10(B). Therefore, when implementing multilevel capacitance in a single device, an appropriate operating voltage (V_RESD) must be set, as shown in FIG. 10, to realize the multilevel characteristics.
- V_RESD: an appropriate operating voltage
- program and erase operations can be performed through Fowler-Nordheim (FN) tunneling by selecting SSL, WL, and BL in FIG. 3 .
- FN Fowler-Nordheim
- Table 1 below shows an FN program and erase method based on a positive operating voltage as an example.
- (A) and (C) of Table 1 are program or erase operating voltages of charge-based memcapacitive devices having gate dielectric structures as shown in FIGS. 6 and 7 .
- the FN program is performed in a manner similar to a self-boosting program inhibit (SBPI) operation used in a conventional NAND flash.
- SBPI self-boosting program inhibit
- first, the BL voltages are applied: 0 V to the selected BL and V_CC to the non-selected BLs.
- the VSLs connected to the non-selected BLs are thereby precharged to the V_CC − V_TH,SST voltage and floated.
- the FN program is then performed by applying the WL voltages: V_PGM to the selected WL and V_PGM,PASS to the non-selected WLs.
- since the floating VSLs are boosted to a high voltage by capacitive coupling, a sufficient electric field is not generated between the channel and the gate, thereby inhibiting programming.
- the devices connected to the selected WL and the selected BL are programmed by the sufficient electric field generated between the channel and the gate.
- ISPP incremental step pulse programming
- for erase, the V_PASS,C voltage applied to the SSL is set higher (V_ERS + V_TH,SST) than the voltage (V_ERS) applied to the selected BL, considering the threshold voltage (V_TH,SST) of the SST. Then, the WL voltages are applied.
- the V_ERS,PASS voltage is applied to the non-selected WLs.
- a program or erase operation may be performed with reference to (B) and (D) of Table 1.
- for programming, referring to Table 1(B), the V_PASS,B voltage is applied to the SSL and then the BL voltages are applied: the V_PGM voltage to the selected BL and V_PGM,PASS to the non-selected BLs. Through this, breakdown of the gate dielectric of the SST due to the V_PASS,B applied to the SSL is prevented.
- the V_PASS,B voltage applied to the SSL is set higher (V_PGM + V_TH,SST) than the voltage (V_PGM) applied to the selected BL, considering the threshold voltage (V_TH,SST) of the SST. Then, the WL voltages are applied: 0 V to the selected WL and the V_PGM,PASS voltage to the non-selected WLs. Through this, a sufficient electric field is generated between the channel and the gate only in the devices connected to the selected WL and the selected BL, and the program is performed.
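The bias scheme of the first (Table 1 (A)/(C)-style) FN program described above can be summarized as a simple table; the numeric values below are placeholders for illustration, since the patent names only the voltage symbols:

```python
V_PGM, V_PGM_PASS, V_CC = 18.0, 10.0, 3.3   # volts, illustrative placeholders

fn_program_bias = {
    "selected BL":     0.0,          # channel held low -> FN tunneling occurs
    "non-selected BL": V_CC,         # VSL precharged to V_CC - V_TH,SST, floated
    "selected WL":     V_PGM,        # full program voltage
    "non-selected WL": V_PGM_PASS,   # pass voltage; floating VSLs self-boost
}

for node, volts in fn_program_bias.items():
    print(f"{node:16s}: {volts:5.1f} V")
```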
- FIG. 11 is a circuit diagram showing the structure of an artificial neural network system based on capacitive coupling according to another embodiment of the present invention.
- by further connecting a second synapse array 102 and one or more second Output Neurons 202, respectively, the system can be implemented with a C²-DNN (capacitive coupling deep neural network) structure extended to a deep neural network.
- C²-DNN: capacitive coupling deep neural network
- the Output Neuron (200) becomes a Hidden Neuron.
- a Pulse Converter 214 is further provided on the output side of the Output Neuron 201, or on the output side of the second Output Neuron (not shown), of the above-described C²-ANN architecture, and the input signal converted through the Pulse Converter 214 can be applied to the second synapse array 102.
- the Pulse Converter 214 serves to convert the OL signal output from the Output Neuron 201 into an input signal for the next layer, that is, the layer of the second synapse array 102.
- the OL signal input to the Pulse Converter 214 is converted into WL+ and WL− output signals, respectively, and applied to the second synapse array 102 of the next layer.
- the C²-ANN of the present invention can thus be configured into a C²-DNN through the simple addition of components, and therefore has high scalability to a deep neural network.
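A minimal sketch of this layer stacking, assuming the Pulse Converter performs a simple rectifying conversion of each layer's output into the next layer's WL+/WL− input (the array sizes, capacitance values, and the ReLU-like conversion are assumptions):

```python
import numpy as np

def layer(v_in, c_pos, c_neg, v_precharge=0.0):
    """Equation 3 for one synapse array: voltage-summation VMM on the VSLs."""
    c_total = c_pos.sum(axis=0) + c_neg.sum(axis=0)
    return v_precharge + (v_in @ (c_pos - c_neg)) / c_total

rng = np.random.default_rng(0)
c1p = rng.uniform(1e-15, 3e-15, (4, 3)); c1n = rng.uniform(1e-15, 3e-15, (4, 3))
c2p = rng.uniform(1e-15, 3e-15, (3, 2)); c2n = rng.uniform(1e-15, 3e-15, (3, 2))

x = np.array([0.2, 0.5, 0.1, 0.4])        # first-layer WL input voltages
h = np.maximum(layer(x, c1p, c1n), 0.0)   # Pulse Converter: OL -> next WL+/WL-
y = layer(h, c2p, c2n)                    # second synapse array output
print(y)
```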
- FIG. 4 is a timing diagram of the applied voltages, showing the operation process during inference of the capacitive coupling-based artificial neural network system of FIG. 3.
- the inference operation of the artificial neural network system is carried out in three stages separated in time: initialization, integration, and evaluation.
- in the initialization step, the SST turn-on voltage is applied to the string select line (SSL) of the SST 220 during T_phase0, a predetermined voltage is applied to the bit line BL_1 of the selected string (e.g., 131) to precharge the VSL 130, and a turn-on voltage (the Rstb signal) is applied to the gate of the charging transistor M2 to charge the ignition capacitor 213 to the operating voltage V_DD.
- in the integration step, after the initialization step is completed, a rising voltage (WL+) and a falling voltage (WL−) are simultaneously applied through the WL decoder 300 to each selected pair of word lines of the synapse array 100 during T_phase1.
- the VSL voltage of the selected string 131 is determined by adding, to the precharge voltage, the input voltage obtained as the result of the vector-matrix multiplication by the weights of the synaptic elements; with the EN signal applied as a turn-on voltage to the gate of the operation control transistor M3, the stimulus receiving transistor M4 is turned on by the VSL voltage and the ignition capacitor 213 is discharged.
- in the evaluation step, a RESET signal is input as a turn-on voltage to the gate of the reset transistor M5 during T_phase2 to discharge the ignition capacitor 213 at a uniform rate.
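The three phases of FIG. 4 can be summarized as a control-signal schedule; the sketch below follows the signal names used above, with the levels described informally (an illustrative summary, not exact waveforms):

```python
phases = {
    "T_phase0 (initialization)": {
        "SSL":   "turn-on (precharge VSL from BL)",
        "Rstb":  "turn-on (charge C_int to V_DD)",
        "WL":    "V_ref", "EN": "off", "RESET": "off",
    },
    "T_phase1 (integration)": {
        "SSL":   "off (VSL floating)",
        "WL":    "WL+ = V_ref + dV_WL, WL- = V_ref - dV_WL",
        "EN":    "turn-on (M3 conducts; VSL gates M4)", "RESET": "off",
    },
    "T_phase2 (evaluation)": {
        "WL":    "V_ref", "EN": "off",
        "RESET": "turn-on (discharge C_int at a uniform rate)",
    },
}
for phase, signals in phases.items():
    print(phase)
    for sig, level in signals.items():
        print(f"  {sig:6s}: {level}")
```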
- the C²-ANN of the present invention is relatively free from non-ideal characteristics caused by crosstalk between column lines and by capacitance differences between column lines.
- the synapse array of the C²-ANN architecture has no DC current path through the synaptic elements.
- during the inference operation, the neuron circuit processes the result of the vector-matrix multiplication performed in the synapse array by discharging the V_Cint node, charged to V_DD, through the M4 transistor in the integration step. That is, no DC power is consumed, because the neuron circuit 210 performs no operations other than charging/discharging the ignition capacitor 213 during inference.
- the average power consumption of the neuron circuit 210 during the inference operation is determined by the capacitance (C_int) of the ignition capacitor 213 and V_DD.
- the neuron circuit based on the C²-ANN architecture of the present invention therefore has power consumption independent of the size of the synapse array.
- in the C²-ANN architecture of the present invention, DC power does not increase in the synapse array 100, the WL Decoder 300, or the Output Neuron 200 even when the size of the synapse array increases; during the inference operation, power is consumed only in charging/discharging the WL, the VSL, and the C_int of the neuron circuit.
- the C²-ANN of the present invention can thus dramatically reduce the power consumption of the entire architecture compared to a conventional current-summation-based neuromorphic system.
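A worked example of this dynamic-energy argument, under the stated claim that only the charging/discharging of C_int consumes power in the neuron circuit (the capacitance, supply voltage, and inference rate are illustrative assumptions):

```python
c_int, v_dd, f_inference = 1e-13, 1.0, 1e6    # 100 fF, 1 V, 1 MHz (assumed)

e_cycle = c_int * v_dd ** 2                   # energy per charge/discharge cycle (J)
p_avg = e_cycle * f_inference                 # average neuron power (W)

# Independent of the number of synapses on the string, unlike current summation.
print(f"E = {e_cycle:.1e} J/cycle, P_avg = {p_avg:.1e} W")
```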
- when the inference operation is performed, WL pulses with durations proportional to the input signal magnitudes are applied to the synapse array.
- the OL pulse is output first from the Output Neuron of the string whose vector-matrix multiplication result is largest; since the output pulse voltages drop to 0 V at the same time, the longer the OL pulse, the larger the vector-matrix multiplication result.
- the output signal of the C²-ANN architecture is therefore of the same form as the input signal applied through the WL decoder during inference.
- accordingly, the signal conversion process between hidden layers, or between a hidden layer and the output layer, can be reduced. Therefore, when the C²-ANN is extended to a deep neural network (the C²-DNN structure), low power consumption and high operating speed can be expected in the process of converting the output of the neuron circuit into the input of the next layer.
- the present invention relates to an artificial neural network system based on capacitive coupling and has industrial applicability.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Semiconductor Memories (AREA)
- Non-Volatile Memory (AREA)
Abstract
The present invention relates to an artificial neural network system based on capacitive coupling, in which two memcapacitive elements constitute one synaptic element and two inverted voltage displacements (±ΔV_WL) are respectively added and applied to the two memcapacitive elements; through the capacitive coupling of the two memcapacitive elements, these are added to the VSL voltage as positive and negative weights by means of a vector-matrix multiplication, so that the VSL voltage discharges an ignition capacitor and consequently generates an output signal, enabling a considerable reduction in power consumption compared with a conventional conductance-based neuromorphic system even when the scale of the neural network increases.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2022-0019570 | 2022-02-15 | ||
| KR1020220019570A KR102820923B1 (ko) | 2022-02-15 | 2022-02-15 | Capacitive coupling-based artificial neural network system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023158023A1 true WO2023158023A1 (fr) | 2023-08-24 |
Family
ID=87578769
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2022/007808 Ceased WO2023158023A1 (fr) | 2022-06-02 | Artificial neural network system based on capacitive coupling |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR102820923B1 (fr) |
| WO (1) | WO2023158023A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102856063B1 (ko) * | 2023-10-17 | 2025-09-04 | University of Seoul Industry Cooperation Foundation | Dual-gate TFT-based charge-storage synaptic device and synapse array |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200035305A1 (en) * | 2018-07-24 | 2020-01-30 | Sandisk Technologies Llc | Configurable precision neural network with differential binary non-volatile memory cell structure |
| CN111695678A (zh) * | 2020-06-01 | 2020-09-22 | University of Electronic Science and Technology of China | Image caption generation method based on a memristor module array |
| US20200311533A1 (en) * | 2019-03-27 | 2020-10-01 | Globalfoundries Inc. | In-memory binary convolution for accelerating deep binary neural networks |
| KR20210059815A (ko) * | 2019-11-15 | 2021-05-26 | Samsung Electronics Co., Ltd. | Memory-based neuromorphic device |
| US20210192325A1 (en) * | 2019-12-20 | 2021-06-24 | Sandisk Technologies Llc | Kernel transformation techniques to reduce power consumption of binary input, binary weight in-memory convolutional neural network inference engine |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR920701918A (ko) * | 1989-06-15 | 1992-08-12 | J. L. Chesin | Neural net using capacitive structures connecting input or output lines with differentially sensed or driven pairs of output or input lines |
| JP2021121876A (ja) * | 2018-03-30 | 2021-08-26 | Sony Semiconductor Solutions Corporation | Product-sum operation device and product-sum operation method |
| KR102405226B1 (ko) * | 2019-12-30 | 2022-06-02 | Kwangwoon University Industry-Academic Collaboration Foundation | Variable-capacitance weight memory device, weight memory system, and method of operating the same |
- 2022-02-15 KR KR1020220019570A patent/KR102820923B1/ko active Active
- 2022-06-02 WO PCT/KR2022/007808 patent/WO2023158023A1/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| KR20230122842A (ko) | 2023-08-22 |
| KR102820923B1 (ko) | 2025-06-13 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22927421; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22927421; Country of ref document: EP; Kind code of ref document: A1 |