Doevenspeck et al., 2021 - Google Patents
Noise tolerant ternary weight deep neural networks for analog in-memory inference
- Document ID
- 879424519581794661
- Authors
- Doevenspeck J
- Vrancx P
- Laubeuf N
- Mallik A
- Debacker P
- Verkest D
- Lauwereins R
- Dehaene W
- Publication year
- 2021
- Publication venue
- 2021 International Joint Conference on Neural Networks (IJCNN)
Snippet
Analog in-memory computing (AiMC) is a promising hardware solution to efficiently perform inference with deep neural networks (DNNs). Similar to digital DNN accelerators, AiMC systems benefit from aggressively quantized DNNs. In addition, AiMC systems also suffer …
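The snippet pairs two ideas: aggressively quantized (ternary) weights and noise on the analog compute fabric. As a rough illustration only, and not the paper's actual training method, the NumPy sketch below ternarizes a weight matrix with the common 0.7·mean|w| threshold heuristic (as in Li and Liu's ternary weight networks) and models AiMC conductance variation as additive Gaussian noise on the programmed weights. The function names, the threshold choice, and the noise level `sigma` are all illustrative assumptions.

```python
import numpy as np

def ternarize(w, threshold_scale=0.7):
    """Map weights to {-alpha, 0, +alpha} using a magnitude threshold.

    The 0.7 * mean|w| threshold is a common ternary-weight heuristic,
    used here only for illustration; the paper may use a different rule.
    """
    delta = threshold_scale * np.mean(np.abs(w))           # pruning threshold
    mask = np.abs(w) > delta                               # nonzero positions
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask

def noisy_aimc_matvec(w_t, x, sigma=0.05, rng=None):
    """Simulate an AiMC matrix-vector product with additive Gaussian noise
    on each analog weight, a simple stand-in for conductance variation."""
    rng = np.random.default_rng() if rng is None else rng
    w_noisy = w_t + sigma * rng.standard_normal(w_t.shape)
    return w_noisy @ x

# Toy usage: ternarize a random layer, compare clean vs. noisy outputs.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 128))
x = rng.standard_normal(128)
w_t = ternarize(w)
clean = w_t @ x
noisy = noisy_aimc_matvec(w_t, x, sigma=0.05, rng=rng)
print("relative output error:",
      np.linalg.norm(noisy - clean) / np.linalg.norm(clean))
```

In this toy setup, raising `sigma` steadily degrades the output; the paper targets ternary networks whose accuracy holds up under exactly this kind of weight perturbation.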
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
          - G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
            - G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
              - G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
          - G06N3/04—Architectures, e.g. interconnection topology
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06G—ANALOGUE COMPUTERS
      - G06G7/00—Devices in which the computing operation is performed by varying electric or magnetic quantities
        - G06G7/12—Arrangements for performing computing operations, e.g. operational amplifiers
          - G06G7/14—Arrangements for performing computing operations, e.g. operational amplifiers for addition or subtraction
- G—PHYSICS
  - G11—INFORMATION STORAGE
    - G11C—STATIC STORES
      - G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
        - G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/10—Complex mathematical operations
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
        - G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
Similar Documents
| Publication | Title |
|---|---|
| Yao et al. | Fully hardware-implemented memristor convolutional neural network |
| Le Gallo et al. | A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference |
| Lim et al. | Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices |
| Ambrogio et al. | Equivalent-accuracy accelerated neural-network training using analogue memory |
| US11087204B2 (en) | Resistive processing unit with multiple weight readers |
| WO2018158680A1 (en) | Resistive processing unit with hysteretic updates for neural network training |
| US10783963B1 (en) | In-memory computation device with inter-page and intra-page data circuits |
| CN110852429B (en) | 1T1R-based convolutional neural network circuit and operation method thereof |
| CN110569962B (en) | Convolution calculation accelerator based on 1T1R memory array and operation method thereof |
| US20210294874A1 (en) | Quantization method based on hardware of in-memory computing and system thereof |
| Bhattacharya et al. | Computing high-degree polynomial gradients in memory |
| Zhou et al. | An energy-efficient computing-in-memory accelerator with 1T2R cell and fully analog processing for edge AI applications |
| CN114614865A | Pre-coding device based on memristor array and signal processing method |
| CN112199234A | Neural network fault tolerance method based on memristor |
| Doevenspeck et al. | Noise tolerant ternary weight deep neural networks for analog in-memory inference |
| Lammie et al. | Variation-aware binarized memristive networks |
| Yang et al. | ESSENCE: Exploiting structured stochastic gradient pruning for endurance-aware ReRAM-based in-memory training systems |
| CN114186667B | A mapping method of recurrent neural network weight matrix to memristor array |
| Geng et al. | An on-chip layer-wise training method for RRAM based computing-in-memory chips |
| CN117672306A | In-memory computing integrated circuit, operation method thereof, and electronic device |
| Song et al. | Xpikeformer: Hybrid analog-digital hardware acceleration for spiking transformers |
| Kim et al. | VCAM: Variation compensation through activation matching for analog binarized neural networks |
| CN111476356A | Training method, device, equipment and storage medium of memristive neural network |
| Bao et al. | Energy efficient memristive transiently chaotic neural network for combinatorial optimization |
| US12210957B2 | Local training of neural networks |