Noise tolerant ternary weight deep neural networks for analog in-memory inference

Doevenspeck et al., 2021

Document ID
879424519581794661
Authors
Doevenspeck J
Vrancx P
Laubeuf N
Mallik A
Debacker P
Verkest D
Lauwereins R
Dehaene W
Publication year
2021
Publication venue
2021 International Joint Conference on Neural Networks (IJCNN)

Snippet

Analog in-memory computing (AiMC) is a promising hardware solution to efficiently perform inference with deep neural networks (DNNs). Similar to digital DNN accelerators, AiMC systems benefit from aggressively quantized DNNs. In addition, AiMC systems also suffer …
Continue reading at ieeexplore.ieee.org
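
The snippet's core idea can be made concrete with a short sketch: weights are quantized to the ternary set {-1, 0, +1}, and the analog crossbar's matrix-vector product is evaluated under noise. This is a minimal NumPy illustration, not the authors' method; the fixed-threshold quantizer, the additive Gaussian noise model, and the sigma value are assumptions chosen for the example.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize weights to the ternary set {-1, 0, +1}.

    The fixed magnitude threshold is illustrative only; the snippet
    does not specify the quantizer the authors train with.
    """
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

def noisy_aimc_matvec(w_ternary, x, sigma=0.02, rng=None):
    """Simulate one analog in-memory matrix-vector product.

    Additive Gaussian noise on the accumulated outputs stands in for
    analog non-idealities (device variation, read noise); the real
    noise model is hardware-specific and is an assumption here.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = w_ternary @ x  # ideal crossbar accumulation
    noise_scale = sigma * np.abs(y).mean()
    return y + noise_scale * rng.standard_normal(y.shape)

# Usage: one ternarized layer evaluated under simulated analog noise.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 128))  # full-precision layer weights
x = rng.normal(size=128)                   # layer input
y = noisy_aimc_matvec(ternarize(w), x, rng=rng)
```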

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computer systems based on biological models
    • G06N 3/02 Computer systems based on biological models using neural network models
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/0635 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computer systems based on biological models
    • G06N 3/02 Computer systems based on biological models using neural network models
    • G06N 3/04 Architectures, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 99/00 Subject matter not provided for in other groups of this subclass
    • G06N 99/005 Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06G ANALOGUE COMPUTERS
    • G06G 7/00 Devices in which the computing operation is performed by varying electric or magnetic quantities
    • G06G 7/12 Arrangements for performing computing operations, e.g. operational amplifiers
    • G06G 7/14 Arrangements for performing computing operations, e.g. operational amplifiers for addition or subtraction
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/56 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation

Similar Documents

Yao et al. Fully hardware-implemented memristor convolutional neural network
Le Gallo et al. A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference
Lim et al. Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices
Ambrogio et al. Equivalent-accuracy accelerated neural-network training using analogue memory
US11087204B2 (en) Resistive processing unit with multiple weight readers
WO2018158680A1 (en) Resistive processing unit with hysteretic updates for neural network training
US10783963B1 (en) In-memory computation device with inter-page and intra-page data circuits
CN110852429B (en) 1T1R-based convolutional neural network circuit and operation method thereof
CN110569962B (en) Convolution calculation accelerator based on 1T1R memory array and operation method thereof
US20210294874A1 (en) Quantization method based on hardware of in-memory computing and system thereof
Bhattacharya et al. Computing high-degree polynomial gradients in memory
Zhou et al. An energy efficient computing-in-memory accelerator with 1T2R cell and fully analog processing for edge AI applications
CN114614865A (en) Pre-coding device based on memristor array and signal processing method
CN112199234A (en) Neural network fault tolerance method based on memristor
Doevenspeck et al. Noise tolerant ternary weight deep neural networks for analog in-memory inference
Lammie et al. Variation-aware binarized memristive networks
Yang et al. Essence: Exploiting structured stochastic gradient pruning for endurance-aware reram-based in-memory training systems
CN114186667B (en) A mapping method of recurrent neural network weight matrix to memristor array
Geng et al. An on-chip layer-wise training method for RRAM based computing-in-memory chips
CN117672306A (en) Integrated circuit for memory and calculation, operation method thereof and electronic device
Song et al. Xpikeformer: Hybrid analog-digital hardware acceleration for spiking transformers
Kim et al. VCAM: Variation compensation through activation matching for analog binarized neural networks
CN111476356A (en) Training method, device, equipment and storage medium of memristive neural network
Bao et al. Energy Efficient Memristive Transiently Chaotic Neural Network for Combinatorial Optimization
US12210957B2 (en) Local training of neural networks