Jia et al., 2018 - Google Patents
A microprocessor implemented in 65nm CMOS with configurable and bit-scalable accelerator for programmable in-memory computing
- Document ID
- 17479219638442115564
- Author
- Jia H
- Tang Y
- Valavi H
- Zhang J
- Verma N
- Publication year
- 2018
- Publication venue
- arXiv preprint arXiv:1811.04047
Snippet
This paper presents a programmable in-memory-computing processor, demonstrated in a 65nm CMOS technology. For data-centric workloads, such as deep neural networks, data movement often dominates when implemented with today's computing architectures. This …
Concepts
- matrix material (abstract, description)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
        - G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
          - G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
            - G06F7/52—Multiplying; Dividing
              - G06F7/523—Multiplying only
                - G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
        - G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
          - G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
            - G06F7/544—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F15/00—Digital computers in general; Data processing equipment in general
        - G06F15/76—Architectures of general purpose stored programme computers
          - G06F15/78—Architectures of general purpose stored programme computers comprising a single central processing unit
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F1/00—Details of data-processing equipment not covered by groups G06F3/00 - G06F13/00, e.g. cooling, packaging or power supply specially adapted for computer application
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
          - G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- H—ELECTRICITY
  - H03—BASIC ELECTRONIC CIRCUITRY
    - H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
      - H03M1/00—Analogue/digital conversion; Digital/analogue conversion
        - H03M1/66—Digital/analogue converters
          - H03M1/74—Simultaneous conversion
            - H03M1/80—Simultaneous conversion using weighted impedances
              - H03M1/802—Simultaneous conversion using weighted impedances using capacitors, e.g. neuron-mos transistors, charge coupled devices
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- H—ELECTRICITY
  - H03—BASIC ELECTRONIC CIRCUITRY
    - H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
      - H03M1/00—Analogue/digital conversion; Digital/analogue conversion
        - H03M1/12—Analogue/digital converters
          - H03M1/34—Analogue value compared with reference values
            - H03M1/38—Analogue value compared with reference values sequentially only, e.g. successive approximation type
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Jia et al. | | A microprocessor implemented in 65nm CMOS with configurable and bit-scalable accelerator for programmable in-memory computing |
| US11714749B2 (en) | | Efficient reset and evaluation operation of multiplying bit-cells for in-memory computing |
| Valavi et al. | | A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute |
| Jia et al. | | 15.1 A programmable neural-network inference accelerator based on scalable in-memory computing |
| Zhang et al. | | PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference |
| Jiang et al. | | Analog-to-digital converter design exploration for compute-in-memory accelerators |
| Cao et al. | | Neural-PIM: Efficient processing-in-memory with neural approximation of peripherals |
| Khaddam-Aljameh et al. | | An SRAM-based multibit in-memory matrix-vector multiplier with a precision that scales linearly in area, time, and power |
| Cheon et al. | | A 2941-TOPS/W charge-domain 10T SRAM compute-in-memory for ternary neural network |
| Zhang et al. | | A 55nm 1-to-8 bit configurable 6T SRAM based computing-in-memory unit-macro for CNN-based AI edge processors |
| Liu et al. | | SME: ReRAM-based sparse-multiplication-engine to squeeze-out bit sparsity of neural network |
| Jeong et al. | | A ternary neural network computing-in-memory processor with 16T1C bitcell architecture |
| Chen et al. | | Samba: Single-ADC multi-bit accumulation compute-in-memory using nonlinearity-compensated fully parallel analog adder tree |
| Xuan et al. | | A brain-inspired ADC-free SRAM-based in-memory computing macro with high-precision MAC for AI application |
| JP7587823B2 (en) | | Configurable in-memory computing engine, platform, bit cell, and layout therefor |
| Liu et al. | | An energy-efficient mixed-bit CNN accelerator with column parallel readout for ReRAM-based in-memory computing |
| Zhou et al. | | A CFMB STT-MRAM-based computing-in-memory proposal with cascade computing unit for edge AI devices |
| Jia et al. | | A Programmable Embedded Microprocessor for Bit-scalable In-memory Computing |
| Yu et al. | | A 4-bit mixed-signal MAC array with swing enhancement and local kernel memory |
| EP4086910B1 (en) | | Multiply-accumulate (MAC) unit for in-memory computing |
| Chung et al. | | 8T-SRAM based process-in-memory (PIM) system with current mirror for accurate MAC operation |
| US20240330178A1 (en) | | Configurable in memory computing engine, platform, bit cells and layouts therefore |
| Lin et al. | | A reconfigurable in-SRAM computing architecture for DCNN applications |
| Ma et al. | | An SRAM compute-in-memory macro based on direct coupling SAR ADC and DAC reuse |
| Gu et al. | | A dual-wordline 6T SRAM computing-in-memory macro featuring full signed multi-bit computation for lightweight networks |