Chatterjee et al., 2017 - Google Patents
Towards optimal quantization of neural networks (Chatterjee et al., 2017)
- Document ID: 4187577758737284984
- Authors: Chatterjee A; Varshney L
- Publication year: 2017
- Publication venue: 2017 IEEE International Symposium on Information Theory (ISIT)
Snippet
Due to the unprecedented success of deep neural networks in inference tasks like speech and image recognition, there has been increasing interest in using them in mobile and in-sensor applications. As most current deep neural networks are very large in size, a major …
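The snippet above is the truncated abstract; the paper concerns quantizing network weights so that large models fit resource-constrained mobile and in-sensor hardware. As a minimal, purely illustrative sketch (not the authors' method), the NumPy function below performs plain uniform k-bit weight quantization; the function name, signature, and the 4-bit example are assumptions for illustration only.

```python
import numpy as np

def uniform_quantize(weights, num_bits=8):
    """Illustrative uniform quantizer (not the paper's scheme): map float
    weights onto 2**num_bits evenly spaced levels and return the
    dequantized approximation plus the integer codes and codebook params."""
    w_min, w_max = float(weights.min()), float(weights.max())
    levels = 2 ** num_bits - 1
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    # Round each weight to its nearest level index (an integer code),
    # then reconstruct the approximate weight from code, scale, and offset.
    codes = np.round((weights - w_min) / scale).astype(np.int32)
    dequantized = codes * scale + w_min
    return dequantized, codes, scale, w_min

# Example: 4-bit codes cut storage roughly 8x versus float32, and the
# per-weight reconstruction error is bounded by scale / 2.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)
w_hat, codes, scale, offset = uniform_quantize(w, num_bits=4)
print("max abs error:", np.abs(w - w_hat).max(), "bound:", scale / 2)
```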
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
            - G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
              - G06K9/6247—Extracting features by transforming the feature space based on an approximation criterion, e.g. principal component analysis
              - G06K9/6251—Extracting features by transforming the feature space based on a criterion of topology preservation, e.g. multidimensional scaling, self-organising maps
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
          - G06N3/04—Architectures, e.g. interconnection topology
            - G06N3/0454—Architectures using a combination of multiple neural nets
          - G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
            - G06N3/063—Physical realisation using electronic means
              - G06N3/0635—Physical realisation using electronic means, using analogue means
          - G06N3/08—Learning methods
            - G06N3/082—Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
        - G06N3/12—Computer systems based on biological models using genetic models
          - G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
      - G06N5/00—Computer systems utilising knowledge based models
        - G06N5/02—Knowledge representation
          - G06N5/022—Knowledge engineering, knowledge acquisition
        - G06N5/04—Inference methods or devices
      - G06N7/00—Computer systems based on specific mathematical models
        - G06N7/005—Probabilistic networks
        - G06N7/02—Computer systems based on specific mathematical models using fuzzy logic
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
Similar Documents
| Publication | Title |
|---|---|
| EP3543917B1 (en) | Dynamic adaptation of deep neural networks |
| Goel et al. | A survey of methods for low-power deep learning and computer vision |
| Hubara et al. | Binarized neural networks |
| Hubara et al. | Quantized neural networks: Training neural networks with low precision weights and activations |
| Tjandra et al. | Compressing recurrent neural network with tensor train |
| US20200134461A1 (en) | Dynamic adaptation of deep neural networks |
| Rastegari et al. | XNOR-Net: ImageNet classification using binary convolutional neural networks |
| CN109445935B (en) | Self-adaptive configuration method of high-performance big data analysis system in cloud computing environment |
| Jin et al. | Training large scale deep neural networks on the Intel Xeon Phi many-core coprocessor |
| CN103620624A (en) | Method and apparatus for locally competitive learning rules leading to sparse connectivity |
| Chatterjee et al. | Towards optimal quantization of neural networks |
| Xie et al. | Energy efficiency enhancement for CNN-based deep mobile sensing |
| Basheer et al. | Alternating layered variational quantum circuits can be classically optimized efficiently using classical shadows |
| Spallanzani et al. | Additive noise annealing and approximation properties of quantized neural networks |
| Grubic et al. | Synchronous multi-GPU deep learning with low-precision communication: An experimental study |
| Nguyen et al. | A low-power, high-accuracy with fully on-chip ternary weight hardware architecture for Deep Spiking Neural Networks |
| Chung et al. | Multi-objective evolutionary architectural pruning of deep convolutional neural networks with weights inheritance |
| CN113570037A (en) | Neural network compression method and device |
| Cui et al. | Deep Bayesian optimization on attributed graphs |
| Liu et al. | SuperPruner: Automatic neural network pruning via super network |
| KR20210157826A (en) | Method for structure learning and model compression for deep neural network |
| CN117634580A (en) | Data processing method, training method and related equipment of neural network model |
| Petschenig et al. | Quantized rewiring: hardware-aware training of sparse deep neural networks |
| Pierro et al. | Accelerating linear recurrent neural networks for the edge with unstructured sparsity |
| Verma et al. | Clustered network adaptation methodology for the resource constrained platform |