WO2023004374A1 - Hybrid machine learning architecture with neural processing unit and compute-in-memory processing elements - Google Patents
Hybrid machine learning architecture with neural processing unit and compute-in-memory processing elements
- Publication number
- WO2023004374A1 (PCT/US2022/073979)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pes
- npu
- neural
- cim
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- aspects of the present disclosure relate to machine learning, and in particular, to neural processing unit (NPU) and compute-in-memory (CIM) technologies.
- Machine learning is generally the process of producing a trained model (e.g., an artificial neural network, a tree, or other structures), which represents a generalized fit to a set of training data that is known a priori. Applying the trained model to new data produces inferences, which may be used to gain insights into the new data. In some cases, applying the model to the new data is described as “running an inference” on the new data.
- machine learning accelerators may be used to enhance a processing system’s capacity to process machine learning model data.
- Such hardware demands space and power, which is not always available on the processing device.
- “edge processing” devices such as mobile devices, always-on devices, Internet of Things (IoT) devices, and the like, typically have to balance processing capabilities with power and packaging constraints.
- accelerators may move data across common data busses, which can cause significant power usage and introduce latency into other processes sharing the data bus. Consequently, other aspects of a processing system are being considered for processing machine learning model data.
- Memory devices are one example of another aspect of a processing system that may be leveraged for performing processing of machine learning model data through so-called compute-in-memory (CIM) processes, also referred to as “in-memory computation.”
- Conventional CIM processes perform computation using analog signals, which may result in inaccuracy of computation results, adversely impacting neural network computations. Accordingly, techniques and apparatus are needed for performing computation-in-memory with increased accuracy.
- the neural-network-processing circuit generally includes a plurality of compute-in-memory (CIM) processing elements (PEs), a plurality of neural processing unit (NPU) PEs, and a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs.
- Certain aspects of the present disclosure are directed to a method for neural network processing.
- the method generally includes processing data in a neural-network processing circuit comprising a plurality of CIM PEs, a plurality of NPU PEs, and a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs; and transferring the processed data between at least one of the plurality of CIM PEs and at least one of the plurality of NPU PEs via the bus.
- the processing system generally includes a plurality of CIM PEs, a plurality of NPU PEs, a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs, a memory having computer-executable instructions stored thereon, and one or more processors configured to execute the computer-executable instructions stored thereon to transfer processed data between at least one of the plurality of CIM PEs and at least one of the plurality of NPU PEs via the bus
- processing systems configured to perform the aforementioned methods as well as those described herein; non-transitory, computer- readable media comprising instructions that, when executed by one or more processors of a processing system, cause the processing system to perform the aforementioned methods as well as those described herein; a computer program product embodied on a computer-readable storage medium comprising code for performing the aforementioned methods as well as those further described herein; and a processing system comprising means for performing the aforementioned methods as well as those further described herein.
- the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims.
- the following description and the appended drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed.
- FIGs. 1A-1D depict examples of various types of neural networks, which may be implemented by aspects of the present disclosure.
- FIG. 2 depicts an example of a traditional convolution operation, which may be implemented by aspects of the present disclosure.
- FIGs. 3A and 3B depict examples of depthwise separable convolution operations, which may be implemented by aspects of the present disclosure.
- FIG. 4 is a block diagram of an example digital compute-in-memory (DCIM) architecture, in accordance with certain aspects of the present disclosure.
- FIG. 5 illustrates an example compute-in-memory (CIM) cell for the DCIM architecture of FIG. 4, implemented as an eight-transistor (8T) static random-access memory (SRAM) cell.
- FIG. 6 is a block diagram of an example neural processing unit (NPU) architecture, in accordance with certain aspects of the present disclosure.
- FIG. 7A is a block diagram of an example hybrid architecture with DCIM processing elements (PEs) and NPU PEs sharing resources, illustrating an example dataflow sequence, in accordance with certain aspects of the present disclosure.
- FIG. 7B is a block diagram of an example hybrid architecture with DCIM PEs and NPU PEs sharing resources and with first-in, first-out (FIFO) circuits for enabling data exchange, in accordance with certain aspects of the present disclosure.
- FIG. 8 is a table comparing DCIM and NPU for different combinations of light versus heavy inputs, depths, and kernels, in accordance with certain aspects of the present disclosure.
- FIG. 9 is a block diagram of data exchanges between an example DCIM PE and an example NPU PE in a hybrid architecture, in accordance with certain aspects of the present disclosure.
- FIG. 10 illustrates an example output-stationary mapping for NPU PEs, in accordance with certain aspects of the present disclosure.
- FIG. 11 illustrates an example pseudo-weight-stationary mapping for DCIM PEs, in accordance with certain aspects of the present disclosure.
- FIG. 12 is a flow diagram illustrating example operations for neural network processing, in accordance with certain aspects of the present disclosure.
- FIG. 13 is a block diagram illustrating an example electronic device having a hybrid architecture configured to perform machine learning tasks, in accordance with certain aspects of the present disclosure.
- aspects of the present disclosure provide apparatus, methods, processing systems, and computer-readable mediums for performing data-intensive processing, such as implementing machine learning models.
- Some aspects provide a hybrid neural network architecture using both compute-in-memory (CIM) and neural processing unit (NPU) processing elements (PEs), where the CIM PEs and the NPU PEs can share resources (e.g., memory), can concurrently operate, and can transfer data from one type of PE to another type of PE within the same neural network layer or in different neural network layers (e.g., adjacent layers).
- the CIM and NPU PEs may be coupled to the same tightly coupled memory (TCM) bus for transferring weights, activation inputs, and/or outputs.
- a hybrid architecture as presented herein may offer the best (or at least better) energy consumption and speed trade-offs than conventional neural network architectures utilizing only NPU PEs or only CIM PEs.
- Neural networks are organized into layers of interconnected nodes.
- a node or neuron is where computation happens.
- a node may combine input data with a set of weights (or coefficients) that either amplifies or dampens the input data.
- the amplification or dampening of the input signals may thus be considered an assignment of relative significances to various inputs with regard to a task the network is trying to learn.
- input-weight products are summed (or accumulated), and then the sum is passed through a node’s activation function to determine whether and to what extent that signal should progress further through the network.
- a neural network may have an input layer, a hidden layer, and an output layer. “Deep” neural networks generally have more than one hidden layer.
- Deep learning is a method of training deep neural networks.
- deep learning finds the right f to transform x into y.
- Deep learning trains each layer of nodes based on a distinct set of features, which is the output from the previous layer.
- with each successive layer, the features become more complex. Deep learning is thus powerful because it can progressively extract higher-level features from input data and perform complex tasks, such as object recognition, by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data.
- a first layer of a deep neural network may learn to recognize relatively simple features, such as edges, in the input data.
- if presented with auditory data instead, the first layer of a deep neural network may learn to recognize spectral power in specific frequencies in the input data.
- the second layer of the deep neural network may then learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data, based on the output of the first layer.
- Higher layers may then learn to recognize complex shapes in visual data or words in auditory data.
- Still higher layers may learn to recognize common visual objects or spoken phrases.
- deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure.
- Neural networks such as deep neural networks (DNNs) may be designed with a variety of connectivity patterns between layers.
- FIG. 1A illustrates an example of a fully connected neural network 102.
- each node in a first layer communicates its output to every node in a second layer, so that each node in the second layer will receive input from every node in the first layer.
- FIG. 1B illustrates an example of a locally connected neural network 104.
- a node in a first layer may be connected to a limited number of nodes in the second layer.
- a locally connected layer of the locally connected neural network 104 may be configured so that each node in a layer will have the same or a similar connectivity pattern, but with connection strengths (or weights) that may have different values (e.g., values associated with local areas 110, 112, 114, and 116 of the first layer nodes).
- the locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher layer nodes in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network.
- FIG. 1C illustrates an example of a convolutional neural network 106.
- the convolutional neural network 106 may be configured such that the connection strengths associated with the inputs for each node in the second layer are shared (e.g., for local area 108 overlapping another local area of the first layer nodes).
- Convolutional neural networks are well suited to problems in which the spatial locations of inputs are meaningful.
- FIG. 1D illustrates an example of a deep convolutional network (DCN) 100 designed to recognize visual features in an image 126 generated by an image-capturing device 130.
- the DCN 100 may be trained with various supervised learning techniques to identify a traffic sign and even a number on the traffic sign.
- the DCN 100 may likewise be trained for other tasks, such as identifying lane markings or identifying traffic lights. These are just some example tasks, and many others are possible.
- the DCN 100 includes a feature-extraction section and a classification section.
- a convolutional layer 132 Upon receiving the image 126, a convolutional layer 132 applies convolutional kernels (for example, as depicted and described in FIG. 2) to the image 126 to generate a first set of feature maps (or intermediate activations) 118.
- a “kernel” or “filter” comprises a multidimensional array of weights designed to emphasize different aspects of an input data channel.
- “kernel” and “filter” may be used interchangeably to refer to sets of weights applied in a convolutional neural network.
- the first set of feature maps 118 may then be subsampled by a pooling layer (e.g., a max pooling layer, not shown) to generate a second set of feature maps 120.
- the pooling layer may reduce the size of the first set of feature maps 118 while maintaining much of the information in order to improve model performance.
- the second set of feature maps 120 may be downsampled to a 14x14 matrix from a 28x28 matrix by the pooling layer.
- This process may be repeated through many layers.
- the second set of feature maps 120 may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown).
- the second set of feature maps 120 is provided to a fully connected layer 124, which in turn generates an output feature vector 128.
- Each feature of the output feature vector 128 may include a number that corresponds to a possible feature of the image 126, such as “sign,” “60,” and “100.”
- a softmax function (not shown) may convert the numbers in the output feature vector 128 to a probability.
- an output 122 of the DCN 100 is a probability of the image 126 including one or more features.
- a softmax function may convert the individual elements of the output feature vector 128 into a probability in order that an output 122 of DCN 100 is one or more probabilities of the image 126 including one or more features, such as a sign with the number “60” thereon, as in image 126.
- the probabilities in the output 122 for “sign” and “60” should be higher than the probabilities of the other elements of the output 122, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100.”
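- As a non-limiting illustration, the following Python sketch shows how a softmax function converts raw output-feature-vector values into probabilities that sum to one; the class labels and logit values are hypothetical and are not taken from the disclosure.

```python
import numpy as np

def softmax(logits):
    """Convert raw output-feature-vector values into probabilities."""
    shifted = logits - np.max(logits)   # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# Hypothetical logits for classes ["sign", "30", "40", "50", "60", "70", "80", "90", "100"]
logits = np.array([4.1, 0.2, 0.3, 0.1, 3.8, 0.2, 0.1, 0.3, 0.2])
probs = softmax(logits)
print(probs)          # the entries for "sign" and "60" dominate
print(probs.sum())    # 1.0
```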
- the output 122 produced by the DCN 100 may be incorrect.
- an error may be calculated between the output 122 and a target output known a priori.
- the target output is an indication that the image 126 includes a “sign” and the number “60.”
- the weights of the DCN 100 may then be adjusted through training so that a subsequent output 122 of the DCN 100 achieves the target output (with high probabilities).
- a learning algorithm may compute a gradient vector for the weights.
- the gradient vector may indicate an amount that an error would increase or decrease if a weight were adjusted in a particular way.
- the weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “backpropagation” because this adjustment process involves a “backward pass” through the layers of the DCN 100.
- the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient.
- This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level.
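- The following Python sketch is a minimal illustration of stochastic gradient descent over small batches of examples; the quadratic loss, batch size, and learning rate are illustrative assumptions rather than parameters of the disclosed training.

```python
import numpy as np

def sgd_step(weights, grad_fn, batch, learning_rate=0.01):
    """One stochastic-gradient-descent update using a gradient
    estimated from a small batch of examples."""
    grad = grad_fn(weights, batch)   # approximates the true error gradient
    return weights - learning_rate * grad

# Hypothetical quadratic loss: L(w) = ||X w - y||^2 / (2 * batch size)
def grad_fn(w, batch):
    X, y = batch
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(256, 2))
y = X @ w_true
w = np.zeros(2)
for step in range(100):
    idx = rng.choice(256, size=16, replace=False)   # small batch of examples
    w = sgd_step(w, grad_fn, (X[idx], y[idx]), learning_rate=0.1)
print(w)   # approaches w_true as the error stops decreasing
```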
- the DCN 100 may be presented with new images, and the DCN 100 may generate inferences, such as classifications, or probabilities of various features being in the new image.
- Convolution is generally used to extract useful features from an input data set. For example, in convolutional neural networks, such as described above, convolution enables the extraction of different features using kernels and/or filters whose weights are automatically learned during training. The extracted features are then combined to make inferences.
- An activation function may be applied before and/or after each layer of a convolutional neural network.
- Activation functions are generally mathematical functions that determine the output of a node of a neural network. Thus, the activation function determines whether a node should pass information or not, based on whether the node’s input is relevant to the model’s prediction.
- for a convolution in which y = conv(x) (i.e., y is the convolution of x), both x and y may be generally considered as “activations.” Further, x may also be referred to as “pre-activations” or “input activations,” as x exists before the particular convolution, and y may be referred to as output activations or a feature map.
- FIG. 2 depicts an example of a traditional convolution in which a 12-pixel x 12-pixel x 3-channel input image 202 is convolved using a 5 x 5 x 3 convolution kernel 204 and a stride (or step size) of 1.
- the resulting feature map 206 is 8 pixels x 8 pixels x 1 channel.
- the traditional convolution may change the dimensionality of the input data as compared to the output data (here, from 12 x 12 to 8 x 8 pixels), including the channel dimensionality (here, from 3 channels to 1 channel).
- the convolution kernel 204 is shown as corresponding to a portion of the input image 202 with which the kernel is convolved to generate a single element of the feature map 206.
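- As an illustration of the shape arithmetic described above, the following Python sketch performs a traditional convolution of a 12 x 12 x 3 input with a single 5 x 5 x 3 kernel at stride 1, producing an 8 x 8 x 1 feature map; random data stands in for the image 202 and kernel 204.

```python
import numpy as np

def conv2d_single_kernel(image, kernel, stride=1):
    """Traditional convolution: one kernel spanning all input channels
    produces a single-channel feature map."""
    H, W, C = image.shape
    kH, kW, kC = kernel.shape
    assert C == kC
    out_h = (H - kH) // stride + 1
    out_w = (W - kW) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kH, j*stride:j*stride+kW, :]
            out[i, j] = np.sum(patch * kernel)   # one element of the feature map
    return out

image = np.random.rand(12, 12, 3)     # 12 x 12 x 3 input image
kernel = np.random.rand(5, 5, 3)      # 5 x 5 x 3 convolution kernel
feature_map = conv2d_single_kernel(image, kernel, stride=1)
print(feature_map.shape)              # (8, 8), i.e., an 8 x 8 x 1 feature map
```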
- a spatially separable convolution, such as depicted in FIG. 2, may be factorized into two components: (1) a depthwise convolution, where each spatial channel is convolved independently by a depthwise convolution (e.g., a spatial fusion); and (2) a pointwise convolution, where all the spatial channels are linearly combined (e.g., a channel fusion).
- a depthwise separable convolution is depicted in FIGs. 3A and 3B.
- during spatial fusion, a network learns features from the spatial planes, and during channel fusion, the network learns relations between these features across channels.
- a depthwise separable convolution may be implemented using 5x5 kernels for spatial fusion, and 1x1 kernels for channel fusion.
- the channel fusion may use a 1x1xd kernel that iterates through every single point in an input image of depth d, where the depth d of the kernel generally matches the number of channels of the input image.
- Channel fusion via pointwise convolution is useful for dimensionality reduction for efficient computations.
- Applying 1x1xd kernels and adding an activation layer after the kernel may give a network added depth, which may increase the network’s performance.
- the 12-pixel x 12-pixel x 3-channel input image 302 is convolved with a filter comprising three separate kernels 304A-C, each having a 5 x 5 x 1 dimensionality, to generate a feature map 306 of 8 pixels x 8 pixels x 3 channels, where each channel is generated by an individual kernel among the kernels 304A-C with the corresponding shading in FIG. 3A.
- Each convolution kernel 304A-C is shown as corresponding to a portion of the input image 302 with which the kernel is convolved to generate a single element of the feature map 306.
- feature map 306 is further convolved (as shown in FIG. 3B) using a pointwise convolution operation with a kernel 308 having dimensionality 1 x 1 x 3 to generate a feature map 310 of 8 pixels x 8 pixels x 1 channel.
- feature map 310 has reduced dimensionality (1 channel versus 3 channels), which allows for more efficient computations therewith.
- multiple (e.g., m) pointwise convolution kernels 308 can be used to increase the channel dimensionality of the convolution output.
- with m = 256, for example, 256 1x1x3 kernels 308 can be generated, in which each output is an 8-pixel x 8-pixel x 1-channel feature map (e.g., feature map 310), and these feature maps can be stacked to get a resulting feature map of 8 pixels x 8 pixels x 256 channels.
- the resulting increase in channel dimensionality provides more parameters for training, which may improve a convolutional neural network’s ability to identify features (e.g., in input image 302).
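- The following Python sketch models the depthwise separable convolution of FIGs. 3A and 3B (spatial fusion with three 5 x 5 x 1 kernels, followed by channel fusion with a 1 x 1 x 3 pointwise kernel); the random data and the naive loop implementation are illustrative only.

```python
import numpy as np

def depthwise_conv(image, dw_kernels):
    """Spatial fusion: convolve each channel independently with its own 5x5x1 kernel."""
    H, W, C = image.shape
    k = dw_kernels.shape[0]                   # kernel size, e.g., 5
    out = np.zeros((H - k + 1, W - k + 1, C))
    for c in range(C):
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, c] = np.sum(image[i:i+k, j:j+k, c] * dw_kernels[:, :, c])
    return out

def pointwise_conv(fmap, pw_kernel):
    """Channel fusion: a 1x1xd kernel linearly combines all channels at each point."""
    return np.tensordot(fmap, pw_kernel, axes=([2], [0]))   # shape (H, W)

image = np.random.rand(12, 12, 3)
dw_kernels = np.random.rand(5, 5, 3)          # three separate 5x5x1 kernels
pw_kernel = np.random.rand(3)                 # one 1x1x3 pointwise kernel
spatial = depthwise_conv(image, dw_kernels)   # (8, 8, 3), as in FIG. 3A
channel = pointwise_conv(spatial, pw_kernel)  # (8, 8), i.e., 8x8x1, as in FIG. 3B
print(spatial.shape, channel.shape)
```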
- CIM-based machine learning (ML)/artificial intelligence (AI) may be used for a wide variety of tasks, including image and audio processing and making wireless communication decisions (e.g., to optimize, or at least increase, throughput and signal quality).
- CIM may be based on various types of memory architectures, such as dynamic random-access memory (DRAM), static random-access memory (SRAM) (e.g., based on an SRAM cell as in FIG. 5), magnetoresistive random-access memory (MRAM), and resistive random-access memory (ReRAM), and may complement various types of processing units, such as central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and AI accelerators.
- CIM may beneficially reduce the “memory wall” problem, which is where the movement of data in and out of memory consumes more power than the computation of the data.
- significant power savings may be realized. This is particularly useful for various types of electronic devices, such as lower power edge processing devices, mobile devices, and the like.
- a mobile device may include a memory device configured for storing data and performing CIM operations.
- the mobile device may be configured to perform an ML/AI operation based on data generated by the mobile device, such as image data generated by a camera sensor of the mobile device.
- a memory controller unit (MCU) of the mobile device may thus load weights from another on-board memory (e.g., flash or RAM) into a CIM array of the memory device and allocate input feature buffers and output (e.g., output activation) buffers.
- the processing device may then commence processing of the image data by loading, for example, a layer in the input buffer and processing the layer with weights loaded into the CIM array. This processing may be repeated for each layer of the image data, and the outputs (e.g., output activations) may be stored in the output buffers and then used by the mobile device for an ML/AI task, such as facial recognition.
- FIG. 4 is a block diagram of an example DCIM circuit 400, in accordance with certain aspects of the present disclosure.
- the DCIM circuit 400 may function as a single DCIM processing element (PE).
- the DCIM circuit 400 includes a CIM array 401 (e.g., a DCIM array) having thirty-two word-lines 404₀ to 404₃₁ (also referred to as “rows”) and eight columns 406₀ to 406₇ (e.g., each column may be composed of multiple bit-lines, such as thirty-two bit-lines).
- Word-lines 404₀ to 404₃₁ are collectively referred to as “word-lines (WLs) 404,” and columns 406₀ to 406₇ are collectively referred to as “columns 406.”
- although the CIM array 401 is implemented with 32 word-lines and 8 columns to facilitate understanding, the CIM array may be implemented with any number of word-lines and with any number of columns.
- CIM cells 402₀₋₀ to 402₃₁₋₇ are implemented at the intersections of the WLs 404 and columns 406.
- Each of the CIM cells 402 may be implemented using the CIM cell architecture described below with respect to FIG. 5, for example.
- the CIM cells 402 may be loaded with the weight bits of a neural network.
- the activation inputs may be provided as an input matrix (e.g., a 32-row by 8-column matrix) to the CIM array 401, one vector at a time, or as an input vector shared across the columns (e.g., a 32-row by 1-column vector that is shared or hardwired across 8 columns).
- activation input bits a(0,0) to a(31,0) (e.g., a first vector) may be provided to respective word-lines 404, and the CIM cells 402 may store weights w(0,0) to w(31,7) of the neural network, for example.
- CIM cells 402₀₋₀ to 402₀₋₇ may store weight bits w(0,0) to w(0,7), CIM cells 402₁₋₀ to 402₁₋₇ may store weight bits w(1,0) to w(1,7), and so on.
- Each word-line may store a multi-bit weight.
- weight bits w(0,0) to w(0,7) may represent eight bits of a weight of a neural network (e.g., an 8-bit weight).
- Each CIM cell 402 may perform bit-wise multiplication of a received activation input bit with the weight bit stored in the CIM cell and pass the result to the output of the CIM cell (e.g., the read bit-line (RBL), as explained with respect to FIG. 5)
- the DCIM circuit 400 may include a bit-column adder tree 409, which may include eight adder trees 410₀ to 410₇ (collectively referred to as “adder trees 410”), each adder tree being implemented for a respective one of the columns 406.
- each of the adder trees 410 adds the output signals from the CIM cells 402 on the respective one of the columns 406, and the adder trees 410 may operate in parallel (e.g., concurrently).
- the outputs of the adder trees 410 may be coupled to a weight-shift adder tree circuit 412, as shown.
- the weight-shift adder tree circuit 412 includes multiple weight-shift adders 414, each including a bit-shift-and-add circuit to facilitate the performance of a bit-shifting-and-addition operation.
- the CIM cells on column 406₀ may store the most-significant bits (MSBs) for respective weights on each word-line 404, whereas the CIM cells on column 406₇ may store the least-significant bits (LSBs) for respective weights on each word-line. Therefore, when performing the addition across the columns 406, a bit-shift operation is performed to shift the bits to account for the significance of the bits on the associated column.
- the output of the weight-shift adder tree circuit 412 is provided to an activation-shift accumulator circuit 416.
- the activation-shift accumulator circuit 416 includes a bit-shift circuit 418, a serial accumulator 420, and a flip-flop (FF) array 422.
- the FF array 422 may be used to implement a register.
- the various elements of the DCIM circuit 400 of FIG. 4 may be operated with a common clock frequency (as indicated by the label “System Frequency x 1”).
- activation circuitry 490 provides a first set of activation input bits a(0,0) to a(31,0) (e.g., a first vector in a batch of thirty-two activation input features) to the CIM cells 402 for computation during a first activation cycle.
- the first set of activation input bits a(0,0) to a(31,0) may represent the most-significant bits of the activation inputs, for example.
- the outputs of computations on each column 406 are added using a respective one of the adder trees 410.
- the outputs of the adder trees 410 are added using the weight-shift adder tree circuit 412, the results of which are provided to the activation-shift accumulator circuit 416.
- the same operation is performed for other sets of activation input bits (other input vectors in the batch) during subsequent activation cycles, such as activation input bits a(0,1) to a(31,1) (e.g., a second vector) that may represent the second most-significant bits of the activation inputs, and so on until activation input bits representing the least-significant bits of the activation inputs are processed.
- the bit-shift circuit 418 performs a bit-shift operation based on the activation cycle.
- the bit-shift circuit 418 may perform an 8-bit shift for the first activation cycle, a 7-bit shift for the second activation cycle, and so on.
- the outputs of the bit-shift circuit 418 are accumulated using the serial accumulator 420 and stored in the FF array 422, which may be used as a register to transfer the final accumulation result to another component (e.g., an output TCM or digital post-processing logic, as described below).
- the DCIM circuit 400 of FIG. 4 provides bit-wise storage and bit-wise multiplication.
- the adder trees 410 perform a population count addition for the columns 406. That is, each of the adder trees 410 adds the output signals of the CIM cells for a column (e.g., adding all 32 rows per column).
- the weight-shift adder tree circuit 412 (e.g., having three stages as shown for eight columns) then adds the outputs of the adder trees 410 across the columns, applying bit-shifts to account for the weight-bit significance associated with each column.
- the activation-shift accumulator circuit 416 combines the results from multiple (e.g., eight) activation cycles and outputs the final accumulation result.
- bit-shift circuit 418 shifts the bits at the output of the weight-shift adder tree circuit 412 based on the associated activation cycle.
- the serial accumulator 420 accumulates the shifted adder output generated by the bit- shift circuit 418.
- the transfer register implemented using the FF array 422 copies the output of the serial accumulator 420 after the computation for the last activation cycle has been completed.
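- The following Python sketch is a simplified behavioral model of the DCIM dataflow described above (bit-wise multiplication in the CIM cells, per-column population-count adder trees, the weight-shift adder tree, and the activation-shift accumulator over bit-serial activation cycles); it assumes unsigned 8-bit activations and weights and abstracts away the exact shift scheduling and any signed-number handling of the actual circuit.

```python
import numpy as np

def dcim_mac_model(activations, weights, act_bits=8, wt_bits=8):
    """Behavioral model of one DCIM column group (32 rows x 8 weight-bit columns).

    activations: (32,) unsigned integers, streamed bit-serially on the word-lines
    weights:     (32,) unsigned integers, stored across 8 columns (MSBs in column 0)
    Returns the accumulated dot product sum(a[r] * w[r]).
    """
    accumulator = 0
    for a_cycle in range(act_bits):                      # one activation cycle per bit, MSB first
        a_bit_pos = act_bits - 1 - a_cycle
        a_bits = (activations >> a_bit_pos) & 1          # activation bits fed on the word-lines
        weight_shift_sum = 0
        for col in range(wt_bits):                       # per-column adder trees
            w_bit_pos = wt_bits - 1 - col                # column 0 holds the MSBs
            w_bits = (weights >> w_bit_pos) & 1
            column_popcount = int(np.sum(a_bits & w_bits))    # bit-wise multiply + population count
            weight_shift_sum += column_popcount << w_bit_pos  # weight-shift adder tree
        # Activation-shift accumulator: this model shifts by the bit significance;
        # the shift convention in the actual circuit may differ.
        accumulator += weight_shift_sum << a_bit_pos
    return accumulator

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=32)
w = rng.integers(0, 256, size=32)
assert dcim_mac_model(a, w) == int(np.dot(a, w))   # matches a direct multiply-and-accumulate
```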
- the DCIM circuit 400 provides linear energy scaling across computations using different bit-sizes of activation inputs and/or weights.
- using the adder trees 410 and weight-shift adder tree circuit 412 provides bit-size configurability, allowing for an n-bit activation input with an m-bit weight accumulation, n and m being positive integers.
- the energy consumption associated with the DCIM circuit 400 may scale linearly based on the configured bit-size for the activation inputs and weights.
- the example DCIM circuit 400 of FIG. 4 may be comparatively compact (in terms of area occupied) and may consume relatively low energy.
- the DCIM circuit 400 and the weight-stationary mapping used therein may have some disadvantages, which are discussed below.
- the term “weight-stationary” generally refers to a re-use architecture where the neural network weights remain stationary during operation (e.g., after being initially loaded) and the inputs are streamed in.
- a “pseudo-weight-stationary mapping” generally refers to a weight-stationary re-use scheme that processes a batch of input features for each of multiple depth-cycles, in an effort to generate the final outputs as quickly as possible.
- the DCIM circuit 400 enables a pseudo-weight-stationary scheme, where a batch of 32 activation input bits may be concurrently processed.
- a smaller batch size (e.g., 32 versus 256 features) allows the final outputs to be generated more quickly.
- weights are re-used for the different sets of activation input bits in the input batch.
- the final outputs may be transferred to the memory (e.g., the output TCM), as described below.
- FIG. 5 illustrates an example CIM cell 500 of a static random-access memory (SRAM), which may be implemented in a CIM array, such as the CIM array 401 in the DCIM circuit 400 of FIG. 4.
- the CIM cell 500 may be referred to as an “eight-transistor (8T) SRAM cell” because the CIM cell is implemented with eight transistors.
- the CIM cell 500 may include a cross-coupled inverter pair 524 having an output 514 and an output 516.
- the cross-coupled inverter pair output 514 is selectively coupled to a write bit-line (WBL) 506 via a pass-gate transistor 502, and the cross-coupled inverter pair output 516 is selectively coupled to a complementary write bit-line (WBLB) 520 via a pass-gate transistor 518.
- WBL 506 and WBLB 520 are configured to provide complementary digital signals to be written (e.g., stored) in the cross-coupled invertor pair 524.
- the WBL and WBLB may be used to store a bit for a neural network weight in the CIM cell 500.
- pass-gate transistors 502, 518 may be coupled to a write word-line (WWL) 504, as shown.
- a digital signal to be written may be provided to the WBL (and a complement of the digital signal is provided to the WBLB).
- the pass-gate transistors 502, 518, which are implemented here as n-type field-effect transistors (NFETs), are then turned on by providing a logic high signal to the WWL 504, resulting in the digital signal being stored in the cross-coupled inverter pair 524.
- the cross-coupled inverter pair output 514 may be coupled to a gate of a transistor 510.
- the source of the transistor 510 may be coupled to a reference potential node (Vss or electrical ground), and the drain of the transistor 510 may be coupled to a source of a transistor 512.
- the drain of the transistor 512 may be coupled to a read bit-line (RBL) 522, as shown.
- the gate of transistor 512 may be controlled via a read word-line (RWL) 508.
- the RWL 508 may be controlled via an activation input signal.
- the RBL 522 may be precharged to logic high. If both the activation input bit and the weight bit stored at the cross-coupled inverter pair output 514 are logic high, then transistors 510, 512 are both turned on, electrically coupling the RBL 522 to the reference potential node at the source of transistor 510 and discharging the RBL 522 to logic low. If either the activation input bit or the weight bit stored at the cross-coupled inverter pair output 514 is logic low, then at least one of the transistors 510, 512 will be turned off, such that the RBL 522 remains logic high. Thus, the output of the CIM cell 500 at the RBL 522 is logic low only when both the weight bit and the activation input bit are logic high, and is logic high otherwise, effectively implementing a NAND-gate operation.
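- A minimal model of this read behavior is sketched below in Python; it only captures the logical NAND relationship between the stored weight bit and the activation input bit, not the analog precharge/discharge timing.

```python
def cim_cell_read(weight_bit: int, activation_bit: int) -> int:
    """Model of the 8T SRAM CIM cell read path.

    The read bit-line (RBL) is precharged high and is pulled low only when
    both the stored weight bit and the activation input bit (on the RWL) are 1,
    i.e., the cell behaves as a NAND gate; NOT(RBL) gives the 1-bit product.
    """
    rbl = 0 if (weight_bit == 1 and activation_bit == 1) else 1
    return rbl

for w in (0, 1):
    for a in (0, 1):
        print(w, a, cim_cell_read(w, a))   # RBL is low only for w=1, a=1
```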
- FIG. 6 is a block diagram of an example neural processing unit (NPU) architecture 600, in accordance with certain aspects of the present disclosure.
- An NPU may also be referred to as a neural network signal processor (NSP), but for consistency, the present disclosure uses the term “NPU.”
- the NPU architecture 600 may have a weight tightly coupled memory (TCM) bus 602, an activation TCM bus 604, an output TCM bus 610, digital post-processing logic 608, and multiple NPU processing elements (PEs) 606.
- Each of the NPU PEs 606 may include multiple multiply-and-accumulate (MAC) units 612, an adder tree 614, and an accumulator register 616, as shown.
- the digital post-processing logic 608 may perform any of various suitable digital processing operations on the accumulation result from the NPU PEs, such as biasing, batch normalization (BN), linear/non-linear thresholding, quantization, etc.
- the NPU architecture 600 of FIG. 6 offers parallel MAC operation for both activation inputs and weights, and a single computation cycle may generate the accumulation result.
- the NPU PEs 606 may use an output-stationary architecture in order to re-use the accumulator.
- output-stationary generally refers to a re-use architecture where the computation results remain stationary during operation, but the inputs and weights move in opposite directions through the architecture. That being said, the NPU architecture 600 may be limited by the TCM bandwidth for feeding data (e.g., activation inputs and/or weights), and the output-stationary architecture may have a comparatively large energy penalty for weight loading at each cycle.
- each MAC unit 612 may occupy a relatively large area, such that the NPU architecture 600 may take up a lot of space.
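- The following Python sketch is a behavioral model of a single NPU PE of FIG. 6 (parallel MAC units feeding an adder tree and an output-stationary accumulator register); the choice of four MAC units per accumulation is an assumption made only for illustration.

```python
import numpy as np

class NpuPeModel:
    """Behavioral sketch of an NPU processing element: parallel MAC units,
    an adder tree, and an accumulator register (output-stationary).
    The number of MAC units (here 4) is an illustrative assumption."""

    def __init__(self, num_macs=4):
        self.num_macs = num_macs
        self.accumulator = 0          # output-stationary accumulator register

    def cycle(self, activations, weights):
        """One computation cycle: parallel multiplies, adder tree, accumulate."""
        products = activations * weights            # parallel MAC units
        self.accumulator += int(np.sum(products))   # adder tree feeding the accumulator
        return self.accumulator

pe = NpuPeModel()
a = np.array([1, 2, 3, 4])
w = np.array([5, 6, 7, 8])
pe.cycle(a, w)
print(pe.accumulator)   # 70 after a single computation cycle
```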
- weight-stationary generally refers to a re-use architecture where the neural network weights remain stationary during operation (e.g., after being initially loaded) and the inputs are streamed in. Weight-stationary mapping may be used in CIM to reduce the overhead of the weight update time during operation.
- CIM and other weight-stationary mapping schemes may have some challenges in certain applications.
- the weight-stationary operation of some neural-network-processing circuits may force these circuits to offload and reload (e.g., write and read) partial accumulation results to a memory (e.g., the output TCM) for the final accumulation.
- partial accumulation results (partial sums) are not final data, or in other words, are not yet ready to become (or to be transferred to digital post-processing logic before the results become) an activation input for the next layer nor data to be stored in the output TCM as the final result of a layer.
- partial sums may be temporarily stored in the output TCM and read back to the DCIM PEs for further processing in one or more cycles until the final accumulation output is ready. These partial sums may then be discarded when the final outputs are ready to be processed (e.g., by the digital post-processing logic).
- weight-stationary mapping may force the partial accumulation results to be written to a buffer memory and read back from the buffer memory for a subsequent input feature multiply-and-accumulate (MAC) operation, which may create overhead in terms of energy and a performance penalty (e.g., in terms of lower tera-operations per second (TOPS)) if this read/write cannot be handled in the same MAC cycle.
- CIM may have reduced flexibility, for instance when mapping to workloads with a low number of kernels and/or a low depth (e.g., a low number of neural network layers). CIM may also have limited utilization, particularly in workloads with a low number of kernels and a large number of inputs.
- CIM may most likely suffer from a performance penalty (e.g., reduced TOPS) in output-stationary workloads, due to loading CIM weights to the CIM cells row-by-row during operation.
- output-stationary generally refers to a re-use architecture where the computation results remain stationary during operation, but the inputs and weights move in opposite directions through the architecture.
- NPUs are well suited to workloads favoring output-stationary mappings and/or a large degree of input feature parallelism.
- NPUs may suffer from scalability to a large number of kernels, large area occupation because of weight storage and multiplication, and a strong dependence of performance (e.g., TOPS) on memory bandwidth.
- NPUs may be limited to a low number of rows per accumulation; otherwise, there may be a speed and area penalty.
- CIM units are generally better than NPUs in terms of energy efficiency, whereas NPUs are generally better than CIM units for depth-wise convolution.
- DCIM and NPU processing elements may be able to use shared memory resources, may be able to concurrently operate, and may be able to transfer data from one compute unit (e.g., NPU/DCIM) to another (e.g., DCIM/NPU), which allows cascading within the same layer or consecutive layers of a neural network.
- the DCIM PEs may be implemented with a pseudo-weight-stationary mapping, and the NPU PEs may be implemented with an output-stationary mapping, such that the DCIM and NPU PEs can pipeline the data traffic within the same layer.
- FIG. 7A is a block diagram of an example hybrid architecture 700 with DCIM PEs 702 and NPU PEs 703 sharing resources, illustrating an example dataflow sequence, in accordance with certain aspects of the present disclosure.
- the DCIM PEs 702 may be implemented with any of various suitable DCIM circuits, such as the DCIM circuit 400 of FIG. 4.
- the NPU PEs 703 may be implemented with any of various suitable NPU circuits, such as the NPU PEs 606 described with respect to FIG. 6.
- the hybrid architecture 700 may also include a global memory 704, a weight tightly coupled memory (TCM) 706, an activation TCM 708, an output TCM 710, bus arbitration logic 712, digital post-processing logic 713, a memory bus 714, and a PE bus 716 (e.g., common bus with a FIFO).
- a “TCM” generally refers to a memory accessed by a dedicated connection from the processor(s), such as the PEs 702, 703.
- the weight TCM 706, the activation TCM 708, and/or the output TCM 710 may be combined.
- the memory bus 714 may couple the global memory 704 to the weight TCM 706, the activation TCM 708, and the output TCM 710.
- the PE bus 716 may couple the DCIM PEs 702, the NPU PEs 703, and the digital post-processing logic 713 together. In this manner, the DCIM PEs 702 and the NPU PEs 703 may share the memory resources (e.g., the weight TCM, the activation TCM, and the output TCM).
- weights may be loaded from the global memory 704 to the weight TCM 706. Then, the weights may be loaded from the weight TCM 706 to the PE weight arrays (e.g., in the CIM cells of the DCIM PEs 702 and/or in weight registers of the NPU PEs 703).
- Activation inputs may be loaded from the global memory 704 to the activation TCM 708 via the memory bus 714. Then, the activation inputs may be loaded from the activation TCM 708 to the PE bus 716 (or at least a portion of the PE bus operating as an activation bus).
- the DCIM PEs 702 and the NPU PEs 703 may perform computations (e.g., MAC operations) over multiple computation cycles to generate final accumulation results.
- the final accumulation results may be processed by the digital post-processing logic 713, and the processed results may be written to the output TCM 710, as controlled by the bus arbitration logic 712. From the output TCM 710, the processed results may be loaded in the global memory 704 via the memory bus 714.
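- The dataflow sequence above can be summarized by the following Python sketch, in which the memories are plain data structures and the PE computation and digital post-processing are placeholders; it illustrates only the ordering of transfers, not the actual hardware behavior.

```python
def hybrid_dataflow_sequence(global_memory):
    """Illustrative sequence for the hybrid architecture 700 (FIG. 7A).
    Memories are modeled as dictionaries/lists; the compute and
    post-processing steps are placeholders standing in for the DCIM/NPU
    PEs and the digital post-processing logic."""
    # 1. Weights: global memory -> weight TCM -> PE weight arrays.
    weight_tcm = list(global_memory["weights"])
    pe_weight_arrays = list(weight_tcm)

    # 2. Activations: global memory -> activation TCM -> PE bus.
    activation_tcm = list(global_memory["activations"])
    pe_bus = list(activation_tcm)

    # 3. DCIM and NPU PEs compute over multiple cycles (placeholder MAC).
    accumulation = sum(w * a for w, a in zip(pe_weight_arrays, pe_bus))

    # 4. Digital post-processing (placeholder: identity), then output TCM.
    output_tcm = [accumulation]

    # 5. Output TCM -> global memory via the memory bus.
    global_memory["outputs"] = list(output_tcm)
    return global_memory

mem = {"weights": [1, 2, 3], "activations": [4, 5, 6]}
print(hybrid_dataflow_sequence(mem)["outputs"])   # [32]
```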
- FIG. 7B is a block diagram of another example hybrid architecture 750 with DCIM PEs 702 and NPU PEs 703 sharing resources and with first-in, first-out (FIFO) circuits for enabling data exchange, in accordance with certain aspects of the present disclosure.
- the DCIM PEs 702 may be implemented with any of various suitable DCIM circuits, such as the DCIM circuit 400 of FIG. 4.
- the NPU PEs 703 may be implemented with any of various suitable NPU circuits, such as the NPU PEs 606 described with respect to FIG. 6.
- the hybrid architecture 750 may also include a global memory 704, one or more TCMs 718 (for storing weights, activation inputs, and/or outputs), a weight buffer 720, bus arbitration logic 712, digital post-processing logic 713, a DCIM PE tile mapper 722, an NPU PE tile mapper 724, a bit-serial interleaver 726, a DCIM PE activation FIFO 728, an NPU PE activation FIFO 730, and a memory bus 714 (also referred to as a “TCM bus”).
- the memory bus 714 may couple the global memory 704 to the one or more TCMs 718 and to the bus arbitration logic 712.
- the memory bus 714 may also couple the one or more TCMs 718 to the input of the weight buffer 720 and to the bus arbitration logic 712.
- the memory bus 714 may also couple the output of the weight buffer 720 to the bus arbitration logic 712.
- the bus arbitration logic 712 may route weights to the DCIM PEs 702 and to the NPU PEs 703 and may route activation inputs to the different activation FIFOs (e.g., 728, 730).
- the bus arbitration logic 712 may also receive outputs (e.g., final accumulation results or partial sums) from the digital post-processing logic 713. In this manner, the DCIM PEs 702 and the NPU PEs 703 may share the memory resources (e.g., the one or more TCMs 718).
- the dataflow sequence for the hybrid architecture 750 may be similar to the dataflow sequence explained above for the hybrid architecture 700.
- the digital post-processing logic 713 may process the output data, with functions such as biasing, batch normalization, linear/non-linear thresholding, quantization, etc.
- Activation inputs are received from the bus arbitration logic 712, which may select either of the following, to prepare the activation inputs for either or both of the DCIM PEs 702 and the NPU PEs 703, and to write the activation inputs to the corresponding activation FIFO 728 or 730: (1) activation data directly read from the TCM(s) 718 (typically for the first layer of a network, where data may have originated from the actual video, audio, or other sensor input); or (2) activation data read from the digital post-processing logic 713 (typically for intermediate layers of the network, where accumulator outputs from the DCIM and/or NPU PEs are processed).
- Each of the activation FIFOs 728 or 730 may be implemented using an array of flops, dual-port SRAM memory, or a register file with one write port and one read port.
- the bus arbitration logic 712 may be implemented by a combinatorial circuit consisting of (de)multiplexers routing the input data to multiple destinations. The data destination may be controlled by a mapper module, which may make destination decisions based on the layer workload.
- the pseudo-weight-stationary dataflow of the DCIM PEs 702 generates partial-sum accumulator results. Partial sums are not final data, or in other words, are not yet ready to become an activation input for the next layer nor data to be stored in the output TCM as the final result of a layer.
- partial sums may be temporarily stored in the output TCM and read back to the DCIM PEs 702 for further processing in one or more cycles until the final output is ready. These partial sums may then be discarded when the final outputs are ready to be processed by the digital post-processing logic 713.
- Partial sum outputs may be routed through the bus arbitration logic 712, sent to the output TCM, read back again, and then sent back to the DCIM PEs 702 through the bus arbitration logic 712.
- the accumulator outputs from the DCIM PEs 702 may be sent to the digital post-processing logic 713 for processing and then to the bus arbitration logic 712.
- if the accumulator output is a partial sum, the accumulator output going to the digital post-processing logic 713 may be fed therethrough (e.g., without being processed by the digital post-processing logic). If the accumulator output is a final accumulator result, the accumulator output is processed by the digital post-processing logic 713.
- the digital post-processing logic sends this processed output to the bus arbitration logic 712, and then the bus arbitration logic 712 sends this data to either the output TCM or as the activation input of another network layer.
- the DCIM PEs 702 are pseudo-weight-stationary, need not have frequent weight writing, and may have no weight broadcast scheme. Because of this pseudo-weight-stationary architecture in the DCIM PEs 702 and the less-frequent weight writes, the amount of MAC stalls may be significantly reduced. In contrast, the NPU PEs 703 are output-stationary with frequent weight writing and a weight broadcast scheme allowing weight sharing across parallel inputs.
- the DCIM PEs 702 have fewer activation reads because of lower kernel cycles enabled by a large number of mapped kernels (e.g., 32 kernels running in parallel).
- the frequent weight access of the NPU PEs 703 may lead to a pairing of spatially mapped input features, as well as a pairing of spatially mapped kernels (e.g., 8 input features running in parallel, as well as 8 kernels running in parallel). Otherwise, there may be a significant weight-access penalty due to repeated weight reads if the number of cycles to complete the full set of input features increases.
- FIG. 8 is a table 800 comparing DCIM and NPU for different combinations of light versus heavy inputs, depths, and kernels, in accordance with certain aspects of the present disclosure.
- 010 in the table 800 represents light inputs, heavy depth, and light kernels
- 101 represents heavy inputs, light depths, and heavy kernels.
- DCIM is generally better than NPU in terms of energy efficiency.
- NPU is better than DCIM in depth-wise convolution, for example.
- FIG. 9 is a block diagram 900 of data exchanges between an example DCIM PE 902 and an example NPU PE 904 in a hybrid architecture, using the appropriate activation FIFO and the digital post-processing (DPP) logic 906, 908, in accordance with certain aspects of the present disclosure.
- the DCIM PE 902 may generate a first output batch (HWD outputs) after CWD x FDD x IBD cycles, where HWD is the number of output bytes generated by parallel hardware for the DCIM PE 902, CWD is the convolution window for the DCIM PE 902, FDD is the filter depth for the DCIM PE 902, and IBD is the input batch size for the DCIM PE 902.
- the hardware resources are a spatial mapping of parallel inputs, parallel kernels, filter depth, and a convolution window. Loops that do not fit the hardware resources may be run in sequential clock cycles.
- the NPU PE 904 may generate a first output batch (HWN outputs) after CWN x FDN cycles, where HWN is the number of output bytes generated by parallel hardware for the NPU PE 904, CWN is the convolution window for the NPU PE 904, and FDN is the filter depth for the NPU PE 904.
- CWN x FDN may be less than 8 x CWD x FDD since the DCIM PE (e.g., in the case of the circuit 400 of FIG. 4) has more than 32 activations per accumulation, while the NPU PE 904 has 4 activations per accumulation.
- the input batch size for the DCIM PE 902 is selected small enough (e.g., IBD ≤ 32) to reduce the output latency and therefore reduce the activation FIFO size, but large enough (e.g., IBD > 7) to amortize the weight loading time and to keep the weight re-use factor high.
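- The following Python sketch illustrates the first-output-batch latency expressions above with hypothetical parameter values; the CWD, FDD, IBD, CWN, and FDN numbers are chosen only to show the trade-off and are not taken from the disclosure.

```python
def first_output_batch_cycles(conv_window, filter_depth, input_batch=1):
    """Cycles until the first output batch: CW x FD x IB (IB = 1 for the NPU PE)."""
    return conv_window * filter_depth * input_batch

# Hypothetical parameter values chosen only to illustrate the trade-off.
CWD, FDD, IBD = 9, 4, 8      # DCIM PE: convolution window, filter depth, input batch
CWN, FDN = 9, 4              # NPU PE: convolution window, filter depth

dcim_cycles = first_output_batch_cycles(CWD, FDD, IBD)   # 288
npu_cycles = first_output_batch_cycles(CWN, FDN)         # 36
print(dcim_cycles, npu_cycles)
# A smaller IBD shortens the DCIM output latency (smaller activation FIFO),
# while a larger IBD amortizes the weight-loading time (higher weight re-use).
```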
- FIG. 10 illustrates an example output-stationary mapping scheme 1000 with dataflow timing for NPU PEs (e.g., NPU PEs 606), in accordance with certain aspects of the present disclosure.
- a depth-first implementation may enable the output-stationary mapping scheme.
- Partial sums (e.g., the partial sums labeled “PS1” and “PS2”) are generated and held in the accumulators during the depth cycles.
- Each depth cycle may involve obtaining a weight update, which may result in a low weight use factor and an energy penalty.
- the updated weight may be broadcast to each parallel input (e.g., Input1 and Input2).
- every depth cycle, one slice at depth Ni is read. For example, during the first depth cycle, a slice at depth N1 is read; during the second depth cycle, a slice at depth N2 is read; and so on for each depth Ni.
- the accumulation during the depth cycles may be performed by the accumulator(s) within each NPU PE to generate outputs (e.g., outputs labeled “OUT1” and “OUT2”) in the final accumulator (e.g., the accumulator register 616).
- the partial sums (e.g., PS1, PS2) need not be written to a memory or a register file, and the output traffic to the memory may be reduced.
- the final accumulator value (e.g., values labeled “OUTW1” and “OUTW2”) may be written to the memory (e.g., the output TCM), in some cases after being transferred to the digital post-processing logic, or may be directly ported to DCIM, as described above.
- the use of input parallelism in the NPU PE improves the weight use factor and improves performance (e.g., TOPS).
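- The following Python sketch models the output-stationary dataflow of FIG. 10: a weight is read and broadcast to the parallel inputs each depth cycle, partial sums stay in local accumulators, and only the final values are written out; the shapes and the single-weight-per-depth-cycle simplification are illustrative assumptions.

```python
import numpy as np

def output_stationary(input_slices, weight_slices):
    """Sketch of the FIG. 10 NPU dataflow.

    input_slices:  (num_depth_cycles, num_parallel_inputs) activations per depth cycle
    weight_slices: (num_depth_cycles,) weight read each depth cycle and broadcast
                   to every parallel input (e.g., Input1 and Input2)
    Partial sums stay in the local accumulators; only the final values
    (OUTW1, OUTW2, ...) would be written to memory.
    """
    num_depth_cycles, num_inputs = input_slices.shape
    accumulators = np.zeros(num_inputs)            # outputs remain stationary here
    for d in range(num_depth_cycles):
        w = weight_slices[d]                       # weight update each depth cycle
        accumulators += input_slices[d] * w        # PS1, PS2, ... accumulate locally
    return accumulators                            # OUTW1, OUTW2, ...

x = np.arange(6).reshape(3, 2)      # 3 depth cycles, 2 parallel inputs
w = np.array([1.0, 2.0, 3.0])
print(output_stationary(x, w))      # [16. 22.], i.e., w dotted with each input column
```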
- FIG. 11 illustrates an example pseudo-weight-stationary mapping scheme 1100 with dataflow timing for DCIM PEs (e.g., the DCIM PEs 702), in accordance with certain aspects of the present disclosure.
- This pseudo-weight-stationary mapping scheme is an input-batch implementation and is preferentially weight stationary.
- the pseudo-weight-stationary scheme processes a batch of input features for each of multiple depth-cycles. A smaller batch size allows the final output result to be generated more quickly, since the total number of cycles to finish running through the depth-cycles becomes much lower compared to a case in which all inputs are processed for each of the depth-cycles, which would significantly delay the output generation.
- the DCIM PE may be globally input stationary to keep the size of the output TCM storing the partial sums, as well as the output latency, reasonable. This may also reduce the depth of the NPU PE activation FIFO by limiting the number of final outputs generated from the DCIM PE.
- weights are re-used for the input batch. For example, the weights may be loaded in for a 4-cycle period at the beginning of a depth-cycle, as shown.
- the entirety of a feature map may not be completed in one depth-cycle, and multiple depth-cycles may be utilized to expand the input feature map.
- the number of input-cycles used to expand the input feature map in a depth-cycle may be 32, as shown.
- the number of depth-cycles used to load weights and the number of depth-cycles used to expand a feature map may be set in accordance with the number of clock cycles within the depth-cycle.
- “PSW” represents a partial sum output written to the output TCM, and “PSR” represents a partial sum output read back from the output TCM. The final outputs (e.g., OUTW1, OUTW2, ..., and OUTWN) are ready after the last depth-cycle and may then be processed by the digital post-processing logic.
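- The following Python sketch models the pseudo-weight-stationary dataflow of FIG. 11: weights are loaded once per depth-cycle and re-used across an input batch, while partial sums are written to (PSW) and read back from (PSR) a stand-in for the output TCM; the single-weight-per-depth-cycle simplification and the sizes are illustrative assumptions.

```python
import numpy as np

def pseudo_weight_stationary(inputs, weights):
    """Sketch of the FIG. 11 dataflow.

    inputs:  (num_depth_cycles, batch_size) activation slices
    weights: (num_depth_cycles,) weight values loaded once per depth-cycle
    A dict stands in for the output TCM holding the partial sums.
    """
    num_depth_cycles, batch_size = inputs.shape
    output_tcm = {b: 0 for b in range(batch_size)}        # partial-sum storage
    for d in range(num_depth_cycles):
        w = weights[d]                                     # weight load at the start of the depth-cycle
        for b in range(batch_size):                        # input-cycles within the batch
            partial = output_tcm[b]                        # PSR: read the partial sum back
            partial += inputs[d, b] * w                    # re-use the loaded weight
            output_tcm[b] = partial                        # PSW: write the partial sum
    # After the last depth-cycle, the values are final outputs (OUTW1..OUTWN)
    # and would be sent to the digital post-processing logic.
    return [int(output_tcm[b]) for b in range(batch_size)]

x = np.arange(12).reshape(3, 4)          # 3 depth-cycles, batch of 4 inputs
w = np.array([1, 2, 3])
print(pseudo_weight_stationary(x, w))    # [32, 38, 44, 50], equal to w @ x per input
```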
- Certain aspects of the present disclosure provide a hybrid neural network architecture and circuitry that combines DCIM and NPU technologies, allowing for the best (or at least better) energy and speed trade-offs when implementing a neural network.
- Such a hybrid architecture may enable workload execution that is optimized (or at least enhanced) for energy and/or performance (e.g., TOPS) through the use of either or both NPU and DCIM PEs.
- the DCIM PEs and the NPU PEs are able to use the shared memory resources (e.g., TCM(s)) for any or a combination of weight, activation, and output.
- Pseudo-weight-stationary DCIM PEs and output-stationary NPU PEs allow fast data porting (and reduced depth of FIFOs in such porting) from one compute unit (e.g., NPU/DCIM) to another compute unit (e.g., DCIM/NPU), allowing concurrent operation of DCIM and NPU PEs.
- the DCIM PEs and NPU PEs can pipeline the data traffic within the same layer and/or can cascade the data between consecutive layers.
- FIG. 12 is a flow diagram illustrating example operations 1200 for neural network processing, in accordance with certain aspects of the present disclosure.
- the operations 1200 may be performed by a hybrid neural network circuit, such as the hybrid architecture 700 or 750 described with respect to FIG. 7A or 7B, respectively.
- the operations 1200 may begin at block 1205 with the circuit processing data.
- the circuit includes a plurality of compute-in-memory (CIM) processing elements (PEs) (e.g., DCIM PEs 702), a plurality of neural processing unit (NPU) PEs (e.g., NPU PEs 703), and a bus (e.g., PE bus 716) coupled to the plurality of CIM PEs and to the plurality of NPU PEs.
- the operations 1200 may continue at block 1210 with the circuit transferring the processed data between at least one of the plurality of CIM PEs and at least one of the plurality of NPU PEs via the bus.
- the neural network circuit further includes at least one of a global memory (e.g., global memory 704) or a tightly coupled memory (TCM) (e.g., the weight TCM 706, the activation TCM 708, the output TCM 710, or the one or more TCMs 718).
- the transferring at block 1210 may involve transferring the processed data between the at least one of the plurality of CIM PEs and the at least one of the plurality of NPU PEs via the bus without writing the processed data to the at least one of the global memory or the TCM.
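Expressed as code, the two blocks of FIG. 12 reduce to the short sketch below; the PeBus class and its transfer method are illustrative assumptions used only to show that the processed data moves over the PE bus rather than through the global memory or a TCM.

```python
class NpuPe:
    """Minimal stand-in for an NPU PE that can receive data over the PE bus."""
    def __init__(self):
        self.inbox = None

    def receive(self, data):
        self.inbox = data

class PeBus:
    """Minimal stand-in for the PE bus coupling the CIM PEs and NPU PEs."""
    def transfer(self, data, dst):
        dst.receive(data)            # direct PE-to-PE move; no global-memory or TCM write

def operations_1200(cim_result, bus, npu_pe):
    processed = cim_result           # block 1205: data processed in the circuit (here, by a CIM PE)
    bus.transfer(processed, npu_pe)  # block 1210: transfer via the bus, bypassing shared memory
    return npu_pe.inbox

npu = NpuPe()
print(operations_1200([1.0, 2.0, 3.0], PeBus(), npu))
```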
- FIG. 13 illustrates an example electronic device 1300.
- the electronic device 1300 may be configured to perform the methods described herein, including the operations 1200 described with respect to FIG. 12.
- the electronic device 1300 includes a central processing unit (CPU) 1302, which in some aspects may be a multi-core CPU. Instructions executed at the CPU 1302 may be loaded, for example, from a program memory associated with the CPU 1302 or may be loaded from a memory 1324.
- the electronic device 1300 also includes additional processing blocks tailored to specific functions, such as a graphics processing unit (GPU) 1304, a digital signal processor (DSP) 1306, a hybrid neural network 1307 with neural processing unit (NPU) processing elements (PEs) 1308 and compute-in-memory (CIM) PEs 1309, a multimedia processing block 1310, and a wireless connectivity processing block 1312.
- the hybrid neural network 1307 is implemented in one or more of the CPU 1302, GPU 1304, and/or DSP 1306.
- the wireless connectivity processing block 1312 may include components, for example, for Third-Generation (3G) connectivity, Fourth-Generation (4G) connectivity (e.g., 4G LTE), Fifth-Generation connectivity (e.g., 5G or NR), Wi-Fi connectivity, Bluetooth connectivity, and/or other wireless data transmission standards.
- the wireless connectivity processing block 1312 is further connected to one or more antennas 1314 to facilitate wireless communication.
- the electronic device 1300 may also include one or more sensor processors 1316 associated with any manner of sensor, one or more image signal processors (ISPs) 1318 associated with any manner of image sensor, and/or a navigation processor 1320, which may include satellite-based positioning system components (e.g., Global Positioning System (GPS) or Global Navigation Satellite System (GLONASS)) as well as inertial positioning system components.
- the electronic device 1300 may also include one or more input and/or output devices 1322, such as screens, touch-sensitive surfaces (including touch-sensitive displays), physical buttons, speakers, microphones, and the like.
- one or more of the processors of the electronic device 1300 may be based on an Advanced RISC Machines (ARM) instruction set, where RISC stands for “reduced instruction set computing.”
- the electronic device 1300 also includes memory 1324, which is representative of one or more static and/or dynamic memories, such as a dynamic random access memory (DRAM), a flash-based static memory, and the like.
- memory 1324 includes computer-executable components, which may be executed by one or more of the aforementioned processors of the electronic device 1300, including the hybrid neural network 1307.
- the depicted components, and others not depicted, may be configured to perform various aspects of the methods described herein.
- in some aspects, such as where the electronic device 1300 is a server device, various aspects may be omitted from the example depicted in FIG. 13, such as one or more of the multimedia processing block 1310, wireless connectivity processing block 1312, antenna(s) 1314, sensor processors 1316, ISPs 1318, or navigation processor 1320.
- Clause 1 A neural network circuit comprising a plurality of compute-in-memory (CIM) processing elements (PEs), a plurality of neural processing unit (NPU) PEs, and a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs.
- Clause 2 The neural network circuit of Clause 1, further comprising one or more shared memory resources coupled to the plurality of CIM PEs and to the plurality of NPU PEs.
- Clause 3 The neural network circuit of Clause 2, wherein the one or more shared memory resources comprise a tightly coupled memory (TCM).
- Clause 4 The neural network circuit of Clause 3, wherein the TCM is configured to store at least one of activations, weights, or outputs.
- Clause 5 The neural network circuit of Clause 3 or 4, wherein at least one of the plurality of CIM PEs is configured to transfer data to at least one of the plurality of NPU PEs.
- Clause 6 The neural network circuit of Clause 5, wherein the at least one of the plurality of CIM PEs is configured to transfer the data to the at least one of the plurality of NPU PEs without the data being written to or being read from the TCM.
- Clause 7 The neural network circuit of any of Clauses 3-6, wherein at least one of the plurality of NPU PEs is configured to transfer data to at least one of the plurality of CIM PEs.
- Clause 8 The neural network circuit of Clause 7, wherein the at least one of the plurality of NPU PEs is configured to transfer the data to the at least one of the plurality of CIM PEs without the data being written to or being read from the TCM.
- Clause 9 The neural network circuit of any of Clauses 1-4, wherein at least one of the plurality of CIM PEs is configured to transfer data to at least one of the plurality of NPU PEs.
- Clause 10 The neural network circuit of Clause 9, further comprising a global memory, wherein the at least one of the plurality of CIM PEs is configured to transfer the data to the at least one of the plurality of NPU PEs without the data being written to or being read from the global memory.
- Clause 11 The neural network circuit of Clause 9 or 10, wherein the at least one of the plurality of CIM PEs is in a same neural network layer as the at least one of the plurality of NPU PEs.
- Clause 12 The neural network circuit of Clause 9 or 10, wherein the at least one of the plurality of CIM PEs is in a first neural network layer and wherein the at least one of the plurality of NPU PEs is in a second neural network layer, different from the first neural network layer.
- Clause 13 The neural network circuit of Clause 12, wherein the second neural network layer is adjacent to the first neural network layer.
- Clause 14 The neural network circuit of any of Clauses 1-6 and 9, wherein at least one of the plurality of NPU PEs is configured to transfer data to at least one of the plurality of CIM PEs.
- Clause 15 The neural network circuit of Clause 14, further comprising a global memory, wherein the at least one of the plurality of NPU PEs is configured to transfer the data to the at least one of the plurality of CIM PEs without the data being written to or being read from the global memory.
- Clause 16 The neural network circuit of Clause 14 or 15, wherein the at least one of the plurality of NPU PEs is in a same neural network layer as the at least one of the plurality of CIM PEs.
- Clause 17 The neural network circuit of Clause 14 or 15, wherein the at least one of the plurality of NPU PEs is in a first neural network layer and wherein the at least one of the plurality of CIM PEs is in a second neural network layer, different from the first neural network layer.
- Clause 18 The neural network circuit of Clause 17, wherein the second neural network layer is adjacent to the first neural network layer.
- Clause 19 The neural network circuit of any of the preceding Clauses, wherein the plurality of CIM PEs are configured as pseudo-weight-stationary PEs.
- Clause 20 The neural network circuit of any of the preceding Clauses, wherein the plurality of CIM PEs are configured as digital compute-in-memory (DCIM) PEs.
- Clause 21 The neural network circuit of any of the preceding Clauses, wherein the plurality of NPU PEs are configured as output-stationary PEs.
- Clause 22 The neural network circuit of any of the preceding Clauses, further comprising bus arbitration logic coupled between the bus and the plurality of CIM PEs and between the bus and the plurality of NPU PEs.
- Clause 23 The neural network circuit of Clause 22, further comprising a digital processing circuit coupled between the bus arbitration logic and the plurality of CIM PEs and between the bus arbitration logic and the plurality of NPU PEs.
- Clause 24 The neural network circuit of Clause 23, further comprising: a first first-in, first-out (FIFO) circuit coupled between the digital processing circuit and the plurality of CIM PEs; and a second FIFO circuit coupled between the digital processing circuit and the plurality of NPU PEs.
- Clause 25 The neural network circuit of Clause 22, further comprising: a first first-in, first-out (FIFO) circuit coupled between the bus arbitration logic and the plurality of CIM PEs; and a second FIFO circuit coupled between the bus arbitration logic and the plurality of NPU PEs.
- Clause 26 A method for neural network processing comprising: processing data in a neural network circuit comprising a plurality of compute-in-memory (CIM) processing elements (PEs), a plurality of neural processing unit (NPU) PEs, and a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs; and transferring the processed data between at least one of the plurality of CIM PEs and at least one of the plurality of NPU PEs via the bus.
- Clause 27 The method of Clause 26, wherein the neural network circuit further comprises at least one of a global memory or a tightly coupled memory (TCM) and wherein the transferring comprises transferring the processed data between the at least one of the plurality of CIM PEs and the at least one of the plurality of NPU PEs via the bus without writing the processed data to the at least one of the global memory or the TCM.
- Clause 28 The method of Clause 26 or 27, further comprising digitally post-processing the processed data in a digital processing circuit before transferring the processed data via the bus.
- Clause 29 The method of any of Clauses 26-28, wherein the plurality of CIM PEs are configured as pseudo-weight-stationary PEs and wherein the plurality of NPU PEs are configured as output-stationary PEs.
- Clause 30 A processing system comprising: a plurality of compute-in-memory (CIM) processing elements (PEs); a plurality of neural processing unit (NPU) PEs; a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs; a memory having computer-executable instructions stored thereon; and one or more processors configured to execute the computer-executable instructions to transfer processed data between at least one of the plurality of CIM PEs and at least one of the plurality of NPU PEs via the bus.
- an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein.
- the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.
- the word “exemplary” means “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.
- a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members.
- “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).
- “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining, and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, “determining” may include resolving, selecting, choosing, establishing, and the like.
- the methods disclosed herein comprise one or more steps or actions for achieving the methods.
- the method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
- the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
- the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions.
- the means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Image Analysis (AREA)
Abstract
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202280050002.4A CN117651953A (zh) | 2021-07-21 | 2022-07-21 | 具有神经处理单元和存储器内计算处理的混合机器学习架构 |
| KR1020247001639A KR20240035999A (ko) | 2021-07-21 | 2022-07-21 | 뉴럴 프로세싱 유닛 및 컴퓨트-인-메모리 프로세싱 엘리먼트들을 사용한 하이브리드 머신 러닝 아키텍처 |
| EP22755044.9A EP4374296A1 (fr) | 2021-07-21 | 2022-07-21 | Architecture d'apprentissage automatique hybride avec unité de traitement neuronal et éléments de traitement de calcul en mémoire |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202163224155P | 2021-07-21 | 2021-07-21 | |
| US63/224,155 | 2021-07-21 | ||
| US17/813,834 US20230025068A1 (en) | 2021-07-21 | 2022-07-20 | Hybrid machine learning architecture with neural processing unit and compute-in-memory processing elements |
| US17/813,834 | 2022-07-20 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023004374A1 (fr) | 2023-01-26 |
Family
ID=82932487
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2022/073979 Ceased WO2023004374A1 (fr) | 2021-07-21 | 2022-07-21 | Architecture d'apprentissage automatique hybride avec unité de traitement neuronal et éléments de traitement de calcul en mémoire |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023004374A1 (fr) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210089865A1 (en) * | 2019-09-19 | 2021-03-25 | Qualcomm Incorporated | Parallel processing of a convolutional layer of a neural network with compute-in-memory array |
- 2022
- 2022-07-21 WO PCT/US2022/073979 patent/WO2023004374A1/fr not_active Ceased
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210089865A1 (en) * | 2019-09-19 | 2021-03-25 | Qualcomm Incorporated | Parallel processing of a convolutional layer of a neural network with compute-in-memory array |
Non-Patent Citations (1)
| Title |
|---|
| KAINING ZHOU ET AL: "Domino: A Tailored Network-on-Chip Architecture to Enable Highly Localized Inter- and Intra-Memory DNN Computing", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 18 July 2021 (2021-07-18), XP091014302 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230025068A1 (en) | Hybrid machine learning architecture with neural processing unit and compute-in-memory processing elements | |
| US12340304B2 (en) | Partial sum management and reconfigurable systolic flow architectures for in-memory computation | |
| US12056459B2 (en) | Compute in memory architecture and dataflows for depth-wise separable convolution | |
| EP4416585B1 (fr) | Accumulateur pour architectures de calcul en mémoire numériques | |
| US20230031841A1 (en) | Folding column adder architecture for digital compute in memory | |
| CN117546178A (zh) | 支持逐深度卷积神经网络(cnn)的存储器内计算(cim)架构和数据流 | |
| US20230065725A1 (en) | Parallel depth-wise processing architectures for neural networks | |
| US12019905B2 (en) | Digital compute in memory | |
| US12423377B2 (en) | Computation in memory architecture for phased depth-wise convolutional | |
| US20240256827A1 (en) | Activation buffer architecture for data-reuse in a neural network accelerator | |
| WO2023004374A1 (fr) | Architecture d'apprentissage automatique hybride avec unité de traitement neuronal et éléments de traitement de calcul en mémoire | |
| WO2023015167A1 (fr) | Calcul numérique en mémoire | |
| US20240086153A1 (en) | Multi-bit accumulator and in-memory computing processor with same |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22755044; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 202347080768; Country of ref document: IN |
| | WWE | Wipo information: entry into national phase | Ref document number: 202280050002.4; Country of ref document: CN |
| | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112024000245; Country of ref document: BR |
| | WWE | Wipo information: entry into national phase | Ref document number: 2022755044; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2022755044; Country of ref document: EP; Effective date: 20240221 |
| | ENP | Entry into the national phase | Ref document number: 112024000245; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20240105 |