
CN114999544A - An in-memory computing circuit based on SRAM - Google Patents

An in-memory computing circuit based on SRAM

Info

Publication number
CN114999544A
CN114999544A
Authority
CN
China
Prior art keywords
input terminal
memory
nmos transistor
sram
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210585976.4A
Other languages
Chinese (zh)
Inventor
贺雅娟
张振伟
骆宏阳
王梓霖
张波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210585976.4A
Publication of CN114999544A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/41: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming static cells with positive feedback, i.e. cells not needing refreshing or charge regeneration, e.g. bistable multivibrator or Schmitt trigger
    • G11C11/413: Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing, timing or power reduction
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Static Random-Access Memory (AREA)

Abstract

The invention belongs to the technical field of integrated circuits and specifically relates to an SRAM-based in-memory computing circuit. The proposed circuit comprises a 6T SRAM cell array and multiplexed computing cells: weight data are written into the SRAM in advance, and multiply-accumulate operations between the stored weights and the input data are then performed through the multiplexed computing cells, realizing in-memory computing. Although the proposed in-memory computing unit increases the memory area, it allows computation to be completed inside the memory itself, significantly reducing the data movement and power consumption of applications such as convolutional neural networks.

Description

An in-memory computing circuit based on SRAM

Technical Field

The invention belongs to the technical field of integrated circuits and relates to an SRAM-based in-memory computing circuit.

Background

With the rapid development and large-scale deployment of artificial intelligence and Internet-of-Things technology, the amount of data that neural-network algorithms must process keeps growing. In a computer based on the traditional von Neumann architecture, the memory and the central processing unit (CPU) are separate: during computation, data are first transferred from memory to the CPU over the bus, and after the computation completes the results are written back to memory over the bus. The von Neumann architecture holds the advantage when memory and bus speeds match the processor. Over recent decades, however, memory has evolved into a pyramid hierarchy: the farther a memory level sits from the CPU, the larger its capacity and the lower its price, but also the slower its speed, while CPU speed has grown rapidly, following Moore's law. As a result, memory access speed now lags far behind the CPU's data-processing speed, forming the "memory wall" that limits computer performance and is widely regarded as the biggest bottleneck of the von Neumann architecture. The effect is especially pronounced in applications such as deep neural networks that require frequent data reads, writes, and transfers. Deep neural networks, currently the most widely used algorithms in AI image recognition, must fetch, multiply, and accumulate large volumes of image data. The limited transfer speed and bandwidth of the system bus and memory therefore constrain both the inference speed and the range of applications of deep-neural-network algorithms.

To break through this bottleneck of the von Neumann architecture, the computing-in-memory (CIM) architecture has been proposed. As the name implies, in-memory computing performs certain computations, such as multiplication, XNOR, and accumulation, inside the memory itself. Large amounts of stored data then no longer need to be transferred over the system bus to the CPU for processing; instead, results are computed inside the memory by a few computing units and control circuits, and only the results are sent out over the system bus. This greatly reduces bus traffic between the CPU and the memory, resolving not only the mismatch between CPU and memory speeds and the power consumed by data access, but also the bandwidth and latency limits of the system bus.

Summary of the Invention

The purpose of the invention is to propose an in-memory computing unit that performs multiply-accumulate operations on data by introducing a computing-cell circuit inside the SRAM memory. This avoids the drawback of the von Neumann architecture, in which data must be fetched from memory over the bus and delivered to the CPU before any computation can take place, and thereby effectively reduces both the amount of data movement and the circuit power it consumes.

To achieve the above purpose, the invention discloses an SRAM-based in-memory computing unit comprising a 6T SRAM cell array and multiplexed computing cells (Multiplex Computing Cells). Before computation, the weight data to be used are written into the 6T SRAM cells through the SRAM peripheral circuitry for storage. When operating in in-memory computing mode, the input data are first pre-encoded while the stored weight data are read out, and the multiplexed computing cells perform the multiply-accumulate operations that realize the in-memory computing function. "Multiplexed computing cell" means that every 16 6T SRAM cells time-share a single computing cell, with only one 6T cell gated per computation. Compared with a scheme that requires one computing cell per 6T cell, this time-division multiplexing strategy greatly reduces the area of the in-memory computing circuit and makes it better suited to edge-computing devices.
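The 16:1 time-sharing scheme can be expressed as a rough behavioral sketch (an illustrative Python model, not the patent's circuit; the name `active_cells` is ours): one computing cell per column replaces one per 6T cell, and a full pass over the array takes 16 clock cycles.

```python
ROWS, COLS = 16, 64        # 6T SRAM cell array: 16 rows x 64 columns
CELLS_PER_UNIT = 16        # 6T cells that time-share one computing cell

# One computing cell per column instead of one per 6T cell:
shared_units = ROWS * COLS // CELLS_PER_UNIT   # 64 computing cells
per_cell_units = ROWS * COLS                   # 1024 would be needed otherwise

def active_cells(cycle):
    """(row, col) pairs gated in a given clock cycle: word line
    `cycle % ROWS` selects exactly one cell in every column, so
    16 cycles cover the whole array."""
    return [(cycle % ROWS, col) for col in range(COLS)]

print(shared_units, per_cell_units)   # → 64 1024
print(len(active_cells(0)))           # → 64
```

The 16x reduction in computing cells is the area saving the text attributes to the multiplexing strategy.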

The technical solution of the invention is specifically as follows:

An SRAM-based in-memory computing circuit: the in-memory computing unit comprises a 6T SRAM cell array of 16 rows and 64 columns and 64 columns of multiplexed computing cells. The 6T SRAM cells of each column are connected to the column read/write signal lines BLP and BLN, and the 6T SRAM cells of each row are connected to the read/write signal line WL. After encoding, the input data are applied to a first, a second, and a third input terminal connected to the multiplexed computing cell, which obtains the weight data through the signal lines BLP and BLN. Specifically, each multiplexed computing cell comprises a first, a second, a third, and a fourth NMOS transistor: the gate of the first NMOS transistor is connected to signal line BLP and its drain to the source of the second NMOS transistor; the gate of the second NMOS transistor is connected to the first input terminal and its drain to the second input terminal; the gate of the third NMOS transistor is connected to signal line BLN and its drain to the source of the fourth NMOS transistor; the gate of the fourth NMOS transistor is connected to the first input terminal and its drain to the third input terminal; and the sources of the first and third NMOS transistors are joined to form the output terminal.

After encoding, the input data are converted into voltage signals on the first, second, and third input terminals, as follows: when the input datum is 1, the first input terminal is at the supply voltage, the second input terminal is at the supply voltage, and the third input terminal is at 0 V; when the input datum is 0, the voltage signals on all three input terminals are 0; when the input datum is -1, the first input terminal is at the supply voltage, the second input terminal is at 0 V, and the third input terminal is at the supply voltage.
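The encoding table above can be stated compactly as a Python sketch (signal names INV, INS_P, INS_N follow the figures; VDD is normalized to 1):

```python
VDD = 1  # supply voltage, normalized to 1

def encode_input(x):
    """Map a ternary input datum to the voltages on the first, second,
    and third input terminals (INV, INS_P, INS_N in the figures)."""
    table = {
         1: (VDD, VDD, 0),    # +1: supply, supply, 0 V
         0: (0,   0,   0),    #  0: all three terminals at 0 V
        -1: (VDD, 0,   VDD),  # -1: supply, 0 V, supply
    }
    return table[x]

print(encode_input(-1))  # → (1, 0, 1)
```

Note that the first terminal (the gate signal of the second and fourth NMOS transistors) is 0 only for input 0, which is what disconnects the output for a zero input.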

The beneficial effects of the invention are as follows: the invention proposes an SRAM-based in-memory computing unit that realizes in-memory computing through multiplexed computing cells, so that data can be processed entirely inside the memory, significantly reducing the data movement and power consumption of applications such as convolutional neural networks.

Brief Description of the Drawings

Fig. 1 illustrates the overall circuit architecture of the invention.

Fig. 2 is a schematic diagram of the input encoding of the invention.

Fig. 3 is a schematic diagram of the computation process of the invention.

Fig. 4 is a schematic diagram of the computation results of the invention.

Detailed Description

To make the above purpose, features, and advantages of the invention clearer, specific embodiments of the invention are described below in detail and in full with reference to the accompanying drawings.

As shown in Fig. 1, the invention mainly consists of an SRAM array and multiplexed computing cells. Before computation, the weight data to be used are written into the 6T SRAM cells through the SRAM peripheral circuitry for storage. When the circuit operates in in-memory computing mode, the input encoding circuit encodes the input data as shown in Fig. 2; that is, different inputs map to different voltages on INV, INS_N, and INS_P. Every 16 SRAM cells time-share a single computing cell, so in each clock cycle only one of every 16 6T SRAM cells is activated for computation. Compared with a scheme that requires one computing cell per 6T cell, this time-division multiplexing strategy greatly reduces the area of the in-memory computing circuit and makes it better suited to edge-computing devices.

The computation proceeds as shown in Fig. 3. First, one 6T cell is gated and its weight data appear on BLP and BLN: if the stored weight is +1, BLP is at VDD and BLN at 0; if the stored weight is -1, BLP is at 0 and BLN at VDD. The encoded input voltages appear on INV, INS_P, and INS_N: if the input is +1, INV and INS_P are at VDD and INS_N at 0. In that case the path through N1 and N2 conducts while the path through N3 and N4 is off, so OUTPUT is charged. A result of +1 thus appears as a charging current on OUTPUT; a result of 0 means neither path conducts, so OUTPUT neither charges nor discharges; a result of -1 appears as a discharging current on OUTPUT. The results of the multiplexed computing cells in the same row act together on a single output bit line, as shown in Fig. 4.
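The charge/discharge behavior described above can be checked with a small behavioral model (an illustrative Python sketch, not a circuit simulation, idealizing each NMOS transistor as a switch that conducts when its gate is at VDD). It confirms that the cell acts as a ternary multiplier whose products accumulate as net current on the shared output line:

```python
VDD = 1  # supply voltage, normalized to 1

def encode_weight(w):
    """Weight on the bit-line pair: +1 -> (BLP=VDD, BLN=0); -1 -> (BLP=0, BLN=VDD)."""
    return (VDD, 0) if w == 1 else (0, VDD)

def encode_input(x):
    """Ternary input -> (INV, INS_P, INS_N) terminal voltages."""
    return {1: (VDD, VDD, 0), 0: (0, 0, 0), -1: (VDD, 0, VDD)}[x]

def cell_current(w, x):
    """Net current on OUTPUT: +1 charging, -1 discharging, 0 floating."""
    blp, bln = encode_weight(w)
    inv, ins_p, ins_n = encode_input(x)
    current = 0
    if blp == VDD and inv == VDD:   # N1 and N2 conduct: OUTPUT sees INS_P
        current += 1 if ins_p == VDD else -1
    if bln == VDD and inv == VDD:   # N3 and N4 conduct: OUTPUT sees INS_N
        current += 1 if ins_n == VDD else -1
    return current

# The cell behaves as a ternary multiplier: current == w * x
for w in (1, -1):
    for x in (1, 0, -1):
        assert cell_current(w, x) == w * x

# Accumulation: currents from the active cells sum on one output bit line
weights, inputs = [1, -1, 1, -1], [1, 1, -1, 0]
mac = sum(cell_current(w, x) for w, x in zip(weights, inputs))
print(mac)  # → -1
```

In the +1/+1 case the model reproduces the text exactly: BLP=VDD and INV=VDD turn on N1 and N2, INS_P=VDD sources a charging current, and BLN=0 keeps the N3/N4 path off.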

In summary, the invention proposes an SRAM-based in-memory computing unit comprising a 6T SRAM cell array and multiplexed computing cells. The in-memory computing circuit writes the weight data into the SRAM in advance and then computes on them together with the input data through the multiplexed computing cells, realizing in-memory computing. Although the proposed in-memory computing unit increases the memory area, it allows data to be processed entirely inside the memory, significantly reducing the data movement and power consumption of applications such as convolutional neural networks.

Claims (2)

1. An SRAM-based in-memory computing circuit, characterized in that the in-memory computing unit comprises a 6T SRAM cell array of 16 rows and 64 columns and 64 columns of multiplexed computing cells; the 6T SRAM cells of each column are connected to the column read/write signal lines BLP and BLN, and the 6T SRAM cells of each row are connected to the read/write signal line WL; after encoding, the input data are applied to a first input terminal, a second input terminal, and a third input terminal connected to the multiplexed computing cell; the multiplexed computing cell obtains the weight data through the signal lines BLP and BLN, the weight data having been written into the 6T SRAM cells via BLP and BLN before computation; specifically, each multiplexed computing cell comprises a first NMOS transistor, a second NMOS transistor, a third NMOS transistor, and a fourth NMOS transistor, wherein the gate of the first NMOS transistor is connected to the signal line BLP and its drain to the source of the second NMOS transistor; the gate of the second NMOS transistor is connected to the first input terminal and its drain to the second input terminal; the gate of the third NMOS transistor is connected to the signal line BLN and its drain to the source of the fourth NMOS transistor; the gate of the fourth NMOS transistor is connected to the first input terminal and its drain to the third input terminal; and the sources of the first and third NMOS transistors are joined to form the output terminal. After encoding, the input data are converted into voltage signals on the first, second, and third input terminals, specifically: when the input datum is 1, the first input terminal is at the supply voltage, the second input terminal is at the supply voltage, and the third input terminal is at 0 V; when the input datum is 0, the voltage signals on all three input terminals are 0; when the input datum is -1, the first input terminal is at the supply voltage, the second input terminal is at 0 V, and the third input terminal is at the supply voltage.

2. The SRAM-based in-memory computing circuit of claim 1, characterized in that the 16 6T SRAM cells of each column time-share the same computing cell, with only one 6T cell gated during each computation.
CN202210585976.4A 2022-05-27 2022-05-27 An in-memory computing circuit based on SRAM Pending CN114999544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585976.4A CN114999544A (en) 2022-05-27 2022-05-27 An in-memory computing circuit based on SRAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210585976.4A CN114999544A (en) 2022-05-27 2022-05-27 An in-memory computing circuit based on SRAM

Publications (1)

Publication Number Publication Date
CN114999544A true CN114999544A (en) 2022-09-02

Family

ID=83029234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585976.4A Pending CN114999544A (en) 2022-05-27 2022-05-27 An in-memory computing circuit based on SRAM

Country Status (1)

Country Link
CN (1) CN114999544A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115586885A (en) * 2022-09-30 2023-01-10 晶铁半导体技术(广东)有限公司 Memory computing unit and acceleration method
CN116721682A (en) * 2023-06-13 2023-09-08 上海交通大学 Cross-level reconfigurable SRAM in-memory computing unit and method for edge intelligence
TWI822313B (en) * 2022-09-07 2023-11-11 財團法人工業技術研究院 Memory cell

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111816234A * 2020-07-30 2020-10-23 中科院微电子研究所南京智能技术研究院 A voltage-accumulation in-memory computing circuit based on XNOR of SRAM bit lines
CN112151092A (en) * 2020-11-26 2020-12-29 中科院微电子研究所南京智能技术研究院 A storage unit, storage array and in-memory computing device based on 4-tube storage
CN112711394A (en) * 2021-03-26 2021-04-27 南京后摩智能科技有限公司 Circuit based on digital domain memory computing
US11176991B1 (en) * 2020-10-30 2021-11-16 Qualcomm Incorporated Compute-in-memory (CIM) employing low-power CIM circuits employing static random access memory (SRAM) bit cells, particularly for multiply-and-accumluate (MAC) operations
CN114327368A (en) * 2022-03-09 2022-04-12 中科南京智能技术研究院 Storage circuit for XNOR operation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111816234A * 2020-07-30 2020-10-23 中科院微电子研究所南京智能技术研究院 A voltage-accumulation in-memory computing circuit based on XNOR of SRAM bit lines
US11176991B1 (en) * 2020-10-30 2021-11-16 Qualcomm Incorporated Compute-in-memory (CIM) employing low-power CIM circuits employing static random access memory (SRAM) bit cells, particularly for multiply-and-accumluate (MAC) operations
CN116368461A (en) * 2020-10-30 2023-06-30 高通股份有限公司 In-memory Computation (CIM) using low-power CIM circuits employing Static Random Access Memory (SRAM) bit cells, particularly for Multiplication and Accumulation (MAC) operations
CN112151092A (en) * 2020-11-26 2020-12-29 中科院微电子研究所南京智能技术研究院 A storage unit, storage array and in-memory computing device based on 4-tube storage
CN112711394A (en) * 2021-03-26 2021-04-27 南京后摩智能科技有限公司 Circuit based on digital domain memory computing
CN114327368A (en) * 2022-03-09 2022-04-12 中科南京智能技术研究院 Storage circuit for XNOR operation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHENWEI ZHANG: "An Energy-Efficient Computing-in-Memory circuit based on Multiplexing-Computing-Cell for BNN acceleration", ELECTRONICS LETTERS, 21 April 2023 (2023-04-21), pages 1 - 3 *
司鑫: "Design of Embedded Memory and In-Memory Computing Circuits for Artificial Intelligence", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2021, 15 March 2021 (2021-03-15), pages 73-81 *
张振伟: "Design of In-Memory Computing Circuits for Neural Networks", China Master's Theses Full-text Database, Information Science and Technology, no. 2024, 15 April 2024 (2024-04-15), pages 19-29 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI822313B (en) * 2022-09-07 2023-11-11 財團法人工業技術研究院 Memory cell
US12406721B2 (en) 2022-09-07 2025-09-02 Industrial Technology Research Institute Memory cell
CN115586885A (en) * 2022-09-30 2023-01-10 晶铁半导体技术(广东)有限公司 Memory computing unit and acceleration method
CN116721682A (en) * 2023-06-13 2023-09-08 上海交通大学 Cross-level reconfigurable SRAM in-memory computing unit and method for edge intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination