
CN116579395B - In-situ compensation method for hardware precision problem of Skip Structure deep neural network - Google Patents

In-situ compensation method for hardware precision problem of Skip Structure deep neural network

Info

Publication number
CN116579395B
CN116579395B (Application No. CN202310438357.7A)
Authority
CN
China
Prior art keywords
compensation
array
input voltage
output
line number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310438357.7A
Other languages
Chinese (zh)
Other versions
CN116579395A (en)
Inventor
吴祖恒
汪泽清
冯哲
朱云来
徐祖雨
代月花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University
Priority to CN202310438357.7A
Publication of CN116579395A
Application granted
Publication of CN116579395B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/54Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using elements simulating biological cells, e.g. neuron
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Neurology (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Testing Of Individual Semiconductor Devices (AREA)

Abstract


This invention discloses an in-situ compensation method for the hardware accuracy problem of Skip Structure deep neural networks, belonging to the field of memristor technology. The in-situ compensation method includes S1, determining the correlation coefficient of the compensation equation; S2, testing the relationship between the correlation coefficient and the output error; S3, performing data fitting based on the test data obtained in S2, and establishing the compensation equation; S4, when the array outputs the results, using the compensation equation established in S3 to perform in-situ compensation on the output results. The compensation method of this invention can solve the accuracy degradation problem caused by the layer-by-layer accumulation of errors when implementing Skip Structure deep neural networks with memristor arrays, thus having a significant optimization effect on the hardware implementation of skip structure neural networks.

Description

In-situ compensation method for hardware precision problem of Skip Structure deep neural network
Technical Field
The invention belongs to the technical field of memristors, and particularly relates to an in-situ compensation method for the problem of hardware precision of a Skip Structure deep neural network.
Background
Deep neural networks have developed rapidly and are applied in fields such as image recognition, speech recognition, and natural language processing. As the number of network layers increases, a deep neural network acquires more information and extracts richer features. However, an overly deep network suffers from exploding and vanishing gradients, causing errors during training and testing. In response to this problem, the residual network (ResNet) proposed in 2015 transfers shallow features directly to deeper layers through a skip connection mechanism, solving the vanishing- and exploding-gradient problems, while DenseNet (Densely Connected Convolutional Networks) proposes an even denser connection mechanism than ResNet, further improving ResNet's feature-reuse capability: each layer takes additional inputs from all preceding layers and passes its own feature map to all subsequent layers. DenseNet achieves better performance than ResNet with fewer parameters and lower computational cost. Following ResNet and DenseNet, further optimized networks based on the skip-connection idea, such as ResNeXt, DenseNet-B, and DenseNet-C, have been proposed in succession; we refer to them collectively as Skip Structure deep neural networks.
While Skip Structure deep neural networks have great advantages, implementing them in hardware with memristor arrays requires taking hardware non-idealities into account. Owing to immature memristor manufacturing technology, memristors are often disturbed by non-ideal factors including write nonlinearity, device-to-device variation, and cycle-to-cycle variation. When a Skip Structure deep neural network is implemented, the error accumulation caused by these non-ideal factors becomes extremely serious and finally leads to a large drop in prediction accuracy. How to solve this layer-by-layer error accumulation is the primary problem to consider when realizing a Skip Structure deep neural network with a memristor array.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide an in-situ compensation method for the hardware precision problem of the Skip Structure deep neural network. The method solves the accuracy degradation caused by layer-by-layer error superposition when a memristor array is used to realize a Skip Structure deep neural network, and thus optimizes the hardware realization of skip-structure neural networks.
The aim of the invention can be achieved by the following technical scheme:
An in-situ compensation method for the hardware precision problem of a Skip Structure deep neural network comprises the following steps:
S1, determining the correlation coefficient of a compensation equation;
S2, testing the relation between the correlation coefficient and the output error;
S3, performing data fitting according to the test data obtained in S2, and establishing the compensation equation;
S4, when the array outputs a result, performing in-situ compensation on the output result by using the compensation equation established in S3.
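A minimal end-to-end sketch of steps S1 through S4 may help fix ideas. Everything below is illustrative and not taken from the patent: the crossbar current model, the voltage-proportional error model `k_err * v`, and all names are assumptions, and only a linear fit is shown.

```python
import numpy as np

# Illustrative crossbar model (an assumption, not from the patent): ideal
# column current for n_row active rows is I_ideal = n_row * V / R; the
# "actual" output is perturbed by a hypothetical voltage-proportional error.
def ideal_current(v, n_row, r):
    return n_row * v / r

def actual_current(v, n_row, r, k_err=0.02):
    return ideal_current(v, n_row, r) - k_err * v  # assumed error model

# S1-S2: sweep the input voltage and record the output error.
vs = np.linspace(0.03, 0.3, 10)
errs = ideal_current(vs, 8, 5e3) - actual_current(vs, 8, 5e3)

# S3: fit a (here linear) compensation equation I_error ~ k*V + d.
k, d = np.polyfit(vs, errs, 1)

# S4: in-situ compensation of a fresh output at V = 0.15 V.
v_new = 0.15
i_comp = actual_current(v_new, 8, 5e3) + (k * v_new + d)
```

Because the assumed error here is exactly linear in V, the fitted compensation recovers the ideal output almost perfectly; with real array data the residual error after compensation would merely shrink, not vanish.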
Further, in S1, based on the experimental observation that changes in the input voltage V, the array starting row number n_row, and the memristor resistance R each affect the array output differently, the compensation equation is preliminarily determined to be related to V, n_row, and R.
Further, in S2, the influence of the input voltage V, the array starting row number n_row, and the memristor resistance R on the array output error is determined by designing and executing a test scheme, with the following specific steps:
S21, setting a fixed step size and, while the input voltage is increased, testing how the output error varies with the array starting row number n_row and the memristor resistance R;
S22, setting a fixed step size and, while the memristor resistance R is increased, testing how the output error varies with the array starting row number n_row and the input voltage V.
Further, in S21, the test steps are specifically:
S211, setting the input voltage V_input to 0.03 V, the fixed input-voltage increment step V_step to 0.03 V, and the fixed memristor-resistance increment step R_step to 5 kΩ;
S212, setting the memristor resistance R to 5 kΩ;
S213, setting the array starting row number n_row to 1;
S214, measuring the actual output current I_actual and the ideal output current I_ideal of the array under the current parameter settings, calculating the error I_error = I_ideal - I_actual between them, and increasing the array starting row number by 1;
S215, judging whether the current array starting row number exceeds the maximum starting row number 32; if not, returning to S214, otherwise proceeding to S216;
S216, increasing the memristor resistance once by the fixed step R_step;
S217, judging whether the current memristor resistance is greater than the resistance upper limit of 50 kΩ; if not, returning to S213, otherwise proceeding to S218;
S218, increasing the input voltage once by the fixed step V_step;
S219, judging whether the current input voltage is greater than the input-voltage upper limit of 0.3 V; if not, returning to S212, otherwise ending the test.
Further, the specific test steps in S22 are:
S221, setting the memristor resistance R to 5 kΩ, the fixed input-voltage increment step V_step to 0.001 V, and the fixed memristor-resistance increment step R_step to 5 kΩ;
S222, setting the input voltage V_input to 0.001 V;
S223, setting the array starting row number n_row to 1;
S224, measuring the actual output current I_actual and the ideal output current I_ideal of the array under the current parameter settings, calculating the error I_error = I_ideal - I_actual between them, and increasing the array starting row number by 1;
S225, judging whether the current array starting row number exceeds the maximum starting row number 32; if not, returning to S224, otherwise proceeding to S226;
S226, increasing the input voltage once by the fixed step V_step;
S227, judging whether the current input voltage is greater than the input-voltage upper limit of 0.2 V; if not, returning to S223, otherwise proceeding to S228;
S228, increasing the memristor resistance once by the fixed step R_step;
S229, judging whether the current memristor resistance is greater than the resistance upper limit of 50 kΩ; if not, returning to S222, otherwise ending the test.
Further, in S3, the fitting mode best suited to each of the two data sets measured in S2 is selected; linear fitting and nonlinear fitting are performed on the two data sets respectively, yielding the relations between the output error and the input voltage V, the array starting row number n_row, and the memristor resistance R corresponding to the linear and nonlinear fits.
Further, the compensation equation comprises four in-situ compensation schemes, which are respectively:
the first compensation scheme is to apply only linear compensation to the output;
the second compensation scheme is to apply only nonlinear compensation to the output;
the third compensation scheme is to compensate the output with a value that is the average of the linear and nonlinear compensation values;
the fourth compensation scheme is to compensate the output with a value obtained by solving the ordinary differential equation formed from the linear and nonlinear compensation relations.
Further, the compensation equation is established by the following steps:
S31, determining the relation among the ideal array output I_ideal, the actual array output I_actual, and the compensation value M_compensation as I_ideal = I_actual + M_compensation;
S32, according to the data measured in S21, performing nonlinear fitting with the array output error I_error as the dependent variable and the array starting row number n_row as the independent variable, obtaining, for different values of the memristor resistance R and the input voltage V, the nonlinear relation I_error = a_(V,R)·n_row² + b_(V,R)·n_row + c_(V,R), where a_(V,R), b_(V,R), and c_(V,R) are three coefficients determined by V and R; letting the compensation value M1 = a_(V,R)·n_row² + b_(V,R)·n_row + c_(V,R) completes the establishment of the nonlinear compensation equation;
S33, according to the data measured in S22, performing linear fitting with the array output error I_error as the dependent variable and the input voltage V as the independent variable, obtaining, for different values of the memristor resistance R and the array starting row number n_row, the linear relation I_error = k_(R,n_row)·V + d_(R,n_row); letting the compensation value M2 = k_(R,n_row)·V + d_(R,n_row) completes the establishment of the linear compensation equation;
S34, taking the average of the linear and nonlinear compensation values, i.e., letting the compensation value M3 = (M1 + M2)/2, which completes the establishment of the third compensation equation;
S35, solving the ordinary differential equation formed from the linear and nonlinear compensation relations and letting the compensation value M4 equal its solution, which completes the establishment of the fourth compensation equation;
the final compensation equation is I_ideal = I_actual + M_compensation, with M_compensation taken as M1, M2, M3, or M4 according to the selected scheme.
Further, in S4, the four in-situ compensation schemes determined in S3 are used to compensate the output current, and the array output errors before and after compensation are compared; the invention thereby mitigates the large output error of skip-structure deep neural networks in hardware realization.
An in-situ compensation system for Skip Structure deep neural network hardware accuracy problems, comprising:
The coefficient confirming unit is used for determining the correlation coefficient of the compensation equation;
The test unit is used for testing the relation between the correlation coefficient and the output error;
the equation building unit is used for performing data fitting according to the test data obtained by the test unit and building a compensation equation;
And the compensation unit is used for carrying out in-situ compensation on the output result by applying the compensation equation established in the equation establishment unit when the array outputs the result.
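The four units above can be sketched as a small class; all names and the one-argument error model passed into it are illustrative assumptions, not part of the patent.

```python
# Minimal class sketch of the four units of the in-situ compensation system.
class InSituCompensationSystem:
    def __init__(self, fit_fn):
        self.fit_fn = fit_fn           # fitting routine used by the equation building unit
        self.model = None              # compensation equation, once built

    def determine_coefficients(self):  # coefficient confirming unit (S1)
        return ("V", "n_row", "R")

    def run_tests(self, sweep):        # test unit (S2): collect error records
        return list(sweep)

    def build_equation(self, records): # equation building unit (S3)
        self.model = self.fit_fn(records)

    def compensate(self, i_actual, *x):  # compensation unit (S4)
        return i_actual + self.model(*x)
```

A caller would wire a fitting routine in, run the sweeps, build the equation once, and then call `compensate` on every raw array output.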
The invention has the beneficial effects that:
The compensation method can solve the accuracy degradation caused by layer-by-layer error superposition when a memristor array is used to realize a Skip Structure deep neural network, and thus greatly optimizes the hardware realization of skip-structure neural networks.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to those skilled in the art that other drawings can be obtained according to these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an in situ compensation method of the present invention;
FIG. 2 is a flow chart of the data testing scheme of S21 of the present invention;
FIG. 3 is a flow chart of the data testing scheme of S22 of the present invention;
fig. 4 is a schematic diagram of the process of compensation equation establishment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, an in-situ compensation method for the hardware precision problem of the Skip Structure deep neural network comprises the following steps:
S1, determining a correlation coefficient of a compensation equation;
In this step, according to phenomena observed in experiments, the invention preliminarily determines that the compensation equation should be related to the input voltage V, the array starting row number n_row, and the memristor resistance R; that is, as V, n_row, and R change, each has a different effect on the array output.
S2, testing the relation between the correlation coefficient and the output error;
The influence of the input voltage V, the array starting row number n_row, and the memristor resistance R on the array output error is determined by designing and executing a test scheme.
S21, with a fixed step size of 0.03 V, the relation of the output error to the array starting row number n_row and the memristor resistance R is tested as the input voltage is increased from 0.03 V to 0.3 V; the test flow is shown in FIG. 2.
The specific test steps are as follows:
S211, the input voltage V_input is set to 0.03 V, the fixed input-voltage increment step V_step is set to 0.03 V, and the fixed memristor-resistance increment step R_step is set to 5 kΩ.
S212, the memristor resistance R is set to 5 kΩ.
S213, the array starting row number n_row is set to 1.
S214, the actual output current I_actual and the ideal output current I_ideal of the array are measured under the current parameter settings, the error I_error = I_ideal - I_actual between them is calculated, and the array starting row number is increased by 1.
S215, it is judged whether the current array starting row number exceeds the maximum starting row number 32. If not, the process returns to S214; otherwise it proceeds to S216.
S216, the memristor resistance is increased once by the fixed step R_step.
S217, it is judged whether the current memristor resistance is greater than the resistance upper limit of 50 kΩ. If not, the process returns to S213; otherwise it proceeds to S218.
S218, the input voltage is increased once by the fixed step V_step.
S219, it is judged whether the current input voltage is greater than the input-voltage upper limit of 0.3 V. If not, the process returns to S212; otherwise the test ends.
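The S211 through S219 procedure is three nested sweeps: voltage outermost, resistance in the middle, starting row number innermost. The sketch below uses the bounds stated above; `measure` is a hypothetical stand-in for reading the physical array and returns `(I_actual, I_ideal)`.

```python
# Sketch of the S211-S219 sweep over (V, R, n_row), recording the output error.
def sweep_s21(measure, v_step=0.03, v_max=0.3, r_step=5e3, r_max=50e3, n_max=32):
    records = []
    v = v_step                        # S211: V_input starts at 0.03 V
    while v <= v_max + 1e-12:         # S219: stop past the 0.3 V upper limit
        r = r_step                    # S212: R starts at 5 kOhm
        while r <= r_max + 1e-6:      # S217: stop past the 50 kOhm upper limit
            for n_row in range(1, n_max + 1):              # S213, S215
                i_actual, i_ideal = measure(v, n_row, r)
                records.append((v, r, n_row, i_ideal - i_actual))  # S214
            r += r_step               # S216
        v += v_step                   # S218
    return records
```

With the stated bounds this collects 10 voltages × 10 resistances × 32 row counts = 3200 error samples.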
S22, with a fixed step size of 5 kΩ, the relation of the output error to the array starting row number n_row and the input voltage V is tested as the memristor resistance R is increased from 5 kΩ to 50 kΩ; the test flow is shown in FIG. 3.
The specific test steps are as follows:
S221, the memristor resistance R is set to 5 kΩ, the fixed input-voltage increment step V_step is set to 0.001 V, and the fixed memristor-resistance increment step R_step is set to 5 kΩ.
S222, the input voltage V_input is set to 0.001 V.
S223, the array starting row number n_row is set to 1.
S224, the actual output current I_actual and the ideal output current I_ideal of the array are measured under the current parameter settings, the error I_error = I_ideal - I_actual between them is calculated, and the array starting row number is increased by 1.
S225, it is judged whether the current array starting row number exceeds the maximum starting row number 32. If not, the process returns to S224; otherwise it proceeds to S226.
S226, the input voltage is increased once by the fixed step V_step.
S227, it is judged whether the current input voltage is greater than the input-voltage upper limit of 0.2 V. If not, the process returns to S223; otherwise it proceeds to S228.
S228, the memristor resistance is increased once by the fixed step R_step.
S229, it is judged whether the current memristor resistance is greater than the resistance upper limit of 50 kΩ. If not, the process returns to S222; otherwise the test ends.
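The S221 through S229 sweep reverses the nesting of S21: resistance is the outer loop and voltage the middle loop. As before, `measure` is a hypothetical stand-in for the physical array readout returning `(I_actual, I_ideal)`.

```python
# Sketch of the S221-S229 sweep over (R, V, n_row), recording the output error.
def sweep_s22(measure, v_step=0.001, v_max=0.2, r_step=5e3, r_max=50e3, n_max=32):
    records = []
    r = r_step                        # S221: R starts at 5 kOhm
    while r <= r_max + 1e-6:          # S229: stop past the 50 kOhm upper limit
        v = v_step                    # S222: V_input starts at 0.001 V
        while v <= v_max + 1e-9:      # S227: stop past the 0.2 V upper limit
            for n_row in range(1, n_max + 1):              # S223, S225
                i_actual, i_ideal = measure(v, n_row, r)
                records.append((v, r, n_row, i_ideal - i_actual))  # S224
            v += v_step               # S226
        r += r_step                   # S228
    return records
```

The finer 0.001 V step here gives 200 voltage points per resistance, which is what makes this data set suitable for the linear fit in V described in S33.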
S3, performing data fitting according to the test data obtained in S2, and establishing the compensation equation;
as shown in FIG. 4, the fitting mode best suited to each of the two data sets measured in S2 is selected; linear fitting and nonlinear fitting are performed on the two data sets respectively, yielding the relations between the output error and the input voltage V, the array starting row number n_row, and the memristor resistance R corresponding to the linear and nonlinear fits;
the compensation equation comprises four in-situ compensation schemes, which are respectively as follows:
the first compensation scheme is to linearly compensate the output only;
the second compensation scheme is to carry out nonlinear compensation on the output only;
the third compensation scheme is to compensate the output, wherein the compensation value is the average value of linear compensation and nonlinear compensation;
the fourth compensation scheme is to compensate the output, wherein the compensation value is the result of solving the ordinary differential equation formed from the linear and nonlinear compensation relations.
The compensation equation is established by the following steps:
S31, determining the relation among the ideal array output I_ideal, the actual array output I_actual, and the compensation value M_compensation as I_ideal = I_actual + M_compensation;
S32, according to the data measured in S21, performing nonlinear fitting with the array output error I_error as the dependent variable and the array starting row number n_row as the independent variable, obtaining, for different values of the memristor resistance R and the input voltage V, the nonlinear relation I_error = a_(V,R)·n_row² + b_(V,R)·n_row + c_(V,R), where a_(V,R), b_(V,R), and c_(V,R) are three coefficients determined by V and R; letting the compensation value M1 = a_(V,R)·n_row² + b_(V,R)·n_row + c_(V,R) completes the establishment of the nonlinear compensation equation;
S33, according to the data measured in S22, performing linear fitting with the array output error I_error as the dependent variable and the input voltage V as the independent variable, obtaining, for different values of the memristor resistance R and the array starting row number n_row, the linear relation I_error = k_(R,n_row)·V + d_(R,n_row); letting the compensation value M2 = k_(R,n_row)·V + d_(R,n_row) completes the establishment of the linear compensation equation;
S34, taking the average of the linear and nonlinear compensation values, i.e., letting the compensation value M3 = (M1 + M2)/2, which completes the establishment of the third compensation equation;
S35, solving the ordinary differential equation formed from the linear and nonlinear compensation relations and letting the compensation value M4 equal its solution, which completes the establishment of the fourth compensation equation.
The final compensation equation is I_ideal = I_actual + M_compensation, where M_compensation is taken as M1, M2, M3, or M4 according to the selected scheme.
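The fitting in S32 through S34 can be sketched as follows. The quadratic form of the nonlinear fit is an assumption (the patent text only fixes three coefficients a, b, c determined by V and R), and each fit is shown at a single fixed operating point rather than over the full (V, R, n_row) grid.

```python
import numpy as np

# Sketch of S32-S34 at one operating point. records_s21 holds (n_row, I_error)
# pairs at fixed (V, R); records_s22 holds (V, I_error) pairs at fixed (R, n_row).
def build_compensation(records_s21, records_s22):
    n = np.array([p[0] for p in records_s21], float)
    e1 = np.array([p[1] for p in records_s21], float)
    a, b, c = np.polyfit(n, e1, 2)              # S32: nonlinear (assumed quadratic) fit

    v = np.array([p[0] for p in records_s22], float)
    e2 = np.array([p[1] for p in records_s22], float)
    k, d = np.polyfit(v, e2, 1)                 # S33: linear fit

    m1 = lambda n_row, v_in: a * n_row**2 + b * n_row + c       # nonlinear compensation
    m2 = lambda n_row, v_in: k * v_in + d                       # linear compensation
    m3 = lambda n_row, v_in: 0.5 * (m1(n_row, v_in) + m2(n_row, v_in))  # averaged scheme
    return m1, m2, m3
```

The fourth (ODE-based) scheme is omitted here because the patent does not give the equation explicitly.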
S4, when the array outputs a result, performing in-situ compensation on the output result using the four compensation equations established in S3.
The four in-situ compensation schemes determined in S3 are used to compensate the output current, and the array output errors before and after compensation are compared; the invention thereby mitigates the large output error of skip-structure deep neural networks in hardware realization.
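The before/after comparison in S4 reduces to adding the compensation value to the raw output and measuring the residual error; all numerical values below are illustrative assumptions.

```python
# Sketch of S4: apply a compensation value to the raw array output and
# compare the output error before and after compensation.
def apply_compensation(i_actual, m_compensation):
    return i_actual + m_compensation

i_ideal = 1.00e-3        # hypothetical ideal column current (A)
i_actual = 0.95e-3       # hypothetical measured current with non-idealities
m = 0.05e-3              # hypothetical compensation value from S3
err_before = abs(i_ideal - i_actual)
err_after = abs(i_ideal - apply_compensation(i_actual, m))
```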
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims.

Claims (7)

1. An in-situ compensation method for the hardware precision problem of a Skip Structure deep neural network, characterized by comprising the following steps:
S1, determining the correlation coefficient of a compensation equation;
S2, testing the relation between the correlation coefficient and the output error;
S3, performing data fitting according to the test data obtained in S2, and establishing the compensation equation;
S4, when the array outputs a result, performing in-situ compensation on the output result by using the compensation equation established in S3;
in S1, based on the experimental observation that changes in the input voltage V, the array starting row number n_row, and the memristor resistance R each affect the array output differently, the compensation equation is preliminarily determined to be related to V, n_row, and R;
in S2, the influence of the input voltage V, the array starting row number n_row, and the memristor resistance R on the array output error is determined by designing and executing a test scheme, with the following specific steps:
S21, setting a fixed step size and, while the input voltage is increased, testing how the output error varies with the array starting row number n_row and the memristor resistance R;
S22, setting a fixed step size and, while the memristor resistance R is increased, testing how the output error varies with the array starting row number n_row and the input voltage V;
in S3, the fitting mode best suited to each of the two data sets measured in S2 is selected; linear fitting and nonlinear fitting are performed on the two data sets respectively, yielding the relations between the output error and the input voltage V, the array starting row number n_row, and the memristor resistance R corresponding to the linear and nonlinear fits.
2. The in-situ compensation method for the hardware precision problem of a Skip Structure deep neural network according to claim 1, characterized in that in S21 the test steps are specifically:
S211, setting the input voltage V_input to 0.03 V, the fixed input-voltage increment step V_step to 0.03 V, and the fixed memristor-resistance increment step R_step to 5 kΩ;
S212, setting the memristor resistance R to 5 kΩ;
S213, setting the array starting row number n_row to 1;
S214, measuring the actual output current I_actual and the ideal output current I_ideal of the array under the current parameter settings, calculating the error I_error = I_ideal - I_actual between them, and increasing the array starting row number by 1;
S215, judging whether the current array starting row number exceeds the maximum starting row number 32; if not, returning to S214, otherwise proceeding to S216;
S216, increasing the memristor resistance once by the fixed step R_step;
S217, judging whether the current memristor resistance is greater than the resistance upper limit of 50 kΩ; if not, returning to S213, otherwise proceeding to S218;
S218, increasing the input voltage once by the fixed step V_step;
S219, judging whether the current input voltage is greater than the input-voltage upper limit of 0.3 V; if not, returning to S212, otherwise ending the test.
3. The in-situ compensation method for the hardware precision problem of a Skip Structure deep neural network according to claim 1, characterized in that the specific test steps in S22 are:
S221, setting the memristor resistance R to 5 kΩ, the fixed input-voltage increment step V_step to 0.001 V, and the fixed memristor-resistance increment step R_step to 5 kΩ;
S222, setting the input voltage V_input to 0.001 V;
S223, setting the array starting row number n_row to 1;
S224, measuring the actual output current I_actual and the ideal output current I_ideal of the array under the current parameter settings, calculating the error I_error = I_ideal - I_actual between them, and increasing the array starting row number by 1;
S225, judging whether the current array starting row number exceeds the maximum starting row number 32; if not, returning to S224, otherwise proceeding to S226;
S226, increasing the input voltage once by the fixed step V_step;
S227, judging whether the current input voltage is greater than the input-voltage upper limit of 0.2 V; if not, returning to S223, otherwise proceeding to S228;
S228, increasing the memristor resistance once by the fixed step R_step;
S229, judging whether the current memristor resistance is greater than the resistance upper limit of 50 kΩ; if not, returning to S222, otherwise ending the test.
4. The in-situ compensation method for the hardware precision problem of the Skip Structure deep neural network according to claim 1, wherein the compensation equation comprises four in-situ compensation schemes, respectively:
in the first compensation scheme, only linear compensation is applied to the output;
in the second compensation scheme, only nonlinear compensation is applied to the output;
in the third compensation scheme, the output is compensated with a compensation value equal to the average of the linear and nonlinear compensation values;
in the fourth compensation scheme, the output is compensated with a compensation value obtained by solving the ordinary differential equation relating the linear and nonlinear compensations.
5. The in-situ compensation method for the Skip Structure deep neural network hardware accuracy problem according to claim 4, wherein the steps of establishing the compensation equation are:
S31, determining the relation among the ideal array output I_ideal, the actual array output I_actual and the compensation value M_compensation to be I_ideal = I_actual + M_compensation;
S32, according to the data measured in S21, performing a nonlinear fit with the array output error I_error as the dependent variable and the array starting line number n_row as the independent variable, to obtain the nonlinear relation between I_error and n_row when the memristor resistance R and the input voltage V take different values, I_error = a_(V,R)·n_row² + b_(V,R)·n_row + c_(V,R), and letting the compensation value M_nonlinear = a_(V,R)·n_row² + b_(V,R)·n_row + c_(V,R), where a_(V,R), b_(V,R) and c_(V,R) are three coefficients determined by the array input voltage V and the memristor resistance R; this completes the establishment of the nonlinear compensation equation;
S33, according to the data measured in S22, performing a linear fit with the array output error I_error as the dependent variable and the input voltage V as the independent variable, to obtain the linear relation between I_error and V when the memristor resistance R and the array starting line number n_row take different values, I_error = k_(n_row,R)·V + d_(n_row,R), and letting the compensation value M_linear = k_(n_row,R)·V + d_(n_row,R), where k_(n_row,R) and d_(n_row,R) are two coefficients determined by the array starting line number n_row and the memristor resistance R; this completes the establishment of the linear compensation equation;
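The two fits of S32 and S33 can be sketched with numpy.polyfit, assuming (as the three coefficients a, b, c suggest) that the nonlinear relation in n_row is quadratic. The error arrays below are synthetic stand-ins for the measured sweeps, and the linear-fit coefficient names k and d are illustrative.

```python
import numpy as np

# S32: for one fixed (V, R) pair, fit I_error against the starting line
# number n_row with a quadratic, recovering a_(V,R), b_(V,R), c_(V,R).
n_row = np.arange(1, 33)
i_error_rows = 2e-9 * n_row**2 + 1e-9 * n_row + 5e-10   # synthetic data
a, b, c = np.polyfit(n_row, i_error_rows, deg=2)

def m_nonlinear(n):
    # Nonlinear compensation value for this (V, R) pair.
    return a * n**2 + b * n + c

# S33: for one fixed (n_row, R) pair, fit I_error against the input
# voltage V with a straight line (slope k, intercept d).
v = np.arange(0.001, 0.2, 0.001)
i_error_volts = 3e-6 * v + 1e-9                          # synthetic data
k, d = np.polyfit(v, i_error_volts, deg=1)

def m_linear(volt):
    # Linear compensation value for this (n_row, R) pair.
    return k * volt + d
```

In practice one such fit would be stored per (V, R) and per (n_row, R) combination measured in the sweeps, giving coefficient lookup tables for the compensation equation.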
S34, taking the average of the linear and nonlinear compensation values, i.e., letting the compensation value M_compensation = (M_linear + M_nonlinear)/2, completing the establishment of the third compensation equation;
S35, solving the ordinary differential equation relating the linear and nonlinear compensations, letting the compensation value M_compensation = p·M_nonlinear + q·M_linear, where p and q are two coefficients obtained by solving the ordinary differential equation; this completes the establishment of the fourth compensation equation;
The final compensation equation is M_compensation = x·M_nonlinear + y·M_linear, where x and y are respectively the coefficients of the nonlinear compensation M_nonlinear and the linear compensation M_linear, whose specific values are determined by the chosen compensation scheme.
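Since all four schemes reduce to the weighted form M = x·M_nonlinear + y·M_linear, their application can be sketched in a few lines. The default values of p and q (scheme 4) are placeholders for the coefficients that would come from solving the ordinary differential equation.

```python
# Sketch of applying the four in-situ compensation schemes. Each scheme
# selects coefficients (x, y) for M = x * M_nonlinear + y * M_linear.
# p and q are assumed to have been obtained beforehand (scheme 4).

def compensation(m_nonlinear, m_linear, scheme, p=0.5, q=0.5):
    if scheme == 1:                 # linear compensation only
        x, y = 0.0, 1.0
    elif scheme == 2:               # nonlinear compensation only
        x, y = 1.0, 0.0
    elif scheme == 3:               # average of the two values
        x, y = 0.5, 0.5
    elif scheme == 4:               # ODE-derived coefficients
        x, y = p, q
    else:
        raise ValueError("scheme must be 1-4")
    return x * m_nonlinear + y * m_linear

def compensated_output(i_actual, m_nonlinear, m_linear, scheme):
    # I_ideal is approximated in situ as I_actual + M_compensation.
    return i_actual + compensation(m_nonlinear, m_linear, scheme)
```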
6. The in-situ compensation method for the hardware accuracy problem of the Skip Structure deep neural network according to claim 5, wherein in S4 the output current is compensated using the four in-situ compensation schemes determined in S3, and the optimization effect on the large output error of the Skip Structure deep neural network in hardware implementation is evaluated by comparing the array output errors before and after compensation.
7. An in-situ compensation system for the Skip Structure deep neural network hardware accuracy problem, performing the in-situ compensation method of any of claims 1-6, comprising:
a coefficient confirmation unit, used for determining the correlation coefficients of the compensation equation;
a test unit, used for testing the relation between the correlation coefficients and the output error;
an equation establishment unit, used for performing data fitting on the test data obtained by the test unit and establishing the compensation equation;
and a compensation unit, used for applying the compensation equation established by the equation establishment unit to perform in-situ compensation on the output result when the array outputs a result.
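A minimal sketch of how the four units of claim 7 could be wired together as a pipeline; all class names, the synthetic test data, and the quadratic fit are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

class CoefficientConfirmationUnit:
    """Decides which parameters the compensation equation depends on."""
    def confirm(self):
        return ("V_input", "R", "n_row")

class TestUnit:
    """Runs an S21/S22-style sweep and records (n_row, I_error) data."""
    def run(self):
        n_row = np.arange(1, 33)
        i_error = 2e-9 * n_row**2 + 1e-9 * n_row + 5e-10  # synthetic
        return n_row, i_error

class EquationBuilder:
    """Fits the compensation equation to the test data (quadratic here)."""
    def fit(self, n_row, i_error):
        return np.polyfit(n_row, i_error, deg=2)

class CompensationUnit:
    """Adds the fitted compensation value to each raw array output."""
    def __init__(self, coeffs):
        self.coeffs = coeffs
    def compensate(self, i_actual, n_row):
        return i_actual + np.polyval(self.coeffs, n_row)

# Pipeline: confirm -> test -> fit -> compensate in situ.
params = CoefficientConfirmationUnit().confirm()
rows, errs = TestUnit().run()
comp = CompensationUnit(EquationBuilder().fit(rows, errs))
```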
CN202310438357.7A 2023-04-23 2023-04-23 In-situ compensation method for hardware precision problem of Skip Structure deep neural network Active CN116579395B (en)

Publications (2)

Publication Number Publication Date
CN116579395A 2023-08-11
CN116579395B 2025-10-31




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant