
CN119292059A - A method for predicting and controlling the temperature of a heat shrink tube winding motor - Google Patents

A method for predicting and controlling the temperature of a heat shrink tube winding motor

Info

Publication number
CN119292059A
CN119292059A
Authority
CN
China
Prior art keywords
winding motor
temperature
lstm
data
heat shrink
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411401858.9A
Other languages
Chinese (zh)
Inventor
唐胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ganzhou Yanghui Intelligent Technology Co ltd
Original Assignee
Ganzhou Yanghui Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ganzhou Yanghui Intelligent Technology Co ltd filed Critical Ganzhou Yanghui Intelligent Technology Co ltd
Priority to CN202411401858.9A priority Critical patent/CN119292059A/en
Publication of CN119292059A publication Critical patent/CN119292059A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric
    • G05B13/04: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators
    • G05B13/042: Adaptive control systems, electric, involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Electric Motors In General (AREA)

Abstract


The invention discloses a temperature prediction and control method for a heat shrink tube winding motor. The method collects a historical data set and sequentially performs data processing, data screening and division into a training set and a test set. A Circle-mapping population initialization, a nonlinear inertia weight and asymmetric learning factors are introduced to obtain a more uniform and diversified initial population and to better balance the global and local search capabilities of the algorithm; the resulting IPSO algorithm is used to optimize the parameters of a CNN-LSTM model and improve its generalization ability. Thresholds for the various evaluation indexes of the IPSO-CNN-LSTM model are set to dynamically tune model accuracy, yielding the optimal IPSO-CNN-LSTM prediction model and a residual alarm threshold. Relevant parameters of the heat shrink tube winding motor are then collected in real time, and a sliding window algorithm that counts the frequency of temperature anomalies is combined with the optimal IPSO-CNN-LSTM prediction model to realize trend-graded early warning of the winding motor temperature. The method effectively addresses the instability, large fluctuation and nonlinearity of the relevant influencing parameters in existing winding motor temperature monitoring and achieves high-precision advance prediction and trend-graded early warning.

Description

Temperature prediction control method for heat shrinkage tube winding motor
Technical Field
The invention belongs to the technical field of temperature monitoring of heat-shrinkable tube production equipment, and particularly relates to a temperature prediction control method of a heat-shrinkable tube winding motor.
Background
At present, heat shrink tubes offer insulation, sealing and protection, good resistance to chemical corrosion, abrasion and weather aging, and also serve as identification tools for marking and classification. Their application range is wide, including but not limited to the wire and cable industry, the electronics and electrical industry, automobile manufacturing and maintenance, the chemical industry and the petroleum industry. Improving and optimizing the heat shrink tube production process is therefore particularly urgent.
The winding process is one of the core processes in heat shrink tube production. It not only determines product quality but also directly affects the service performance and service life of the product. Adjusting the winding speed helps improve winding quality and efficiency, and a well-controlled winding mechanism reduces folding, curling and wrinkling, further improving the quality of the finished roll. The winding motor is the core component of the heat shrink tube winding machine, and its normal operation is critical to the winding process. Because of long operating hours and high rotation speeds, the winding motor is prone to temperature faults, whose causes may include internal motor faults, power supply problems and triggering of the overheat protection device. The key to resolving such faults is to diagnose their cause and carry out the corresponding maintenance or replacement. At present, temperature faults of the winding motor are handled on the production floor mainly on the basis of the experience of maintenance operators, which introduces a certain lag; there is no method for predicting the health state of the winding motor in advance and monitoring it in real time, so abnormalities of the winding motor cannot be handled promptly, the overall winding quality of the heat shrink tube is reduced, and the product yield drops further. How to realize advance temperature prediction and real-time monitoring of the heat shrink tube winding motor is therefore a problem that must be solved in order to improve the quality of the heat shrink tube winding process.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a temperature prediction control method for a heat shrinkage tube winding motor.
In order to achieve the technical purpose and the technical effect, the invention is realized by the following technical scheme:
a temperature prediction control method for a heat shrinkage tube winding motor comprises the following steps:
Step S1, collecting a historical data set;
Step S2, data processing;
Step S3, data screening;
Step S4, dividing a training data set and a test data set;
Step S5, constructing an improved CNN-LSTM network model structure;
Step S6, initializing particle swarm parameters and velocity and position information;
Step S7, optimizing the population;
Step S8, optimizing the inertia weight and learning factors;
Step S9, updating the velocity, position, inertia weight and learning factors of the particles;
Step S10, inputting the parameters into the CNN-LSTM neural network for training;
Step S11, defining a fitness function;
Step S12, calculating the particle fitness and updating the optimal positions;
Step S13, model calculation and judgment: if the maximum number of iterations is reached, continuing downwards, otherwise repeating step S9;
Step S14, obtaining the optimal parameters;
Step S15, inputting the optimal parameters into the IPSO-CNN-LSTM prediction model;
Step S16, training the IPSO-CNN-LSTM model;
Step S17, inputting the historical test set to evaluate the model, and setting thresholds for the various model evaluation indexes;
Step S18, calculating the various evaluation indexes of the IPSO-CNN-LSTM model and comparing them with the corresponding thresholds;
Step S19, outputting the optimal IPSO-CNN-LSTM prediction model, and processing the residual alarm threshold δ with the Z-Score inverse normalization method;
Step S20, using a sliding window algorithm to count the frequency of temperature anomalies of the heat shrink tube winding motor and issue graded early warnings.
Further, step S20 includes the following specific steps:
Step S20.1, real-time data acquisition;
Step S20.2, initializing the current temperature anomaly frequency in the sliding window: f_T = 0;
Step S20.3, inputting the temperature-related parameters of the heat shrink tube winding motor collected in real time over a period of time into the optimal IPSO-CNN-LSTM prediction model;
Step S20.4, outputting the corresponding predicted temperature set of the heat shrink tube winding motor and, at the same time, calculating the mean absolute error between the predicted and actual values of the winding motor temperature;
Step S20.5, applying the Z-Score inverse normalization method to the predicted temperature set and the mean absolute error respectively, and outputting the corresponding temperature set T_ΔT and the updated mean absolute error MAE_update;
Step S20.6, if T_i > maxT, where T_i ∈ T_ΔT and maxT is the upper temperature limit (the limit for a large threshold exceedance, generally set by combining the temperature rating provided for the heat shrink tube winding motor with the evaluation indexes of the optimal IPSO-CNN-LSTM prediction model), jumping to step S20.10; otherwise continuing with step S20.7;
Step S20.7, setting a sliding window of length L = 10 and sliding it step by step along the prediction time;
Step S20.8, if MAE_update is less than the residual alarm threshold δ, incrementing the current temperature anomaly frequency in the sliding window by 1, i.e. f_T = f_T + 1; otherwise keeping it at its current value, i.e. f_T = f_T + 0;
Step S20.9, if the current temperature anomaly frequency in the sliding window satisfies f_T ≥ 6, executing step S20.10; otherwise jumping to step S20.11;
Step S20.10, the heat shrink tube winding machine monitoring and early-warning system automatically displays a first-level warning, i.e. a red warning, indicating that the winding motor is in an abnormal state, and sends an instruction to the winding machine maintenance operator to stop the machine in time and handle the abnormality;
Step S20.11, if the current temperature anomaly frequency in the sliding window satisfies 2 ≤ f_T ≤ 5, executing step S20.12; otherwise jumping to step S20.13;
Step S20.12, the heat shrink tube winding machine monitoring and early-warning system automatically displays a second-level warning, i.e. a yellow warning, indicating that an abnormal phenomenon has appeared in the winding motor, and sends an instruction to the maintenance operator to observe the real-time operating state of the equipment in the work area, evaluate it, and then decide whether corresponding handling measures need to be taken;
Step S20.13, the heat shrink tube winding machine monitoring and early-warning system automatically displays green, indicating that the equipment temperature is normal, and jumps back to step S20.2 for the next round of sliding-window counting and graded early warning of winding motor temperature anomalies.
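For illustration only, a minimal Python sketch of the sliding-window grading logic of step S20 is given below. The function name, the way the per-window error MAE_update is recomputed, and the inputs pred_temps, actual_temps, delta and max_t are assumptions made for this sketch and are not part of the original disclosure.

```python
# Sketch of the sliding-window temperature-anomaly grading of step S20.
# pred_temps/actual_temps are inverse-normalized temperature sequences;
# delta is the residual alarm threshold of formula (33), max_t the upper limit maxT.
def grade_series(pred_temps, actual_temps, delta, max_t, window_len=10):
    # Step S20.6: any predicted temperature above the upper limit triggers red.
    if any(t > max_t for t in pred_temps):
        return "red"
    # Steps S20.7-S20.8: slide a window of length L along the prediction time
    # and count how often the windowed mean absolute error crosses the threshold.
    f_t = 0
    for start in range(len(pred_temps) - window_len + 1):
        p = pred_temps[start:start + window_len]
        a = actual_temps[start:start + window_len]
        mae_update = sum(abs(x - y) for x, y in zip(p, a)) / window_len
        if mae_update < delta:        # comparison direction as stated in step S20.8
            f_t += 1
    # Steps S20.9-S20.13: map the anomaly frequency onto a warning level.
    if f_t >= 6:
        return "red"                  # first-level warning: stop and inspect
    if 2 <= f_t <= 5:
        return "yellow"               # second-level warning: observe on site
    return "green"                    # normal operation
```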
Further, in step S1, during collection of the historical data set the data interval lies within a period of normal operation of the heat shrink tube winding machine, which guarantees the validity of the data. The data sources are the winding motor rotation speed, winding motor active power, winding motor voltage, winding motor temperature, winding machine tension, ambient temperature and ambient humidity at the same moment; the parameter values at the same sampling moment form one group, the sampling interval is set to 30 s, and the winding machine is sampled continuously for 2 months to ensure a sufficient number of samples.
Further, step S2 improves data quality, removes redundant information, handles missing values and normalizes the data, and specifically includes the following steps:
Step S2.1, data cleaning. A moving window is combined with per-parameter anomaly thresholds, with the window size set to n. First, all collected parameters are traversed and duplicate values found in the data set are deleted. Second, the data of n consecutive time steps are compared with the corresponding anomaly thresholds; if the number of anomalies is smaller than m, the anomalous values are interpolated with Newton polynomial interpolation, otherwise all feature data of the time steps containing the anomalous parameters are deleted. The Newton interpolation formula is:
N_n(x) = f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + ... + f[x_0, x_1, ..., x_n](x - x_0)(x - x_1)...(x - x_{n-1})  (1)
f[x_0, x_1, ..., x_n] = (f[x_1, ..., x_n] - f[x_0, ..., x_{n-1}]) / (x_n - x_0)  (2)
In formulas (1)-(2), (x_i, f(x_i)) denotes the normal sample data in the window n, f[x_0, x_1, ..., x_n] denotes the n-th order divided difference (whose value does not depend on the order of the subscripts), x denotes the time coordinate of the anomalous data to be interpolated, and N_n(x) is the interpolated value computed by Newton interpolation. In actual interpolation only the result N_{n-1}(x) of the previous interpolation needs to be stored, and the next value is obtained by adding a new term to N_{n-1}(x), which saves computation and storage; a minimal implementation is sketched below.
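As an illustration of formula (1), a minimal NumPy sketch of Newton polynomial interpolation is shown below; the helper names and the example call are assumptions for this sketch, not part of the original disclosure.

```python
import numpy as np

def divided_differences(x, y):
    """Divided-difference coefficients f[x0], f[x0,x1], ..., f[x0,...,xn] (formula (2))."""
    x, coef = np.asarray(x, float), np.asarray(y, float).copy()
    n = len(x)
    for j in range(1, n):
        # Overwrite entries j..n-1 with the next-order difference quotients.
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_interpolate(x, y, x_query):
    """Evaluate the Newton interpolating polynomial N_n(x_query) of formula (1)."""
    coef, x = divided_differences(x, y), np.asarray(x, float)
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], x[-2::-1]):   # Horner-style accumulation
        result = result * (x_query - xk) + c
    return result

# Example: fill one anomalous 30 s sample from its neighbours in the moving window.
t_known = [0, 1, 2, 4, 5]                        # time indices of normal samples
temp_known = [40.1, 40.3, 40.6, 41.0, 41.2]      # winding motor temperatures
print(newton_interpolate(t_known, temp_known, 3))  # interpolated value at t = 3
```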
Step S2.2, Z-Score data normalization. Normalizing the data speeds up convergence of gradient descent during model training and improves model accuracy by mapping all parameters onto a common scale with zero mean and unit standard deviation. The calculation formula is:
y' = (y - ȳ) / σ  (3)
In formula (3), y is a data value, i.e. the original value before normalization, ȳ is the mean of the data set, σ is the standard deviation, and y' is the normalized value. A small sketch of the forward and inverse transforms is given below.
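A short NumPy sketch of the forward and inverse Z-Score transforms (formula (3) and the inverse normalization used in steps S19-S20) follows; the function names are illustrative assumptions.

```python
import numpy as np

def z_normalize(y):
    """Formula (3): y' = (y - mean) / std; also return the statistics needed
    for the inverse transform used for T_dT and the alarm threshold delta."""
    y = np.asarray(y, float)
    mu, sigma = float(y.mean()), float(y.std())
    return (y - mu) / sigma, mu, sigma

def z_denormalize(y_norm, mu, sigma):
    """Inverse Z-Score transform: y = y' * sigma + mu."""
    return np.asarray(y_norm, float) * sigma + mu
```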
Further, in step S3, Pearson correlation coefficients are calculated and analyzed to select the parameters strongly correlated with the heat shrink tube winding motor temperature, namely the 6 groups of parameters winding motor rotation speed, winding motor active power, winding motor voltage, winding machine tension, ambient temperature and winding motor temperature, as the training and test sample data for the subsequent model. The selected sample data are assembled into the following matrix:
Z = [n; P; U; F; K; T] ∈ R^{6×s}  (10)
In formula (10), Z ∈ R^{6×s} is the sample data matrix formed from the 6 groups of historical data, where n is the winding motor rotation speed set, P the winding motor active power set, U the winding motor voltage set, F the winding machine tension set, K the ambient temperature set, T the winding motor temperature set, and s the number of time steps in the data set. A sketch of the screening step is given below.
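A minimal NumPy sketch of the screening and matrix assembly of step S3 follows; the column ordering of `data`, the names list and the 0.6 cut-off applied programmatically here are illustrative assumptions (the description reports the resulting coefficients in Table 1 further below).

```python
import numpy as np

def screen_features(data, names, target_col, min_rho=0.6):
    """Keep the parameters whose |Pearson rho| with the winding motor temperature
    is at least min_rho, then stack them with the temperature into the 6 x s
    sample matrix Z of formula (10)."""
    target = data[:, target_col]
    keep = []
    for j, name in enumerate(names):
        if j == target_col:
            continue
        rho = np.corrcoef(data[:, j], target)[0, 1]
        print(f"{name:30s} rho = {rho:+.3f}")
        if abs(rho) >= min_rho:
            keep.append(j)
    return np.vstack([data[:, j] for j in keep] + [target])

# Assumed column order: speed, power, voltage, tension, ambient temp, humidity, motor temp.
# Z = screen_features(data, ["n", "P", "U", "F", "K", "AH", "T"], target_col=6)
```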
Further, in the step S4, the preprocessed Z matrix sample data set is randomly divided into a training set and a testing set according to conditions, wherein the training set accounts for 80% and is used for training an improved CNN-LSTM model, and the testing set accounts for 20% and is used for evaluating the improved CNN-LSTM model;
In step S5, the network model structure comprises 2 one-dimensional convolution layers, 1 fully connected layer and 2 LSTM layers. After each convolution layer, a BN layer and the Leaky-ReLU activation function normalize the data and apply the nonlinearity, which speeds up model convergence and alleviates the vanishing-gradient problem. The deep features affecting the winding motor temperature are thus extracted and fed into the LSTM so that the feature information can be predicted more accurately.
Further, in step S5, the parameters of a convolution layer are the weights of the convolution kernels and the bias of each channel, and the mathematical model of the convolution operation of the convolutional neural network CNN is:
x_j^m = f( Σ_{i ∈ M_{m-1}} x_i^{m-1} * k_{ij}^m + b_j^m )  (11)
In formula (11), x_j^m denotes the j-th output feature map of the m-th layer, M_{m-1} denotes the input feature set, m denotes the m-th layer of the CNN, k_{ij}^m denotes the convolution kernel used by the m-th layer convolution operation, b_j^m denotes the bias of the j-th feature map, * denotes the convolution operation, and f(·) denotes the activation function;
The pooling layer pools the features extracted by the convolution layer, reducing the feature dimension and the number of network parameters and, to a certain extent, preventing model overfitting. Maximum pooling is adopted: the maximum value of each pooling region is taken as the local feature, which makes the resulting feature map more sensitive to texture features. For the output X_i of the k-th filter of the convolution layer in the i-th dimension, the max-pooling operation is expressed as:
p_i(j) = max_{(j-1)w+1 ≤ t ≤ jw} X_i(t)  (12)
In formula (12), p_i(j) is the j-th output of the pooling layer and w is the width of the pooling kernel;
As a variant of the recurrent neural network RNN, the LSTM introduces gating units to alleviate the gradient vanishing and gradient explosion problems of RNN training and to improve prediction accuracy. At time t the memory cell C_t is the core of the LSTM neuron, and the gating structure of the network consists of a forget gate, an input gate and an output gate that control the information passed to C_t. Within one time step the LSTM neuron obtains C_t and the other state quantities through the following calculations:
f_t = σ(W_f[h_{t-1}, x_t] + b_f)  (13)
i_t = σ(W_i[h_{t-1}, x_t] + b_i)  (14)
o_t = σ(W_o[h_{t-1}, x_t] + b_o)  (15)
C̃_t = tanh(W_C[h_{t-1}, x_t] + b_C)  (16)
C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t  (17)
h_t = o_t ⊙ tanh(C_t)  (18)
In formulas (13)-(18), f_t, i_t and o_t are the states of the forget gate, input gate and output gate at time t, C_{t-1} is the LSTM memory cell at time t-1, C̃_t and C_t are the candidate memory cell and the memory cell at the current time, h_{t-1} is the LSTM output at time t-1, x_t is the current input, W_i, W_f, W_o, W_C and b_i, b_f, b_o, b_C are the weights and biases of the input gate, forget gate, output gate and candidate memory respectively, σ(·) is the sigmoid activation function, tanh(·) is the hyperbolic tangent function, and ⊙ denotes element-wise multiplication.
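The CNN-LSTM structure described in step S5 could be organized roughly as in the PyTorch sketch below. The kernel sizes, channel counts, pooling width, dropout rate and the choice of five exogenous input series are illustrative assumptions, not the concrete configuration of the disclosure.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of the CNN-LSTM of step S5: 2 one-dimensional convolution layers
    (each followed by BN and Leaky-ReLU), max pooling, 2 LSTM layers and a fully
    connected output head. Hyperparameters here are placeholders."""
    def __init__(self, n_features=5, n_kernels=32, lstm1=30, lstm2=30, fc=20, dropout=0.2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, n_kernels, kernel_size=3, padding=1),
            nn.BatchNorm1d(n_kernels), nn.LeakyReLU(),
            nn.Conv1d(n_kernels, n_kernels, kernel_size=3, padding=1),
            nn.BatchNorm1d(n_kernels), nn.LeakyReLU(),
            nn.MaxPool1d(kernel_size=2),                 # max pooling, formula (12)
        )
        self.lstm1 = nn.LSTM(n_kernels, lstm1, batch_first=True)
        self.lstm2 = nn.LSTM(lstm1, lstm2, batch_first=True)
        self.head = nn.Sequential(nn.Dropout(dropout), nn.Linear(lstm2, fc),
                                  nn.LeakyReLU(), nn.Linear(fc, 1))

    def forward(self, x):                # x: (batch, n_features, time_steps)
        feats = self.cnn(x)              # (batch, n_kernels, time_steps // 2)
        feats = feats.permute(0, 2, 1)   # LSTM expects (batch, time, channels)
        out, _ = self.lstm1(feats)
        out, _ = self.lstm2(out)
        return self.head(out[:, -1, :])  # one temperature prediction per sequence
```

As a usage example, `CNNLSTM()(torch.randn(8, 5, 60))` returns a tensor of shape (8, 1), i.e. one predicted winding motor temperature per sequence in the batch.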
Further, in the step S6, the method specifically includes the following steps:
Step S6.1, initializing the IPSO hyperparameters and determining the initial values of the IPSO parameters;
Step S6.2, initializing the particle positions and velocities by randomly generating a population x_i = (n_1, m_1, m_2, m_3, α, p, batchsize), where n_1 is the number of CNN convolution kernels, m_1 and m_2 are the numbers of neurons in the 2 LSTM hidden layers, m_3 is the number of neurons in the fully connected layer, α is the initial learning rate, p is the neuron dropout rate and batchsize is the batch size. The value ranges of the parameters are:
n_1 ∈ [2, 64], m_1 ∈ [1, 30], m_2 ∈ [1, 30], m_3 ∈ [1, 20], α ∈ [0.001, 0.005], p ∈ [0.01, 0.90], batchsize ∈ [1, 50]; a sketch of the particle encoding is given below.
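A small sketch of how the 7-dimensional particle of step S6.2 can be bounded and decoded into CNN-LSTM hyperparameters is given below; the dictionary layout and the rounding of the integer-valued dimensions are illustrative assumptions.

```python
import numpy as np

# Search-space bounds for x_i = (n1, m1, m2, m3, alpha, p, batchsize), step S6.2.
BOUNDS = {
    "n1": (2, 64), "m1": (1, 30), "m2": (1, 30), "m3": (1, 20),
    "alpha": (0.001, 0.005), "p": (0.01, 0.90), "batchsize": (1, 50),
}
LOWER = np.array([lo for lo, _ in BOUNDS.values()], dtype=float)
UPPER = np.array([hi for _, hi in BOUNDS.values()], dtype=float)

def decode(position):
    """Map a 7-dimensional particle position onto CNN-LSTM hyperparameters;
    integer-valued dimensions are rounded, all values are clipped to the bounds."""
    n1, m1, m2, m3, alpha, p, batch = np.clip(position, LOWER, UPPER)
    return dict(n1=int(round(n1)), m1=int(round(m1)), m2=int(round(m2)),
                m3=int(round(m3)), alpha=float(alpha), p=float(p),
                batchsize=int(round(batch)))
```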
Further, in step S7, the Circle mapping method is introduced to initialize the particle swarm so as to obtain a more uniform and diversified initial population and improve the convergence speed and accuracy of the algorithm. The chaotic sequence generated by the Circle mapping is:
X_{i+1} = mod( X_i + f - (e / (2π)) · sin(2π X_i), 1 )  (21)
where X_i is the i-th value of the chaotic sequence, X_{i+1} is the (i+1)-th value, e = 0.5, f = 0.2 and mod is the modulo operator. The population initialization with Circle mapping comprises velocity initialization and position initialization, carried out as follows:
v_{i,j} = v_lower_b + (v_up_b - v_lower_b) · X_{i,j}  (22)
x_{i,j} = x_lower_b + (x_up_b - x_lower_b) · X'_{i,j}  (23)
In formulas (22)-(23), v_lower_b and v_up_b are the lower and upper limits of the particle velocity, x_lower_b and x_up_b are the lower and upper limits of the particle position, and X_{i,j} and X'_{i,j} are the chaotic sequence values generated by the Circle mapping in the corresponding particle dimension; a sketch is given below.
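A sketch of the Circle-mapping initialization of formulas (21)-(23) follows; the placement of e and f inside the map and the chaotic seeds x0 are assumptions made for illustration.

```python
import numpy as np

def circle_map_sequence(length, e=0.5, f=0.2, x0=0.37):
    """Chaotic sequence in (0, 1) generated by the Circle map (formula (21));
    the parameter placement and the seed x0 are assumed for this sketch."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = np.mod(x + f - (e / (2 * np.pi)) * np.sin(2 * np.pi * x), 1.0)
        xs[i] = x
    return xs

def circle_init(n_particles, lower, upper, v_lower, v_upper):
    """Position and velocity initialization of formulas (22)-(23)."""
    dim = len(lower)
    chaos_x = circle_map_sequence(n_particles * dim).reshape(n_particles, dim)
    chaos_v = circle_map_sequence(n_particles * dim, x0=0.71).reshape(n_particles, dim)
    positions = lower + (upper - lower) * chaos_x
    velocities = v_lower + (v_upper - v_lower) * chaos_v
    return positions, velocities
```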
Further, step S8 specifically includes the following steps:
Step S8.1, introducing a dynamically and nonlinearly varying inertia weight ω (formula (24)), where ω(k) is the inertia weight of the k-th iteration, ω_max and ω_min are the maximum value 0.9 and minimum value 0.4 of the inertia weight, t_max is the maximum number of iterations and k is the current iteration number;
Step S8.2, introducing asymmetric learning factors, comprising the individual learning factor c_1 and the social learning factor c_2, with c_1 and c_2 adjusted adaptively in different search phases (formulas (25)-(26)). In formulas (25)-(26), c_1(k) and c_2(k) are the individual and social learning factors of the k-th iteration, c_1_s and c_1_e are the initial value 2.5 and final value 0.5 of the individual learning factor, c_2_s and c_2_e are the initial value 1 and final value 2.25 of the social learning factor, t_max is the maximum number of iterations and k is the iteration number. A sketch of both schedules is given below.
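The exact nonlinear law of formula (24) and the schedules of formulas (25)-(26) are not reproduced in the text above, so the sketch below assumes a quadratic decay for the inertia weight and linear schedules for the asymmetric learning factors; only the end-point values (0.9/0.4, 2.5/0.5 and 1/2.25) are taken from the description.

```python
def inertia_weight(k, t_max, w_max=0.9, w_min=0.4):
    """Dynamic nonlinear inertia weight of step S8.1; the quadratic decay is an
    assumed example of a nonlinear schedule, not the patented formula (24)."""
    return w_min + (w_max - w_min) * (1.0 - k / t_max) ** 2

def learning_factors(k, t_max, c1_s=2.5, c1_e=0.5, c2_s=1.0, c2_e=2.25):
    """Asymmetric learning factors of step S8.2: c1 decays from 2.5 to 0.5 while
    c2 grows from 1 to 2.25; linear schedules are assumed for illustration."""
    c1 = c1_s + (c1_e - c1_s) * k / t_max
    c2 = c2_s + (c2_e - c2_s) * k / t_max
    return c1, c2
```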
Further, in step S11, the optimized initial particles are used as the parameters of the CNN-LSTM model and the mean square error MSE is used as the fitness of the IPSO algorithm:
MSE = (1/N) Σ_{i=1}^{N} (ŷ_i - y_i)^2  (27)
In formula (27), ŷ_i is the predicted value, y_i is the actual value and N is the number of samples.
Further, in step S17, the mean absolute error threshold MAE_th_max, the mean absolute percentage error threshold MAPE_th_max, the root mean square error threshold RMSE_th_max and the coefficient of determination threshold R²_th_min are introduced to comprehensively evaluate the prediction performance of the IPSO-CNN-LSTM model.
Further, in the step S18, the method specifically includes the following steps:
Step S18.1, calculating the evaluation indexes of the IPSO-CNN-LSTM model:
MAE = (1/N) Σ_{i=1}^{N} |ŷ_i - y_i|  (28)
MAPE = (100%/N) Σ_{i=1}^{N} |(ŷ_i - y_i) / y_i|  (29)
RMSE = sqrt( (1/N) Σ_{i=1}^{N} (ŷ_i - y_i)^2 )  (30)
R² = 1 - Σ_{i=1}^{N} (ŷ_i - y_i)^2 / Σ_{i=1}^{N} (y_i - ȳ)^2  (31)
In formulas (28)-(31), ŷ_i is the predicted value, y_i is the actual value, ȳ is the mean of the actual values and N is the number of samples;
Step S18.2, judging the threshold ranges by checking in turn whether each evaluation index of the IPSO-CNN-LSTM model lies within its specified threshold range:
MAE ≤ MAE_th_max, MAPE ≤ MAPE_th_max, RMSE ≤ RMSE_th_max, R² ≥ R²_th_min  (32)
If the inequality group (32) is satisfied, the prediction accuracy of the IPSO-CNN-LSTM model is sufficiently high; MAE_final at the end of model training is recorded as the absolute upper bound of the predicted residual for the alarm threshold and execution continues downwards, otherwise the procedure is repeated from step S14. A sketch of the evaluation and threshold check is given below.
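A NumPy sketch of the evaluation indexes of formulas (28)-(31) and of the threshold check of inequality group (32) is shown below; the threshold values are those of Table 3 further below, and the function names are illustrative.

```python
import numpy as np

THRESHOLDS = dict(mae=0.5, mape=4.5, rmse=0.5, r2=0.9)   # Table 3 values

def evaluate(y_true, y_pred):
    """Formulas (28)-(31): MAE, MAPE (%), RMSE and R^2 on the test set."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return dict(mae=mae, mape=mape, rmse=rmse, r2=r2)

def passes_thresholds(metrics, th=THRESHOLDS):
    """Inequality group (32): accept the model only if every index meets its threshold."""
    return (metrics["mae"] <= th["mae"] and metrics["mape"] <= th["mape"]
            and metrics["rmse"] <= th["rmse"] and metrics["r2"] >= th["r2"])
```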
Further, in step S19, the specific calculation formula is:
δ = MAE_final × σ_test + μ_test  (33)
In formula (33), MAE_final is the final mean absolute error MAE of the optimal IPSO-CNN-LSTM prediction model, σ_test is the standard deviation of the test set and μ_test is the mean of the test set.
The beneficial effects of the invention are as follows:
The method model is combined with a sliding window algorithm that counts the frequency of temperature anomalies to realize intelligent graded early warning of the heat shrink tube winding motor temperature acquired in real time. To address the shortcomings of the traditional PSO algorithm, a Circle-mapping initialization of the population is introduced to obtain a more uniform and diversified initial population, which helps improve the convergence speed and accuracy of the algorithm; the nonlinear inertia weight better balances the global and local search capabilities of the algorithm, and the asymmetric learning factors give the particles stronger global convergence ability during the search. For the highly nonlinear data of the parameters related to the winding motor temperature, the parameters of the constructed CNN-LSTM model are optimized with the IPSO algorithm, which prevents the model from falling into local optima and improves its generalization ability. To address the lag of existing winding motor temperature warning, relevant parameters such as winding motor temperature, winding motor rotation speed, winding motor active power, winding motor voltage, winding machine tension and ambient temperature are collected in real time, and the sliding-window anomaly-frequency statistics are combined with the optimal IPSO-CNN-LSTM prediction model to realize trend-graded early warning of the winding motor temperature, effectively solving the problem that existing winding motor temperature monitoring cannot provide high-precision advance prediction and trend-graded early warning because the relevant parameters are unstable, strongly fluctuating and nonlinear.
Drawings
FIG. 1 is a schematic diagram of a CNN-LSTM network architecture of the present invention;
FIG. 2 is a schematic diagram of the LSTM neuron structure of the present invention;
FIG. 3 is a schematic diagram of a heat shrinkage tube winding motor temperature prediction and monitoring flow based on IPSO-CNN-LSTM;
Fig. 4 is a schematic diagram of a hierarchical early warning flow for counting abnormal temperature frequency by using a sliding window algorithm.
Detailed Description
The invention will be described in detail below with reference to the drawings in combination with embodiments.
A temperature prediction control method for a heat shrinkage tube winding motor comprises the following steps:
Step S1, collecting a historical data set. All data in this embodiment of the invention are acquired from the SCADA system of the heat shrink tube winding machine, with the data interval covering the period of normal operation from May 1, 2024 to June 30, 2024. The acquired data sources are 7 types of parameters at the same moment: winding motor rotation speed, winding motor active power, winding motor voltage, winding motor temperature, winding machine tension, ambient temperature and ambient humidity. The parameter values at the same sampling moment form one group, the sampling interval between groups is 30 s, and 164160 groups of data are obtained in total;
step S2, data processing, in order to improve the data quality, remove redundant information, process missing values and normalize data, specifically comprising the following steps:
Step S2.1, data cleaning. Anomalous data in the raw data fall mainly into 2 classes, duplicates and missing values, which bias the model. A moving window is combined with per-parameter anomaly thresholds, with the window size set to n. First, all collected parameters are traversed and duplicate values found in the data set are deleted. Second, the data of n consecutive time steps are compared with the corresponding anomaly thresholds; if the number of anomalies is smaller than m, the anomalous values are interpolated with Newton polynomial interpolation, otherwise all feature data of the time steps containing the anomalous parameters are deleted. The Newton interpolation formula is:
N_n(x) = f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + ... + f[x_0, x_1, ..., x_n](x - x_0)(x - x_1)...(x - x_{n-1})  (1)
f[x_0, x_1, ..., x_n] = (f[x_1, ..., x_n] - f[x_0, ..., x_{n-1}]) / (x_n - x_0)  (2)
In formulas (1)-(2), (x_i, f(x_i)) denotes the normal sample data in the window n, f[x_0, x_1, ..., x_n] denotes the n-th order divided difference (whose value does not depend on the order of the subscripts), x denotes the time coordinate of the anomalous data to be interpolated, and N_n(x) is the interpolated value computed by Newton interpolation. In actual interpolation only the result N_{n-1}(x) of the previous interpolation needs to be stored, and the next value is obtained by adding a new term to N_{n-1}(x), which saves computation and storage; in this embodiment n = 10 and m = 2;
Step S2.2, Z-Score data normalization. Normalizing the data speeds up convergence of gradient descent during model training and improves model accuracy by mapping all parameters onto a common scale with zero mean and unit standard deviation. The calculation formula is:
y' = (y - ȳ) / σ  (3)
In formula (3), y is a data value, i.e. the original value before normalization, ȳ is the mean of the data set, σ is the standard deviation, and y' is the normalized value;
Step S3, data screening. For the normalized winding motor rotation speed, winding motor active power, winding motor voltage, winding machine tension, ambient temperature and ambient humidity, the Pearson correlation coefficient (Pearson Correlation Coefficient) with the winding motor temperature is calculated and the strongly correlated parameters are screened for subsequent model training. For each parameter series X (with X taken in turn as n, P, U, F, K and AH) and the winding motor temperature series T, the correlation coefficient is:
ρ_XT = Σ_{i=1}^{s} (X_i - X̄)(T_i - T̄) / sqrt( Σ_{i=1}^{s} (X_i - X̄)^2 · Σ_{i=1}^{s} (T_i - T̄)^2 )  (4)-(9)
In formulas (4)-(9), s is the number of samples in the data set; n, P, U, F, K and AH are the winding motor rotation speed, winding motor active power, winding motor voltage, winding machine tension, ambient temperature and ambient humidity sets respectively; and ρ_nT, ρ_PT, ρ_UT, ρ_FT, ρ_KT and ρ_AHT are the corresponding Pearson correlation coefficients between each parameter set and the winding motor temperature set T. The results are shown in Table 1:
Table 1. Pearson correlation coefficient of each parameter with the winding motor temperature

Parameter class                 Correlation coefficient (ρ)    Significance level
Winding motor rotation speed    0.912                          0.000
Winding motor active power      0.855                          0.000
Winding motor voltage           0.824                          0.000
Winding machine tension         0.645                          0.000
Ambient humidity                0.158                          -0.256
Ambient temperature             0.769                          0.000
Comparing the correlation coefficients in Table 1, a Pearson coefficient in [0.6, 0.8] generally indicates strong correlation, a coefficient in [0.8, 1.0] indicates extremely strong correlation, and values below 0.6 indicate weak correlation. Accordingly, 6 groups of parameters, namely winding motor rotation speed, winding motor active power, winding motor voltage, winding machine tension, ambient temperature and winding motor temperature, are selected as the training and test sample data for the subsequent model, and the selected sample data are assembled into the following matrix:
Z = [n; P; U; F; K; T] ∈ R^{6×s}  (10)
In formula (10), Z ∈ R^{6×s} is the sample data matrix formed from the 6 groups of historical data, where n is the winding motor rotation speed set, P the winding motor active power set, U the winding motor voltage set, F the winding machine tension set, K the ambient temperature set, T the winding motor temperature set, and s the number of time steps in the data set;
S4, dividing a training data set and a test data set, and randomly dividing the preprocessed Z matrix sample data set into a training set and a test set according to conditions, wherein the training set accounts for 80% and is used for training an improved CNN-LSTM model, and the test set accounts for 20% and is used for evaluating the improved CNN-LSTM model;
Step S5, constructing the improved CNN-LSTM network model structure, as shown in FIG. 1. To address the insufficient feature-extraction capability of traditional winding motor temperature prediction models, this embodiment introduces a prediction model combining CNN and LSTM. The network structure comprises 2 one-dimensional convolution layers, 1 fully connected layer and 2 LSTM layers; after each convolution layer a BN layer and the Leaky-ReLU activation function normalize the data and apply the nonlinearity, which speeds up model convergence and alleviates the vanishing-gradient problem, and the extracted deep features affecting the winding motor temperature are fed into the LSTM so that the feature information can be predicted more accurately. A convolutional neural network (Convolutional Neural Network, CNN) generally comprises an input layer, convolution layers, pooling layers, a fully connected layer and an output layer. The convolution layer is the core module of the CNN; its parameters are the weights of the convolution kernels and the bias of each channel, and the mathematical model of the convolution operation is:
x_j^m = f( Σ_{i ∈ M_{m-1}} x_i^{m-1} * k_{ij}^m + b_j^m )  (11)
In formula (11), x_j^m denotes the j-th output feature map of the m-th layer, M_{m-1} denotes the input feature set, m denotes the m-th layer of the CNN, k_{ij}^m denotes the convolution kernel used by the m-th layer convolution operation, b_j^m denotes the bias of the j-th feature map, * denotes the convolution operation, and f(·) denotes the activation function; in this embodiment the Leaky-ReLU activation function is used;
The pooling layer pools the features extracted by the convolution layer, reducing the feature dimension and the number of network parameters and, to a certain extent, preventing model overfitting. Maximum pooling is adopted: the maximum value of each pooling region is taken as the local feature, which makes the resulting feature map more sensitive to texture features. For the output X_i of the k-th filter of the convolution layer in the i-th dimension, the max-pooling operation is expressed as:
p_i(j) = max_{(j-1)w+1 ≤ t ≤ jw} X_i(t)  (12)
In formula (12), p_i(j) is the j-th output of the pooling layer and w is the width of the pooling kernel;
As a variant LSTM of the recurrent neural network RNN (Recurrent Neural Network, RNN), a gating unit is introduced to alleviate gradient extinction and gradient explosion problems existing during RNN training, and to improve prediction accuracy, and its internal structure is shown in fig. 2, and is an internal structure of an LSTM neuron at time t, where the current time memory unit C t is a core part of the LSTM neuron, and the gating structure of the network is composed of a forgetting gate, an input gate and an output gate, so as to determine transmission of information to C t, and in one time step, the LSTM neuron obtains values of C t and other state quantities in the unit through a series of calculations, and a specific calculation formula is as follows:
ft=σ(Wf[ht-1,xt]+bf) (13)
it=σ(Wi[ht-1,xt]+bi) (14)
ot=σ(Wo[ht-1,xt]+bo) (15)
h t=ot⊙tanh(Ct) (18), wherein in the formulas (13) - (18), f t、it、ot is the states of an input gate, a forgetting gate and an output gate at the time t respectively, and C t-1 is a memory unit of LSTM at the time t-1; W i、Wf、Wo、WC and b i、bf、bo、bC are respectively input gate, forget gate, output gate, weight and bias of candidate memory, sigma (&) is a sigmoid activation function, and tan h (&) is a hyperbolic tangent function, as indicated by the following, which are multiplied by each element according to bit;
Step S6, initializing the particle swarm parameters and the velocity and position information, and constructing the IPSO-CNN-LSTM model, whose flow is shown in FIG. 3. When the CNN-LSTM is used to predict the winding motor temperature, the prediction accuracy is strongly correlated with the number of convolution kernels, the number of neurons in the fully connected layer and the number of hidden LSTM neurons; the invention therefore optimizes these parameters with an improved particle swarm optimization (PSO) algorithm (abbreviated IPSO) to improve the generalization capability of the model;
PSO is a swarm intelligence optimization algorithm that obtains the global optimum through cooperation and information sharing among particles and is suited to multi-objective, nonlinear and multi-variable problems. Each particle is described by 3 quantities: velocity, position and fitness value, where the fitness value, computed from the fitness function, measures the quality of the particle. Let the search space dimension be N and the number of particles be m, with particles x_1, x_2, ..., x_m and corresponding velocities v_1, v_2, ..., v_m; the position of the j-th particle is x_j = (x_j1, x_j2, ..., x_jN) and its velocity is v_j = (v_j1, v_j2, ..., v_jN). The best position found by the i-th particle is denoted p_i = (p_i1, p_i2, ..., p_iN) and the best position found by the whole swarm is denoted p_g = (p_g1, p_g2, ..., p_gN). The position and velocity of the j-th component of the i-th particle are updated as:
v_ij(k+1) = ω·v_ij(k) + c_1·r_1·[p_ij(k) - x_ij(k)] + c_2·r_2·[p_gj(k) - x_ij(k)]  (19)
x_ij(k+1) = x_ij(k) + v_ij(k+1)  (20)
In formulas (19)-(20), k is the iteration number, ω is the inertia weight used to balance the local and global search capabilities of the PSO algorithm, c_1 and c_2 are the learning factors reflecting the self-learning and social-learning abilities of the particles (usually set to 2), and r_1 and r_2 are random numbers uniformly distributed in [0, 1]; a sketch of one update step is given below;
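One velocity and position update of formulas (19)-(20), vectorized over the whole swarm, could look like the NumPy sketch below; the array shapes and the random-number generator argument are illustrative assumptions.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w, c1, c2, rng=None):
    """One particle swarm update, formulas (19)-(20):
    v <- w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x);  x <- x + v.
    x, v and p_best have shape (n_particles, dim); g_best has shape (dim,)."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```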
Step S7, optimizing the population;
Step S8, optimizing the inertia weight and learning factors. In the standard PSO algorithm the inertia weight and learning factors are usually set to constants or decreased linearly, which can make the algorithm fall into a local optimum too early and makes it difficult to balance global and local search; the particle swarm optimization algorithm is therefore improved with respect to both the inertia weight and the learning factors;
Step S9, updating the velocity, position, inertia weight and learning factors of the particles;
Step S10, inputting the parameters into the CNN-LSTM neural network for training;
Step S11, defining a fitness function;
Step S12, calculating the particle fitness and updating the optimal positions;
Step S13, model calculation and judgment: if the maximum number of iterations is reached, continuing downwards, otherwise repeating step S9;
Step S14, obtaining the optimal parameters, including the IPSO-optimized number of convolution kernels, numbers of neurons, learning rate, number of iterations and batch size;
Step S15, inputting the optimal parameters found by the IPSO algorithm into the IPSO-CNN-LSTM prediction model;
Step S16, training the IPSO-CNN-LSTM model;
Step S17, inputting the historical test set to evaluate the model and setting thresholds for the various model evaluation indexes;
Step S18, calculating the various evaluation indexes of the IPSO-CNN-LSTM model and comparing them with the corresponding thresholds;
Step S19, outputting the optimal IPSO-CNN-LSTM prediction model, and processing the residual alarm threshold δ with the Z-Score inverse normalization method;
Step S20, using a sliding window algorithm to count the frequency of temperature anomalies of the heat shrink tube winding motor and issue graded early warnings.
In the step S20, as shown in fig. 4, the process includes the following specific steps:
Step S20.1, real-time data acquisition;
Step S20.2, initializing the current temperature anomaly frequency in the sliding window: f_T = 0;
Step S20.3, inputting the temperature-related parameters of the heat shrink tube winding motor collected in real time over a period of time into the optimal IPSO-CNN-LSTM prediction model;
Step S20.4, outputting the corresponding predicted temperature set of the heat shrink tube winding motor and, at the same time, calculating the mean absolute error between the predicted and actual values of the winding motor temperature;
Step S20.5, applying the Z-Score inverse normalization method to the predicted temperature set and the mean absolute error respectively, and outputting the corresponding temperature set T_ΔT and the updated mean absolute error MAE_update;
Step S20.6, if T_i > maxT, where T_i ∈ T_ΔT and maxT is the upper temperature limit (the limit for a large threshold exceedance, generally set by combining the temperature rating provided for the heat shrink tube winding motor with the evaluation indexes of the optimal IPSO-CNN-LSTM prediction model), jumping to step S20.10; otherwise continuing with step S20.7;
Step S20.7, setting a sliding window of length L = 10 and sliding it step by step along the prediction time;
Step S20.8, if MAE_update is less than the residual alarm threshold δ, incrementing the current temperature anomaly frequency in the sliding window by 1, i.e. f_T = f_T + 1; otherwise keeping it at its current value, i.e. f_T = f_T + 0;
Step S20.9, if the current temperature anomaly frequency in the sliding window satisfies f_T ≥ 6, executing step S20.10; otherwise jumping to step S20.11;
Step S20.10, the heat shrink tube winding machine monitoring and early-warning system automatically displays a first-level warning, i.e. a red warning, indicating that the winding motor is in an abnormal state, and sends an instruction to the winding machine maintenance operator to stop the machine in time and handle the abnormality;
Step S20.11, if the current temperature anomaly frequency in the sliding window satisfies 2 ≤ f_T ≤ 5, executing step S20.12; otherwise jumping to step S20.13;
Step S20.12, the heat shrink tube winding machine monitoring and early-warning system automatically displays a second-level warning, i.e. a yellow warning, indicating that an abnormal phenomenon has appeared in the winding motor, and sends an instruction to the maintenance operator to observe the real-time operating state of the equipment in the work area, evaluate it, and then decide whether corresponding handling measures need to be taken;
Step S20.13, the heat shrink tube winding machine monitoring and early-warning system automatically displays green, indicating that the equipment temperature is normal, and jumps back to step S20.2 for the next round of sliding-window counting and graded early warning of winding motor temperature anomalies.
The step S6 specifically includes the following steps:
Step S6.1, initializing the IPSO hyperparameters and determining their initial values, which are shown in Table 2:
Table 2. IPSO parameter settings

IPSO parameter                                    Parameter value
Population size                                   30
Particle dimension                                7
Maximum number of iterations                      50
Individual learning factor initial value c_1_s    2.5
Individual learning factor final value c_1_e      0.5
Social learning factor initial value c_2_s        1
Social learning factor final value c_2_e          2.25
Inertia weight ω_max                              0.9
Inertia weight ω_min                              0.4
Step S6.2, initializing the particle positions and velocities by randomly generating a population x_i = (n_1, m_1, m_2, m_3, α, p, batchsize), where n_1 is the number of CNN convolution kernels, m_1 and m_2 are the numbers of neurons in the 2 LSTM hidden layers, m_3 is the number of neurons in the fully connected layer, α is the initial learning rate, p is the neuron dropout rate (Dropout Rate) and batchsize is the batch size (Batch Size). The value ranges of the parameters are:
n_1 ∈ [2, 64], m_1 ∈ [1, 30], m_2 ∈ [1, 30], m_3 ∈ [1, 20], α ∈ [0.001, 0.005], p ∈ [0.01, 0.90], batchsize ∈ [1, 50].
In step S7, the initial population is usually generated by random initialization, but the resulting uneven distribution of individuals fundamentally limits the convergence performance of the algorithm. Because chaotic maps are random, ergodic and regular, they can be used to improve the initial population; the Circle mapping method is therefore introduced to initialize the particle swarm and obtain a more uniform and diversified initial population, improving the convergence speed and accuracy of the algorithm. The Circle mapping is a chaotic map that generates chaotic numbers between 0 and 1, and the chaotic sequence it produces is:
X_{i+1} = mod( X_i + f - (e / (2π)) · sin(2π X_i), 1 )  (21)
where X_i is the i-th value of the chaotic sequence, X_{i+1} is the (i+1)-th value, e = 0.5, f = 0.2 and mod is the modulo operator. The population initialization with Circle mapping comprises velocity initialization and position initialization, carried out as follows:
v_{i,j} = v_lower_b + (v_up_b - v_lower_b) · X_{i,j}  (22)
x_{i,j} = x_lower_b + (x_up_b - x_lower_b) · X'_{i,j}  (23)
In formulas (22)-(23), v_lower_b and v_up_b are the lower and upper limits of the particle velocity, x_lower_b and x_up_b are the lower and upper limits of the particle position, and X_{i,j} and X'_{i,j} are the chaotic sequence values generated by the Circle mapping in the corresponding particle dimension.
Step S8 specifically includes the following steps:
Step S8.1, introducing a dynamically and nonlinearly varying inertia weight. The inertia weight determines how strongly a particle keeps its previous motion state: a larger ω preserves the global search capability of the PSO algorithm but lowers convergence accuracy and slows convergence, while a smaller ω strengthens local search and speeds up convergence but easily leads to local optima; a fixed ω therefore limits both the global search capability and the convergence speed of PSO. To better balance global and local search in the exploration phase and improve convergence accuracy, a dynamically and nonlinearly varying inertia weight ω is selected (formula (24)), where ω(k) is the inertia weight of the k-th iteration, ω_max and ω_min are the maximum value 0.9 and minimum value 0.4 of the inertia weight, t_max is the maximum number of iterations and k is the iteration number;
Step S8.2, introducing asymmetric learning factors, comprising the individual learning factor c_1 and the social learning factor c_2: a larger c_1 favours global search while a larger c_2 favours local search, and c_1 and c_2 are adjusted adaptively in different search phases (formulas (25)-(26)). In formulas (25)-(26), c_1(k) and c_2(k) are the individual and social learning factors of the k-th iteration, c_1_s and c_1_e are the initial value 2.5 and final value 0.5 of the individual learning factor, c_2_s and c_2_e are the initial value 1 and final value 2.25 of the social learning factor, t_max is the maximum number of iterations and k is the iteration number.
In step S11, the optimized initial particles are used as the parameters of the CNN-LSTM model and the mean square error MSE is used as the fitness of the IPSO algorithm:
MSE = (1/N) Σ_{i=1}^{N} (ŷ_i - y_i)^2  (27)
In formula (27), ŷ_i is the predicted value, y_i is the actual value and N is the number of samples.
In step S12, p_i and p_g are calculated and determined, and the individual and global optimal positions of the population are updated according to formulas (19) and (20).
In step S17, the mean absolute error threshold MAE_th_max, the mean absolute percentage error threshold MAPE_th_max, the root mean square error threshold RMSE_th_max and the coefficient of determination threshold R²_th_min are introduced to comprehensively evaluate the prediction performance of the IPSO-CNN-LSTM model; the specific evaluation index thresholds are shown in Table 3:
Table 3. Evaluation index thresholds of the IPSO-CNN-LSTM model

MAE_th_max    MAPE_th_max/%    RMSE_th_max    R²_th_min
0.5           4.5              0.5            0.9
The step S18 specifically includes the following steps:
Step S18.1, calculating the evaluation indexes of the IPSO-CNN-LSTM model:
MAE = (1/N) Σ_{i=1}^{N} |ŷ_i - y_i|  (28)
MAPE = (100%/N) Σ_{i=1}^{N} |(ŷ_i - y_i) / y_i|  (29)
RMSE = sqrt( (1/N) Σ_{i=1}^{N} (ŷ_i - y_i)^2 )  (30)
R² = 1 - Σ_{i=1}^{N} (ŷ_i - y_i)^2 / Σ_{i=1}^{N} (y_i - ȳ)^2  (31)
In formulas (28)-(31), ŷ_i is the predicted value, y_i is the actual value, ȳ is the mean of the actual values and N is the number of samples;
Step S18.2, judging the threshold ranges by checking in turn whether each evaluation index of the IPSO-CNN-LSTM model lies within its specified threshold range:
MAE ≤ MAE_th_max, MAPE ≤ MAPE_th_max, RMSE ≤ RMSE_th_max, R² ≥ R²_th_min  (32)
If the inequality group (32) is satisfied, the prediction accuracy of the IPSO-CNN-LSTM model is sufficiently high; MAE_final at the end of model training is recorded as the absolute upper bound of the predicted residual for the alarm threshold and execution continues downwards, otherwise the procedure is repeated from step S14.
In the step S19, a specific calculation formula is as follows:
δ = MAE_final × σ_test + μ_test  (33)
In formula (33), MAE_final is the final mean absolute error MAE of the optimal IPSO-CNN-LSTM prediction model, σ_test is the standard deviation of the test set and μ_test is the mean of the test set.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A method for predicting and controlling the temperature of a heat shrink tube winding motor, characterized in that the method comprises the following steps:
Step S1: historical data set collection;
Step S2: data processing;
Step S3: data screening;
Step S4: dividing the data into a training data set and a test data set;
Step S5: constructing the improved CNN-LSTM network model structure;
Step S6: initializing the particle swarm parameters and the velocity and position information;
Step S7: population optimization;
Step S8: optimizing the inertia weight and the learning factors;
Step S9: updating the velocity, position, inertia weight and learning factors of the particles;
Step S10: inputting the parameters into the CNN-LSTM neural network for training;
Step S11: defining the fitness function;
Step S12: calculating particle fitness and updating the optimal positions;
Step S13: model calculation and judgment: if the maximum number of iterations has been reached, continue downward; otherwise, repeat step S9;
Step S14: obtaining the optimal parameters;
Step S15: inputting the optimal parameters into the IPSO-CNN-LSTM prediction model;
Step S16: IPSO-CNN-LSTM model training;
Step S17: inputting the historical test set to evaluate the model, and setting thresholds for the various model evaluation indexes;
Step S18: calculating the various evaluation indexes of the IPSO-CNN-LSTM model and comparing them with the corresponding thresholds;
Step S19: outputting the optimal IPSO-CNN-LSTM prediction model, and processing the residual alarm threshold δ with the Z-Score denormalization method;
Step S20: using a sliding window algorithm to count the frequency of temperature anomalies of the heat shrink tube winding motor and issue graded early warnings.
2. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 1, characterized in that step S20 comprises the following specific steps:
Step S20.1: real-time data collection;
Step S20.2: initialize the current temperature anomaly frequency within the sliding window: f_T = 0;
Step S20.3: input the temperature-related parameters of the heat shrink tube winding motor, collected in real time over a time period, into the optimal IPSO-CNN-LSTM prediction model;
Step S20.4: output the corresponding set of predicted temperatures of the heat shrink tube winding motor; at the same time, calculate the mean absolute error between the predicted and actual values of the winding motor temperature;
Step S20.5: apply the Z-Score denormalization method to the predicted temperature set and to the mean absolute error, and output the corresponding T_ΔT and mean absolute error MAE_update;
Step S20.6: if T_i > maxT, where T_i ∈ T_ΔT and maxT is the upper temperature limit, i.e. the limit corresponding to a large excess over the threshold, generally set by combining the temperature rating provided for the winding motor with the evaluation-index settings of the optimal IPSO-CNN-LSTM prediction model, jump to step S20.10; otherwise, continue with step S20.7;
Step S20.7: set a sliding window of length L = 10 and slide it step by step along the prediction time;
Step S20.8: if MAE_update is less than the residual alarm threshold δ, increase the current temperature anomaly frequency within the sliding window by 1, i.e. f_T = f_T + 1; otherwise, keep it unchanged, i.e. f_T = f_T + 0;
Step S20.9: if the current temperature anomaly frequency within the sliding window satisfies f_T ≥ 6, execute step S20.10; otherwise, jump to step S20.11;
Step S20.10: the heat shrink tube winding machine monitoring and early warning system automatically displays a first-level (red) warning, indicating that the winding motor is in an abnormal state, and sends an instruction to the winder maintenance operator to stop the machine in time and handle the anomaly;
Step S20.11: if the current temperature anomaly frequency within the sliding window satisfies 2 ≤ f_T ≤ 5, execute step S20.12; otherwise, jump to step S20.13;
Step S20.12: the monitoring and early warning system automatically displays a second-level (yellow) warning, indicating that an anomaly has appeared in the winding motor, and sends an instruction to the winder maintenance operator to go to the equipment area, observe the real-time operating state of the equipment, and assess the situation before deciding whether to take corresponding handling measures;
Step S20.13: the monitoring and early warning system automatically displays green, indicating that the equipment temperature is normal; jump to step S20.2 for the next round of sliding-window statistics and graded early warning of the winding motor temperature anomaly frequency.
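The sliding-window grading of claim 2 (steps S20.6 to S20.13) can be sketched as follows. This is a minimal Python illustration, not the patented implementation: the function and variable names are invented, the counter follows step S20.8 as written (it is incremented when MAE_update is below δ), and resetting the window after a hard over-temperature alarm is an added assumption.

    from collections import deque

    def grade(f_t):
        """Map the anomaly frequency in the window to a warning level
        (steps S20.9 to S20.13)."""
        if f_t >= 6:
            return "red"      # level-1 warning: stop the machine, handle the anomaly
        if 2 <= f_t <= 5:
            return "yellow"   # level-2 warning: operator observes the equipment on site
        return "green"        # normal temperature

    def sliding_window_warnings(mae_updates, predicted_temps, delta, max_t, window_len=10):
        """Count temperature anomalies in a sliding window of length L = 10
        (step S20.7) and return one warning level per prediction step."""
        window = deque(maxlen=window_len)
        levels = []
        for mae_u, t_i in zip(mae_updates, predicted_temps):
            if t_i > max_t:          # step S20.6: hard upper temperature limit
                levels.append("red")
                window.clear()       # assumption: restart the count after a hard alarm
                continue
            window.append(1 if mae_u < delta else 0)   # step S20.8 as written
            levels.append(grade(sum(window)))
        return levels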
3. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 2, characterized in that, in step S1, during collection of the historical data set, the data interval lies within normal operation of the heat shrink tube winding machine to guarantee the validity of the data; the data sources are the winding motor speed, winding motor active power, winding motor voltage, winding motor temperature, winder tension, ambient temperature and ambient humidity sampled at the same moment, with the parameters of one sampling moment forming one group; the sampling interval is set to 30 s, and the heat shrink tube winding machine is sampled continuously for 2 months to guarantee a sufficient number of samples.
4. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 3, characterized in that step S2, in order to improve data quality, remove redundant information, handle missing values and normalize the data, specifically comprises the following steps:
Step S2.1: data cleaning; a moving window is combined with the anomaly thresholds of the various parameters, with the moving window size set to n; first, all collected parameters are traversed, and duplicate values in the data set are located and deleted; second, the data of n consecutive time steps are compared with their corresponding anomaly thresholds to judge whether the number of anomalies is less than m; if so, Newton polynomial interpolation is used for imputation; otherwise, the other feature parameter data of the same time step as the abnormal data are deleted directly; the Newton interpolation formula is as follows:
N_n(x) = f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + ... + f[x_0, x_1, ..., x_n](x - x_0)(x - x_1)...(x - x_{n-1}) (1)
In formulas (1)-(2), (x_i, f(x_i)) are the normal sample data within the window n, f[x_0, x_1, ..., x_n] is the n-th order divided difference, whose value does not depend on the order of the indices, x_i is the time coordinate of the abnormal data, and N_n(x) is the value inserted after Newton interpolation; in actual interpolation only the result N_{n-1}(x) of the previous sampling needs to be stored, and the next value is obtained as the sum of N_{n-1}(x) and the new term, saving computation and storage;
Step S2.2: Z-Score data normalization; normalizing the data accelerates the gradient-descent search for the optimal solution during model training and improves model accuracy, the data being mapped to a common scale; the calculation formula is as follows:
y′ = (y - ȳ) / σ (3)
In formula (3), y is the data value, i.e. the initial value before normalization, ȳ is the mean of the data set, σ is the standard deviation, and y′ is the corresponding normalized value.
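As a sketch of the cleaning and normalization steps of claim 4, the code below implements Newton divided-difference interpolation (formulas (1)-(2)) and the Z-Score transform of formula (3). The function names and the ddof convention of the standard deviation are assumptions, not specified in the patent.

    import numpy as np

    def newton_interpolate(x_known, y_known, x_query):
        """Newton divided-difference interpolation used to fill an abnormal
        sample at time coordinate x_query from the normal samples in the window."""
        x = np.asarray(x_known, dtype=float)
        coef = np.asarray(y_known, dtype=float).copy()
        n = len(x)
        # coef[j] becomes the divided difference f[x_0, ..., x_j]
        for j in range(1, n):
            coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
        # evaluate the Newton form with a Horner-like scheme
        result = coef[-1]
        for j in range(n - 2, -1, -1):
            result = result * (x_query - x[j]) + coef[j]
        return result

    def z_score(values):
        """Z-Score normalization of formula (3): y' = (y - mean) / std."""
        v = np.asarray(values, dtype=float)
        return (v - v.mean()) / v.std()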
5. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 4, characterized in that, in step S3, Pearson correlation coefficient analysis is used to screen out the parameters strongly correlated with the heat shrink tube winding motor temperature, namely the winding motor speed, winding motor active power, winding motor voltage, winder tension, ambient temperature and winding motor temperature, 6 groups of parameters in total, as the sample data for subsequent model training and testing; the screened sample data are integrated into the matrix form of formula (10), in which Z ∈ R^(6×s) is the sample data matrix obtained by integrating the 6 groups of historical data, where n is the winding motor speed set, P is the winding motor active power set, U is the winding motor voltage set, F is the winding motor tension set, K is the ambient temperature set, T is the winding motor temperature set, and s is the s-th time step of the data set.
6. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 5, characterized in that, in step S4, the preprocessed Z-matrix sample data set is randomly divided, according to the set conditions, into a training set and a test set, the training set accounting for 80% and being used to train the improved CNN-LSTM model, and the test set accounting for 20% and being used to evaluate the improved CNN-LSTM model;
in step S5, the network model structure comprises 2 one-dimensional convolutional layers, 1 fully connected layer and 2 LSTM layers; after the convolutional layers, a BN layer and the Leaky-ReLU activation function perform data normalization and the nonlinear operation, thereby accelerating model convergence, alleviating the vanishing-gradient problem and extracting the deep features of the factors influencing the winding motor temperature; the extracted deep features are input into the LSTM to better predict the feature information.
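A minimal Keras sketch of the network structure of claim 6 (step S5); the patent does not name a framework. The input is assumed to be a window of time steps over the six screened channels of the Z matrix; all layer sizes, the kernel width, the pooling size and the optimizer are illustrative placeholders for the hyperparameters that IPSO later tunes.

    from tensorflow.keras import layers, models

    def build_cnn_lstm(time_steps=10, n_features=6,
                       n_filters=32, lstm1=16, lstm2=16, dense_units=8, dropout=0.2):
        """Two 1-D convolution blocks (Conv1D + BN + Leaky-ReLU), two LSTM layers,
        one fully connected layer and a single temperature output."""
        model = models.Sequential([
            layers.Input(shape=(time_steps, n_features)),
            layers.Conv1D(n_filters, kernel_size=3, padding="same"),
            layers.BatchNormalization(),
            layers.LeakyReLU(),
            layers.Conv1D(n_filters, kernel_size=3, padding="same"),
            layers.BatchNormalization(),
            layers.LeakyReLU(),
            layers.MaxPooling1D(pool_size=2),
            layers.LSTM(lstm1, return_sequences=True),
            layers.LSTM(lstm2),
            layers.Dropout(dropout),
            layers.Dense(dense_units, activation="relu"),
            layers.Dense(1),   # predicted winding-motor temperature
        ])
        model.compile(optimizer="adam", loss="mse")
        return model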
7. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 1, characterized in that, in step S5, the parameters of the convolutional layer are the weights of the convolution kernels and the bias of each channel, and the mathematical model of the convolution operation of the convolutional neural network CNN is given by formula (11), in which the left-hand side is the j-th output feature map of the m-th layer, M_{m-1} is the input feature set, m denotes the m-th layer of the CNN, the kernel term is the convolution kernel used in the m-th layer convolution, the bias term is the bias of the j-th feature map, * denotes the convolution operation, and f(·) denotes the activation function;
the pooling layer performs a pooling operation on the features extracted by the convolutional layer, in order to reduce the feature dimension and the number of network parameters and, to a certain extent, prevent model overfitting; max pooling is adopted, taking the maximum value of the pooling region to obtain local features, so that the resulting feature map is more sensitive to texture features; for the output X_i of the k-th filter of the convolutional layer in the i-th dimension, the max pooling operation is expressed by formula (12), in which p_i(j) is the j-th output of the pooling layer and w is the width of the pooling kernel;
as a variant of the recurrent neural network RNN, the LSTM introduces gating units to alleviate the vanishing- and exploding-gradient problems of RNN training and improve prediction accuracy; at time t, the current memory cell C_t is the core of the LSTM neuron; the gating structure of the network consists of a forget gate, an input gate and an output gate, which determine the transmission of information to C_t; within one time step, the LSTM neuron obtains C_t and the other state quantities of the cell through the following calculations:
f_t = σ(W_f[h_{t-1}, x_t] + b_f) (13)
i_t = σ(W_i[h_{t-1}, x_t] + b_i) (14)
o_t = σ(W_o[h_{t-1}, x_t] + b_o) (15)
C̃_t = tanh(W_C[h_{t-1}, x_t] + b_C) (16)
C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t (17)
h_t = o_t ⊙ tanh(C_t) (18)
In formulas (13)-(18), f_t, i_t and o_t are the states of the forget gate, input gate and output gate at time t, C_{t-1} is the LSTM memory cell at time t-1, C̃_t and C_t are the candidate memory cell and the memory cell of the LSTM at the current time t, h_{t-1} is the LSTM output at time t-1, x_t is the LSTM input at time t, W_i, W_f, W_o, W_C and b_i, b_f, b_o, b_C are the weights and biases of the input gate, forget gate, output gate and candidate memory, σ(·) is the sigmoid activation function, tanh(·) is the hyperbolic tangent function, and ⊙ denotes element-wise multiplication.
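To make formulas (13)-(18) of claim 7 concrete, here is a single LSTM time step written with NumPy. The dictionary layout of the weights and the shapes are assumptions made for this sketch; formulas (16)-(17) are taken in their standard form.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        """One LSTM step, formulas (13)-(18).  W['f'], W['i'], W['o'], W['c']
        have shape (hidden, hidden + n_inputs); b['f'], ... have shape (hidden,)."""
        z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
        f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, (13)
        i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, (14)
        o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, (15)
        c_hat = np.tanh(W["c"] @ z + b["c"])     # candidate memory, (16)
        c_t = f_t * c_prev + i_t * c_hat         # memory update, (17)
        h_t = o_t * np.tanh(c_t)                 # hidden output, (18)
        return h_t, c_t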
8. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 7, characterized in that step S6 specifically comprises the following steps:
Step S6.1: initialize the IPSO hyperparameters and determine the initial values of the IPSO parameters;
Step S6.2: initialize the particle positions and velocities and randomly generate the population: x_i = (n_1, m_1, m_2, m_3, α, p, batchsize), where n_1 is the number of CNN convolution kernels, m_1 and m_2 are the numbers of neurons of the two LSTM hidden layers, m_3 is the number of neurons of the fully connected layer, α is the initial learning rate, p is the neuron dropout rate, and batchsize is the batch size; the value ranges of these parameters are as follows:
n_1 ∈ [2, 64], m_1 ∈ [1, 30], m_2 ∈ [1, 30], m_3 ∈ [1, 20], α ∈ [0.001, 0.005], p ∈ [0.01, 0.90], batchsize ∈ [1, 50].
9. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 8, characterized in that, in step S7, the Circle mapping is introduced to initialize the particle swarm and obtain a more uniform and diversified initial population, so as to improve the convergence speed and accuracy of the algorithm; the expression of the chaotic sequence generated by the Circle mapping is given by formula (21), in which X_i is the i-th chaotic sequence number, X_{i+1} is the (i+1)-th chaotic sequence number, e = 0.5, f = 0.2, and mod denotes the modulo operator; the population initialization based on the Circle mapping comprises the initialization of the velocities and the initialization of the positions, specifically:
v_{i,j} = v_lower_b + (v_up_b - v_lower_b) × X_{i,j} (22)
x_{i,j} = x_lower_b + (x_up_b - x_lower_b) × X′_{i,j} (23)
In formulas (22)-(23), v_lower_b and v_up_b are the lower and upper bounds of the particle velocity, x_lower_b and x_up_b are the lower and upper bounds of the particle position, and X_{i,j} and X′_{i,j} are the chaotic sequence values generated by the Circle mapping in the corresponding particle dimension.
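A sketch of the Circle-map initialization of claims 8-9. The patent gives e = 0.5 and f = 0.2, but formula (21) itself is not reproduced in the text, so the usual Circle-map form is assumed here; the seed values and function names are illustrative.

    import numpy as np

    def circle_map_sequence(n, x0=0.3, e=0.5, f=0.2):
        """Chaotic sequence, assumed standard form of formula (21):
        X_{i+1} = mod(X_i + e - (f / (2*pi)) * sin(2*pi*X_i), 1)."""
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = np.mod(x + e - (f / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)
            seq[i] = x
        return seq

    def init_swarm(n_particles, x_lower, x_upper, v_lower, v_upper):
        """Circle-map initialization of positions and velocities, formulas (22)-(23)."""
        x_lower = np.asarray(x_lower, dtype=float)
        x_upper = np.asarray(x_upper, dtype=float)
        dim = x_lower.size
        chaos_x = circle_map_sequence(n_particles * dim).reshape(n_particles, dim)
        chaos_v = circle_map_sequence(n_particles * dim, x0=0.7).reshape(n_particles, dim)
        positions = x_lower + (x_upper - x_lower) * chaos_x
        velocities = v_lower + (v_upper - v_lower) * chaos_v
        return positions, velocities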
10. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 9, characterized in that step S8 specifically comprises the following steps:
Step S8.1: introduce a dynamically and nonlinearly varying inertia weight ω, given by formula (24), in which ω(k) is the inertia weight at the k-th iteration, ω_max and ω_min are the maximum value 0.9 and the minimum value 0.4 of the inertia weight, t_max is the maximum number of iterations, and k is the iteration number;
Step S8.2: introduce asymmetric learning factors, comprising the individual learning factor c_1 and the social learning factor c_2, which are adaptively adjusted over the different search phases according to formulas (25)-(26), in which c_1(k) and c_2(k) are the individual and social learning factors at the k-th iteration, c_1_e and c_1_s are the initial value 2.5 and the final value 0.5 of the individual learning factor, c_2_e and c_2_s are the initial value 1 and the final value 2.25 of the social learning factor, t_max is the maximum number of iterations, and k is the iteration number.
11. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 10, characterized in that, in step S11, the optimized initial particles are taken as the parameters of the CNN-LSTM model, and the mean square error MSE is taken as the fitness of the IPSO algorithm, as given by formula (27), in which ŷ_i is the predicted value, y_i is the actual value, and N is the number of samples.
12. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 11, characterized in that, in step S17, the mean absolute error MAE threshold MAE_th_max, the mean absolute percentage error MAPE threshold MAPE_th_max, the root mean square error RMSE threshold RMSE_th_max and the coefficient of determination R² threshold R²_th_min are introduced as evaluation indexes to comprehensively evaluate the prediction performance of the IPSO-CNN-LSTM model.
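The exact nonlinear curves of formulas (24)-(26) in claim 10 are not reproduced in the text, so the sketch below uses simple stand-in schedules that respect the stated endpoints: ω decays nonlinearly from 0.9 to 0.4, c1 decays from 2.5 to 0.5 and c2 grows from 1 to 2.25 over t_max iterations. It illustrates the idea only and is not the patented formulation.

    def inertia_weight(k, t_max, w_max=0.9, w_min=0.4):
        """Nonlinearly decreasing inertia weight (stand-in quadratic decay for (24))."""
        r = k / float(t_max)
        return w_min + (w_max - w_min) * (1.0 - r) ** 2

    def learning_factors(k, t_max,
                         c1_start=2.5, c1_end=0.5, c2_start=1.0, c2_end=2.25):
        """Asymmetric learning factors (stand-in linear schedules for (25)-(26)):
        the individual factor c1 shrinks while the social factor c2 grows."""
        r = k / float(t_max)
        c1 = c1_start + (c1_end - c1_start) * r
        c2 = c2_start + (c2_end - c2_start) * r
        return c1, c2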
13. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 12, characterized in that step S18 specifically comprises the following steps:
Step S18.1: evaluation index calculation; calculate the evaluation indexes of the IPSO-CNN-LSTM model according to formulas (28)-(31), in which ŷ_i is the predicted value, y_i is the actual value, ȳ is the mean value, and N is the number of samples;
Step S18.2: threshold range judgment; sequentially compare whether each evaluation index of the IPSO-CNN-LSTM model lies within its designated threshold range; if the requirements of inequality group (32) are met, the prediction accuracy of the IPSO-CNN-LSTM model is sufficiently high, MAE_final at the end of model training is recorded as the absolute upper limit of the predicted residual alarm threshold, and execution continues downward; otherwise, step S14 is repeated.
14. The method for predicting and controlling the temperature of a heat shrink tube winding motor according to claim 13, characterized in that, in step S19, the specific calculation formula is:
δ = MAE_final × σ_test + μ_test (33)
where MAE_final is the final mean absolute error MAE of the optimal IPSO-CNN-LSTM prediction model, σ_test is the standard deviation of the test set, and μ_test is the mean of the test set.
CN202411401858.9A 2024-10-09 2024-10-09 A method for predicting and controlling the temperature of a heat shrink tube winding motor Pending CN119292059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411401858.9A CN119292059A (en) 2024-10-09 2024-10-09 A method for predicting and controlling the temperature of a heat shrink tube winding motor

Publications (1)

Publication Number Publication Date
CN119292059A true CN119292059A (en) 2025-01-10

Family

ID=94152769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411401858.9A Pending CN119292059A (en) 2024-10-09 2024-10-09 A method for predicting and controlling the temperature of a heat shrink tube winding motor

Country Status (1)

Country Link
CN (1) CN119292059A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738512A (en) * 2020-06-22 2020-10-02 昆明理工大学 A Short-Term Electricity Load Forecasting Method Based on CNN-IPSO-GRU Hybrid Model
CN114117895A (en) * 2021-11-09 2022-03-01 上海应用技术大学 Method for predicting temperature of permanent magnet synchronous motor rotor in real time
CN116029183A (en) * 2023-01-30 2023-04-28 江南大学 A power battery temperature prediction method based on iPSO-LSTM model
CN116701868A (en) * 2023-06-07 2023-09-05 桂林电子科技大学 A Probabilistic Prediction Method for Short-term Wind Power Range


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination