CN117709536B - A method and system for accurate prediction of industrial processes using deep recursive random configuration networks - Google Patents
- Publication number
- CN117709536B CN202311743288.7A CN202311743288A
- Authority
- CN
- China
- Prior art keywords
- output
- neurons
- reserve pool
- network
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16C—COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
- G16C20/00—Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
- G16C20/10—Analysis or design of chemical reactions, syntheses or processes
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16C—COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
- G16C20/00—Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
- G16C20/70—Machine learning, data mining or chemometrics
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for AC mains or AC distribution networks
- H02J3/003—Load forecast, e.g. methods or systems for forecasting future load demand
Abstract
The present invention belongs to the technical field of dynamic modeling, and in particular relates to a method and system for accurate prediction of industrial processes using a deep recursive random configuration network. The method models with stored historical information and only requires the data at the current moment as input, without acquiring the delay data of all orders. Under the condition that the root-mean-square error of the network output decreases, suitable neurons are found as reserve pool candidate neurons through the inequality constraint of the random configuration algorithm. The connection weights of a newly added neuron are redistributed: only the connection weights of the new neuron to itself and to the original neurons are retained, while the connection weights of the original neurons to the new neuron are set to zero, that is, no connection. After the new neuron is determined, the output weights are computed by least squares, and it is judged whether the reserve pool of the current layer has been constructed. The reserve pool of the next layer is then configured, and whether the network is complete is judged by the maximum number of layers, the maximum allowable number of reserve pool neurons, and the maximum allowable output error.
Description
Technical Field
The present invention belongs to the technical field of dynamic modeling, and in particular relates to a method and system for accurate prediction of industrial processes using a deep recursive random configuration network.
Background Art
Complex systems in industrial processes exhibit multivariable dynamic evolution. Every input variable has a certain delay, that is, the effect of the current input on the system appears only after a period of time. However, owing to the process environment, external disturbances and other factors, it is difficult to capture the delay information of all orders of the input variables, and the input order also changes over time. Effectively using the known variable information to accurately model such nonlinear dynamic systems with uncertain order is therefore very important.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Recurrent neural networks and long short-term memory networks can store part of the historical information thanks to the feedback connections between their neurons, and can therefore be used to model nonlinear dynamic systems with uncertain order. However, both networks must adjust the connection weights and biases by error gradient descent during training, so training is computationally expensive and time-consuming, easily falls into local optima, and suffers from vanishing or exploding gradients, resulting in low efficiency.
(2) Neurons in an echo state network also have feedback connections. Compared with a recurrent neural network, an echo state network computes the output weights by least squares, while the input weights and reserve pool connection weights are given randomly and never change. However, the echo state network suffers from blind selection of the network structure and sensitivity to the weight parameters, cannot guarantee the global approximation property of the model, and cannot adaptively adjust the range of the weights and biases according to the input data, which degrades the prediction accuracy of the model.
(3) The reserve pool of a recursive random configuration network also uses random connections, introduces inequality constraints, and adds reserve pool neurons in a supervised manner, which effectively avoids the above problems. However, compared with a multi-layer network model with a similar total number of neurons, the single-layer echo state network and the single-layer recursive random configuration network have higher computational complexity and slower training.
In the industrial field, the problems and defects of the above technologies lead to the following issues:
1. Insufficient real-time performance: since recurrent neural networks and long short-term memory (LSTM) networks must adjust the connection weights and biases by error gradient descent during training, training is computationally expensive and time-consuming, and is unsuitable for industrial applications that require fast response, such as real-time monitoring and fault detection.
2. Low prediction accuracy: because of sensitive weight parameters and the inability to guarantee the global approximation property of the model, the echo state network and the recursive random configuration network have low prediction accuracy and cannot meet the requirements of industrial applications that need high-precision prediction, such as quality control and load forecasting in energy systems.
3. Low resource efficiency: compared with a multi-layer neural network with a similar total number of neurons, a single-layer recursive random configuration network has higher computational complexity and slower training, resulting in low resource efficiency.
4. Insufficient stability and robustness: recurrent neural networks and long short-term memory networks easily fall into local optima and suffer from vanishing or exploding gradients, so the stability and robustness of the model are insufficient, which poses risks for industrial systems that must run stably under various operating conditions.
5. Difficulty in adjusting model parameters: because these models cannot adaptively adjust the range of the weights and biases according to the input data, model tuning is difficult and the models cannot quickly adapt to changes in the industrial system, which affects the operating efficiency and effectiveness of the system.
Summary of the Invention
In view of the problems in the prior art, the present invention provides a method and system for accurate prediction of industrial processes using a deep recursive random configuration network, aiming to solve the modeling problem of nonlinear dynamic systems with uncertain order and to improve the overall training speed and prediction accuracy of the model. The deep recursive random configuration network models with stored historical information and only requires the data at the current moment as input, without acquiring the delay data of all orders. Under the condition that the root-mean-square error of the network output decreases, suitable neurons are found as reserve pool candidate neurons through the inequality constraint of the random configuration algorithm. Unlike the original randomly connected reserve pool, the deep recursive random configuration network redistributes the connection weights of a newly added neuron: only the connection weights of the new neuron to itself and to the original neurons are retained, while the connection weights of the original neurons to the new neuron are set to zero, that is, no connection. After the new neuron is determined, the output weights are computed by least squares, and whether the reserve pool of the current layer is complete is judged by the maximum allowable number of reserve pool neurons and the maximum allowable output error. The same steps are then used to configure the next layer, advancing to deeper and deeper reserve pools, and whether the whole network is complete is judged by the maximum number of layers, the maximum allowable number of reserve pool neurons, and the maximum allowable output error.
The present invention is implemented as follows: a method for accurate prediction of industrial processes using a deep recursive random configuration network. The technical solution adopts a prediction method based on the deep recursive random configuration network (DeepRSCN); through data preprocessing, initialization of the network layers, random configuration of weights and biases, selection and optimization of reserve pool neurons under inequality constraints, and a projection algorithm updated with real-time data, dynamic system modeling with high adaptability and high prediction accuracy is achieved. The solution reduces the dependence on large amounts of labeled data, significantly improves computational efficiency, has good scalability and real-time performance, and is suitable for dynamic modeling of complex systems that require fast response and high-precision prediction.
Further, the method includes:
S1, data preprocessing;
S2, initialization of the deep recursive random configuration network;
S3, randomly configuring weights and biases, and constructing reserve pool candidate neurons with a new method;
S4, selecting candidate neurons in a supervised manner according to the inequality constraint;
S5, determining the optimal candidate neuron and adding it to the reserve pool;
S6, computing the output weights of the current layer from the target output and the reserve pool output;
S7, computing the error between the network output and the target output from the output weights and the reserve pool output, and repeating S3 to S6 until the error requirement is met or the maximum allowable number of reserve pool neurons of the current layer is reached;
S8, when the error requirement is not met, entering the next layer and configuring the reserve pool neurons of each layer in turn according to S3 to S7, until the error requirement is met or the maximum number of layers is reached;
S9, based on the projection algorithm, updating the trained output weights online with real-time data and computing the real-time output.
Further, S1 specifically includes: given a set of time-series data, the input samples are {u(1), u(2), ..., u(n_max)} = {(y(1), u(1)), (y(2), u(2)), ..., (y(n_max), u(n_max))} and the target output is T = {t(2), t(3), ..., t(n_max+1)}, where y and u are the system output and the controlled input, respectively. The present invention predicts the output y(n+1) at the next moment only from the current input (y(n), u(n)); n_max is the number of samples. Set the maximum number of layers S, the maximum allowable number of reserve pool neurons per layer N_max, the maximum allowable expected output error ε, the maximum number of generated candidate neurons G_max, and the input weight distribution parameter γ = {λ_min, λ_min+Δλ, ..., λ_max}.
Further, S2 specifically includes: initializing the output error vector e_0 = T and the model output error scaling factor 0 < r < 1. Assuming that each layer already contains a reserve pool with N neurons, the network model with different numbers of layers is
x^(1)(n) = g(W_in^(1) u(n) + W_r^(1) x^(1)(n−1) + b^(1)),
x^(j)(n) = g(W_in^(j) x^(j−1)(n) + W_r^(j) x^(j)(n−1) + b^(j)), j = 2, ..., S,
Y^(j) = W_out^(j) X^(j),
where j = 1, 2, ..., S; W_in^(j), W_r^(j) and W_out^(j) are the input weight, reserve pool connection weight and output weight matrices of the j-th layer; b^(j) is the bias matrix; x^(j)(n) is the reserve pool state vector at time n; g is the activation function; X^(j) = [x^(j)(1), x^(j)(2), ..., x^(j)(n)] is the reserve pool state matrix; and Y^(j) is the network output of the j-th layer.
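For illustration, the layered state update above can be sketched as follows in Python/NumPy. This is a minimal sketch, not the patented implementation: the function names, the tanh activation and the zero initial state are assumptions.

```python
import numpy as np

def reservoir_states(U, W_in, W_r, b, g=np.tanh):
    """Run one reserve pool over an input sequence U of shape (n_steps, dim_in);
    returns the state matrix X of shape (n_steps, N)."""
    x = np.zeros(W_r.shape[0])
    X = []
    for u in U:                          # x(n) = g(W_in u(n) + W_r x(n-1) + b)
        x = g(W_in @ u + W_r @ x + b)
        X.append(x)
    return np.array(X)

def deep_states(U, layers, g=np.tanh):
    """Layer 1 is driven by the external input u(n); layer j > 1 by the states of layer j-1."""
    inp, all_states = U, []
    for (W_in, W_r, b) in layers:
        X = reservoir_states(inp, W_in, W_r, b, g)
        all_states.append(X)
        inp = X                          # states are fed forward to the next layer
    return all_states
```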
Further, S3 specifically includes: from γ = {λ_min, λ_min+Δλ, ..., λ_max}, randomly select the candidate input weights w_in^(N+1), reserve pool connection weights w_r^(N+1) and bias b_(N+1) on [−λ, λ], substitute them into the activation function g(W_in^(1) u(n) + W_r^(1) x^(1)(n−1) + b^(1)) (first layer) or g(W_in^(j) x^(j−1)(n) + W_r^(j) x^(j)(n−1) + b^(j)) (j-th layer), and obtain G_max groups of candidate neurons g_(N+1)^1, g_(N+1)^2, ..., g_(N+1)^(G_max).
Further, S4 specifically includes: substitute the G_max groups of candidate neurons into the inequality constraint of the random configuration algorithm:
ξ_(N+1,q) = ⟨e_(N_sum,q), g_(N+1)⟩² / (g_(N+1)^T g_(N+1)) − (1 − r − μ_(N+1)) e_(N_sum,q)^T e_(N_sum,q) ≥ 0, q = 1, 2, ..., m,
where e_(N_sum) = [e_(N_sum,1), ..., e_(N_sum,m)] is the output error of all reserve pool neurons of the current layers (for a single-layer structure N_sum = N; for the j-th layer N_sum is the total number of neurons of the previous j−1 layers plus the N neurons of the current layer), m is the output dimension, and the non-negative real sequence {μ_(N+1)} satisfies lim_(N→∞) μ_(N+1) = 0 and μ_(N+1) ≤ 1 − r. The candidate neurons that satisfy the inequality constraint are selected.
Further, S5 specifically includes: define a set of variables ξ_(N+1) = Σ_(q=1)^m ξ_(N+1,q); among the candidates, select the neuron that satisfies the inequality constraint and maximizes ξ_(N+1) as the optimal reserve pool candidate neuron.
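A hedged sketch of the supervised screening in S4 and S5, written against the constraint given above: e is the current output error of shape (n_samples, m), and each candidate g is the state sequence of one tentative neuron of shape (n_samples,). The helper names are illustrative.

```python
import numpy as np

def xi_values(e, g, r, mu):
    """xi_{N+1,q} = <e_q, g>^2 / <g, g> - (1 - r - mu) <e_q, e_q> for each output dimension q."""
    return (e.T @ g) ** 2 / (g @ g) - (1.0 - r - mu) * np.sum(e * e, axis=0)

def pick_best_candidate(e, candidates, r, mu):
    """Keep candidates with xi_q >= 0 for every q and return the index of the one
    with the largest xi_{N+1} = sum_q xi_{N+1,q}; None if no candidate is admissible."""
    best, best_score = None, -np.inf
    for idx, g in enumerate(candidates):
        xi = xi_values(e, g, r, mu)
        if np.all(xi >= 0) and xi.sum() > best_score:
            best, best_score = idx, xi.sum()
    return best
```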
Further, S6 specifically includes: add the optimal reserve pool candidate neuron to the network model, compute the output weights W_out of the neural network model directly from the target output by least squares, compute the output root-mean-square error of the model, take the root-mean-square error as the loss function, and update the output error of the model e_0 = T − Y and the number of reserve pool neurons N = N + 1.
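The least-squares step in S6 can be written with a pseudo-inverse. A minimal sketch, assuming the row-samples convention (X of shape (n_samples, N_sum), T of shape (n_samples, m)):

```python
import numpy as np

def output_weights_and_error(X, T):
    """W_out = argmin_W ||T - X W|| via the Moore-Penrose pseudo-inverse."""
    W_out = np.linalg.pinv(X) @ T        # shape (N_sum, m)
    E = T - X @ W_out                    # residual, used as the new output error e_0
    rmse = np.sqrt(np.mean(E ** 2))      # root-mean-square error as the loss
    return W_out, E, rmse
```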
Further, S7 specifically includes: if the output root-mean-square error ||e_0||_2 of the current network model is greater than the maximum allowable expected output error ε and the number of reserve pool neurons N is less than the maximum allowable number N_max, repeat S3 to S6; if ||e_0||_2 is greater than ε and the number of reserve pool neurons N of the current layer equals N_max, enter the next layer to continue training, update the number of reserve pool neurons N = 0 and the number of reserve pool layers j = j + 1, and repeat S3 to S6; if ||e_0||_2 is less than ε, or j = S and the number of reserve pool neurons N of the current layer equals N_max, training ends and a deep network model satisfying the constraints of random configuration theory is obtained.
Further, S9 specifically includes: based on the projection algorithm, update the trained output weights online with real-time data and compute the real-time output:
W_out(n) = W_out(n−1) + a e(n) g(n)^T / (c + g(n)^T g(n)), e(n) = t(n) − W_out(n−1) g(n),
where 0 < a < 1 and c > 0 are two constants, g(n) is the reserve pool output at time n, and e(n) is the output error before correction.
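A minimal sketch of the projection-algorithm update in S9, following the form written above; it uses the same row-samples convention as the earlier sketch, so W_out has shape (N_sum, m) and appears transposed relative to the equations. The default values of a and c are assumptions.

```python
import numpy as np

def projection_update(W_out, g, t, a=0.5, c=1.0):
    """One online correction: W_out(n) = W_out(n-1) + a g e^T / (c + g^T g)."""
    e = t - W_out.T @ g                          # output error before correction
    W_out = W_out + a * np.outer(g, e) / (c + g @ g)
    y = W_out.T @ g                              # real-time output after correction
    return W_out, y
```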
Another object of the present invention is to provide a dynamic modeling system for complex systems in industrial processes that applies the above method for accurate prediction of industrial processes using a deep recursive random configuration network, comprising:
a preprocessing module for data preprocessing;
an initialization module for initializing the deep recursive random configuration network;
a neuron construction module for randomly configuring weights and biases and constructing reserve pool candidate neurons with a new method;
a candidate neuron selection module for selecting candidate neurons in a supervised manner according to the inequality constraint;
an optimal neuron determination module for determining the optimal candidate neuron and adding it to the reserve pool;
an output weight calculation module for computing the output weights of the current layer from the target output and the reserve pool output;
an online update module for updating the trained output weights online with real-time data based on the projection algorithm and computing the real-time output.
In combination with the above technical solutions and the technical problems solved, the advantages and positive effects of the claimed technical solutions are as follows:
First, the present invention provides a method and system for accurate prediction of industrial processes using a deep recursive random configuration network; when the input order is unknown, the historical information stored in the reserve pool is used for modeling. Compared with recurrent neural networks and long short-term memory networks, a supervisory mechanism is introduced in the incremental construction process to randomly configure the input weights and biases of the reserve pool neurons without iterative back-propagation updates, which avoids the local optima and vanishing or exploding gradients caused by updating the network parameters by gradient descent, and the network structure is determined during the configuration process. Compared with a deep echo state network, the deep recursive random configuration network can adaptively adjust the weights and biases according to the input data and can theoretically guarantee the global approximation property of the model, achieving zero-error approximation of the training data. Compared with a single-layer recursive random configuration network with a similar total number of nodes, the deep version can effectively improve the training speed and prediction accuracy of the model.
Second, the present invention provides a method and system for accurate prediction of industrial processes using a deep recursive random configuration network, which can form more suitable feature representations layer by layer; compared with a single-layer structure with a similar total number of nodes, it has faster computation and higher prediction accuracy. Combined with the model framework designed in the present invention, the overall computational load of the algorithm can be reduced while the prediction accuracy is guaranteed, which is suitable for application scenarios with high real-time requirements and has good application prospects in industrial artificial intelligence, smart healthcare, smart transportation, autonomous driving and other fields.
For dynamic data modeling problems, most existing work stops at proving the existence of the global approximation ability of a model instead of taking a constructive point of view. The present invention provides a dynamic modeling algorithm and system for complex systems in industrial processes that incrementally constructs a dynamic model with global approximation ability. Moreover, the model requires neither order identification nor gradient-descent updates of the weights and biases, can adaptively adjust the range of the weights and biases, and is suitable for dynamic modeling problems with uncertain order.
Third, the method for accurate prediction of industrial processes using a deep recursive random configuration network brings the following significant technical advances in industry:
1. Enhanced model learning ability
Thanks to the deep network structure, the model can learn more complex data distributions and dynamic characteristics, thereby improving prediction accuracy, which is particularly important in complex industrial processes.
2. Dynamic and adaptive characteristics
Through recurrent feedback, the reserve pool and online weight updates, the model can adjust and optimize itself during operation to adapt to dynamic changes in industrial processes.
3. Error monitoring and optimization
Through the multi-layer reserve pools and error feedback, the model can evaluate its performance in real time and adjust itself when necessary to meet the accuracy requirements.
4. Saving computing resources
The method is a randomized learning algorithm in which the input weights and biases are given randomly and no gradient descent is needed to update the weights; compared with traditional deep learning methods it requires fewer computing resources, which is very valuable in industrial environments with limited resources or requiring fast response.
5. Strengthened approximation ability of the model
By selecting candidate neurons in a supervised manner through the inequality constraint, the model has the global approximation property and can continuously approach the target output.
6. Hierarchical modeling
Since the model is multi-layered, it supports hierarchical modeling, which not only captures more complex system dynamics but also allows different optimization and control at different levels, increasing the flexibility of the model.
7. Real-time capability
By updating the output weights online, the model can adapt to new, unseen data and conditions in real time, which is very important for industrial fields that require real-time decision-making and control.
8. Reduced maintenance cost
The adaptive and real-time updating features reduce the complexity and cost of model maintenance, because the model can adjust itself to dynamically changing industrial environments and conditions.
The method for accurate prediction of industrial processes using a deep recursive random configuration network provided by the present invention has significant industrial application potential in improving model accuracy, saving computing resources, increasing adaptability and realizing real-time control.
Fourth, the significant technical advances brought by the method for accurate prediction of industrial processes using a deep recursive random configuration network described in the two embodiments of the present invention mainly include:
1) Enhanced adaptability of the model:
by constructing the reserve pool through a supervisory mechanism, determining the range of the weights and biases from the input data, and updating the output weights in real time with the projection algorithm, the model can dynamically adapt to system changes and improve the modeling accuracy for unknown or changing environments.
2) Improved prediction accuracy:
a data-driven selection of weights and biases is adopted instead of blind random selection, and the deep learning structure can better capture the nonlinear characteristics in the data, which significantly improves the prediction accuracy compared with traditional linear models or shallow networks.
3) Improved computational efficiency:
the model randomly configures the weights and biases during initialization and iteration, which reduces the amount of weight computation and improves computational efficiency.
4) Scalability and flexibility:
the hierarchical and modular design of the network structure allows the number of layers and neurons to be easily expanded or modified for different tasks.
5) Improved real-time performance:
the model can receive new data in real time and quickly adjust the model parameters, making dynamic modeling better suited to practical application needs, especially in scenarios with high real-time requirements.
In the embodiment of chemical reaction process prediction, these technical advances can achieve a more stable production process and higher product quality. In the embodiment of power system load forecasting, the scheduling and allocation of power resources can be optimized, energy waste can be reduced, and the operating efficiency of the power grid can be improved.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of the method for accurate prediction of industrial processes using a deep recursive random configuration network provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the dynamic modeling system for complex systems in industrial processes provided by an embodiment of the present invention;
Fig. 3 shows the verification of the effectiveness of the proposed method on the MG system;
Fig. 4 shows the fitting effect of each method on the nonlinear system identification data;
Fig. 5 shows the fitting effect of each method on the butane concentration data at the bottom of the debutanizer;
Fig. 6 shows the verification of the effectiveness of the proposed method on the butane concentration data at the bottom of the debutanizer.
Detailed Description of the Embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
Embodiment 1:
This embodiment provides a method for accurate prediction of industrial processes using a deep recursive random configuration network, applied to a complex nonlinear dynamic modeling scenario. In this embodiment, the method is used to predict the product concentration in a chemical reaction process.
Data preprocessing: collect the real-time operating data of the reactor, including temperature, pressure, raw material feed rate and other parameters, and normalize them (see the sketch after these steps).
Network initialization: set the initial parameters of the network, including the number of reserve pool neurons, the learning rate, and so on.
Random configuration of weights and biases: initialize the network weights and biases randomly and construct the reserve pool candidate neurons.
Selection of candidate neurons: according to the real-time data and the system performance requirements, use a genetic algorithm to select the candidate neurons best suited to the current data characteristics.
Adding the optimal candidate neuron: add the selected candidate neuron to the reserve pool to form a new network structure.
Output weight calculation: compute the output weights of the network by least squares.
Error calculation and feedback: compute the error from the real-time product concentration data and the network prediction, and dynamically adjust the network structure and parameters.
By repeating the above process until the error of the network output falls within an acceptable range, the dynamic modeling of the chemical reaction process is completed.
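As a small illustration of the preprocessing step of this embodiment, the following sketch normalizes the collected process variables to [0, 1]; the column layout (temperature, pressure, feed rate, ...) and min-max scaling are assumptions.

```python
import numpy as np

def minmax_normalize(data):
    """data: (n_samples, n_variables) raw process measurements; returns the scaled
    data together with the (min, max) pair needed to undo the scaling later."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo + 1e-12), (lo, hi)
```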
Embodiment 2:
In another embodiment, the accurate prediction method is used for power system load forecasting.
The specific implementation steps are as follows:
Data preprocessing: collect the historical load data of the power system, consider the influence of weather, holidays and other factors on the power load, and perform data cleaning and preprocessing.
Network initialization: set the number of network layers and the number of neurons in each layer according to the characteristics of the power system load.
Random configuration of weights and biases: generate the initial weights and biases of the network by randomization.
Screening of candidate neurons: use a sliding-window approach to dynamically select suitable candidate neurons from the past load data.
Determination of the optimal candidate neuron: use a simulated annealing algorithm to screen out the candidate neuron that maximizes the prediction accuracy.
Output weight calculation: optimize the output weights according to the target output by gradient descent.
Error calculation and adjustment: perform error analysis according to the deviation between the prediction results and the actual load, and gradually optimize the network structure.
By updating and adjusting the output weights in real time, the model can accurately predict the power load, thereby improving the efficiency and safety of power grid operation.
The present invention mainly improves on the following problems and defects of the prior art and achieves significant technical progress:
Insufficient prediction accuracy and adaptability: existing industrial prediction technologies may lack prediction accuracy and adaptability when processing dynamic, complex industrial data. By adopting the deep recursive random configuration network, the present invention significantly improves the prediction accuracy and adaptability.
Randomness and uncertainty of the weight and bias configuration: in a traditional randomly configured network, the random configuration of weights and biases may lead to unstable network performance. By selecting candidate neurons in a supervised manner and optimizing the reserve pool structure, the present invention reduces the uncertainty caused by this randomness.
Limitations of dynamic system modeling: existing technologies may have limitations when modeling dynamic systems in industrial processes. The present invention uses a projection algorithm updated with real-time data to enhance the modeling capability for dynamic systems.
The technical effects and significant technical progress of the present invention in solving the problems of the prior art are as follows:
Through the deep recursive random configuration network and a carefully designed initialization step, the present invention significantly improves prediction accuracy, which is crucial for the optimization and control of complex industrial processes. Through the projection algorithm updated with real-time data, the network can adapt to new data and changes, providing higher adaptability. By selecting the optimal candidate neurons in a supervised manner and automatically adjusting the reserve pool and the network layers, the present invention reduces the complexity of model training and tuning. By repeatedly adjusting and optimizing the reserve pool neurons in a multi-layer network structure, the present invention improves the depth of the network and its ability to handle complexity. Owing to its high accuracy and adaptability, the present invention is applicable to a variety of complex industrial scenarios, which improves its practicality and universality.
In view of the problems in the prior art, the present invention provides a method and system for accurate prediction of industrial processes using a deep recursive random configuration network. The present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the specific steps of the proposed method are as follows:
Step 1: data preprocessing.
Step 2: initialization of the deep recursive random configuration network.
Set the maximum number of layers S, the maximum allowable number of reserve pool neurons per layer N_max, the initial number of neurons N = 0, the maximum allowable expected output error ε, the maximum number of generated candidate neurons G_max, and the input weight distribution parameter γ = {λ_min, λ_min+Δλ, ..., λ_max}; initialize the model output error e_0 = T.
Step 3: from γ = {λ_min, λ_min+Δλ, ..., λ_max}, randomly select the candidate input weights w_in^(N+1), reserve pool connection weights w_r^(N+1) and bias b_(N+1), substitute them into the activation function g(W_in^(1) u(n) + W_r^(1) x^(1)(n−1) + b^(1)) (first layer) or g(W_in^(j) x^(j−1)(n) + W_r^(j) x^(j)(n−1) + b^(j)) (j-th layer), and obtain G_max groups of candidate neurons g_(N+1)^1, g_(N+1)^2, ..., g_(N+1)^(G_max).
Step 4: set 0 < r < 1 and the non-negative real sequence {μ_(N+1)} (with lim_(N→∞) μ_(N+1) = 0 and μ_(N+1) ≤ 1 − r), and substitute the G_max groups of candidate neurons into the inequality constraint of the random configuration algorithm:
ξ_(N+1,q) = ⟨e_(N_sum,q), g_(N+1)⟩² / (g_(N+1)^T g_(N+1)) − (1 − r − μ_(N+1)) e_(N_sum,q)^T e_(N_sum,q) ≥ 0, q = 1, 2, ..., m,
where e_(N_sum) is the output error of all reserve pool neurons of the current layers (for a single-layer structure N_sum = N; for the j-th layer N_sum is the total number of neurons of the previous j−1 layers plus the N neurons of the current layer). The candidate neurons that satisfy the inequality constraint are selected.
Step 5: define the set of variables ξ_(N+1) = Σ_(q=1)^m ξ_(N+1,q) and substitute the candidate neurons that satisfy the inequality constraint. If such candidates exist, select the candidate neuron with the largest ξ_(N+1), keep its weights w_in^(N+1), w_r^(N+1) and bias b_(N+1), and go to the next step; otherwise set λ = λ + Δλ and, while λ < λ_max, return to Step 3. If neither of the above two conditions is met, take τ ∈ (0, 1−r), set r = r + τ and λ = λ_min, and return to Step 3. Add the optimal candidate neuron to the reserve pool to obtain the latest connection weights and biases W_in^(j), W_r^(j) and b^(j).
Step 6: update the neural network model; from the reserve pool output and the target output, obtain the output weights W_out^(j) of the model by least squares, and finally update the model output error e_0 = T − Y^(j) and the number of reserve pool neurons N = N + 1.
Step 7: when the root-mean-square error of the model output ||e_0||_2 ≥ ε and N < N_max, return to Step 3; if ||e_0||_2 ≥ ε and N = N_max, enter the next layer to continue training, update the number of reserve pool neurons N = 0 and the number of reserve pool layers j = j + 1, and return to Step 3. If ||e_0||_2 < ε, or j = S and N = N_max, training of the deep network model ends.
Step 8: based on the projection algorithm, update the trained output weights W_out online with real-time data, compute the real-time output, and analyze the generalization performance of the model.
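Putting Steps 2 to 8 together, the following self-contained sketch outlines the incremental construction loop. It is a sketch under stated assumptions rather than the patented implementation: the tanh activation, the uniform sampling on [−λ, λ], the μ schedule and the simplified λ/r adjustment are all assumptions, and inputs are arranged as rows (U of shape (n_samples, dim_in), T of shape (n_samples, m)).

```python
import numpy as np

def candidate_states(inp, X, w_in, w_r, bias):
    """State sequence of a tentative new neuron: it receives the layer input, the
    previous-step states of the already-configured neurons (X) and its own previous
    state; by construction the states of the existing neurons are left unchanged."""
    n, N = inp.shape[0], X.shape[1]
    s, out = 0.0, np.empty(n)
    for t in range(n):
        x_prev = X[t - 1] if t > 0 else np.zeros(N)
        out[t] = np.tanh(w_in @ inp[t] + w_r[:N] @ x_prev + w_r[N] * s + bias)
        s = out[t]
    return out

def train_deep_rscn(U, T, S=3, N_max=50, eps=1e-2, G_max=10,
                    lambdas=(0.5, 1.0, 5.0), r=0.9, seed=0):
    """Incremental construction of a deep recursive random configuration network."""
    rng = np.random.default_rng(seed)
    layers, W_outs = [], []
    inp, E = U, T.astype(float)
    for j in range(S):                                   # configure layer j (Step 8)
        dim = inp.shape[1]
        W_in, W_r, b = np.zeros((0, dim)), np.zeros((0, 0)), np.zeros(0)
        X, W_out, N = np.zeros((U.shape[0], 0)), None, 0
        while np.linalg.norm(E) > eps and N < N_max:
            mu = (1.0 - r) / (N + 2)                     # non-negative sequence <= 1 - r
            best = None
            for lam in lambdas:                          # Step 3: draw G_max candidates
                for _ in range(G_max):
                    w_in = rng.uniform(-lam, lam, dim)
                    w_r = rng.uniform(-lam, lam, N + 1)  # links from old neurons + self
                    bias = rng.uniform(-lam, lam)
                    g = candidate_states(inp, X, w_in, w_r, bias)
                    xi = (E.T @ g) ** 2 / (g @ g) - (1 - r - mu) * np.sum(E * E, axis=0)
                    if np.all(xi >= 0) and (best is None or np.sum(xi) > best[0]):
                        best = (np.sum(xi), w_in, w_r, bias, g)   # Steps 4-5
                if best is not None:
                    break                                # otherwise enlarge lambda
            if best is None:
                break                                    # no admissible candidate found
            _, w_in, w_r, bias, g = best
            W_in = np.vstack([W_in, w_in])               # Step 5: grow the reserve pool
            W_r = np.vstack([np.hstack([W_r, np.zeros((N, 1))]), w_r[None, :]])
            b, X = np.append(b, bias), np.column_stack([X, g])
            W_out = np.linalg.pinv(X) @ T                # Step 6: least-squares weights
            E = T - X @ W_out
            N += 1
        layers.append((W_in, W_r, b))
        W_outs.append(W_out)
        if np.linalg.norm(E) <= eps:                     # Step 7: stop when accurate enough
            break
        inp = X                                          # deeper layer driven by the states
    return layers, W_outs, E
```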
As shown in Fig. 2, the dynamic modeling system for complex systems in industrial processes provided by an embodiment of the present invention includes:
a preprocessing module for data preprocessing;
an initialization module for initializing the deep recursive random configuration network;
a neuron construction module for randomly configuring weights and biases and constructing reserve pool candidate neurons with a new method;
a candidate neuron selection module for selecting candidate neurons in a supervised manner according to the inequality constraint;
an optimal neuron determination module for determining the optimal candidate neuron and adding it to the reserve pool;
an output weight calculation module for computing the output weights of the current layer from the target output and the reserve pool output;
an online update module for updating the trained output weights online with real-time data based on the projection algorithm and computing the real-time output.
Preferably, the method for accurate prediction of industrial processes using a deep recursive random configuration network provided by an embodiment of the present invention includes:
S1, data preprocessing;
S2, initialization of the deep recursive random configuration network;
S3, randomly configuring weights and biases, and constructing reserve pool candidate neurons with a new method;
S4, selecting candidate neurons in a supervised manner according to the inequality constraint;
S5, determining the optimal candidate neuron and adding it to the reserve pool;
S6, computing the output weights of the current layer from the target output and the reserve pool output;
S7, computing the error between the network output and the target output from the output weights and the reserve pool output, and repeating S3 to S6 until the error requirement is met or the maximum allowable number of reserve pool neurons of the current layer is reached;
S8, when the error requirement is not met, entering the next layer and configuring the reserve pool neurons of each layer in turn according to S3 to S7, until the error requirement is met or the maximum number of layers is reached;
S9, based on the projection algorithm, updating the trained output weights online with real-time data and computing the real-time output.
S1 specifically includes: given a set of time-series data, the input samples are {u(1), u(2), ..., u(n_max)} = {(y(1), u(1)), (y(2), u(2)), ..., (y(n_max), u(n_max))} and the target output is T = {t(2), t(3), ..., t(n_max+1)}, where y and u are the system output and the controlled input, respectively. The present invention predicts the output y(n+1) at the next moment only from the current input (y(n), u(n)); n_max is the number of samples. Set the maximum number of layers S, the maximum allowable number of reserve pool neurons per layer N_max, the maximum allowable expected output error ε, the maximum number of generated candidate neurons G_max, and the input weight distribution parameter γ = {λ_min, λ_min+Δλ, ..., λ_max}.
S2 specifically includes: initializing the output error vector e_0 = T and the model output error scaling factor 0 < r < 1. Assuming that each layer already contains a reserve pool with N neurons, the network model with different numbers of layers is
x^(1)(n) = g(W_in^(1) u(n) + W_r^(1) x^(1)(n−1) + b^(1)),
x^(j)(n) = g(W_in^(j) x^(j−1)(n) + W_r^(j) x^(j)(n−1) + b^(j)), j = 2, ..., S,
Y^(j) = W_out^(j) X^(j),
where j = 1, 2, ..., S; W_in^(j), W_r^(j) and W_out^(j) are the input weights, reserve pool connection weights and output weights of the j-th layer; b^(j) is the bias; x^(j)(n) is the reserve pool state at time n; g is the activation function; X^(j) = [x^(j)(1), x^(j)(2), ..., x^(j)(n)] is the reserve pool state matrix; and Y^(j) is the network output of the j-th layer.
S3 specifically includes: from γ = {λ_min, λ_min+Δλ, ..., λ_max}, randomly select the candidate input weight matrix w_in^(N+1), reserve pool connection weight matrix w_r^(N+1) and bias matrix b_(N+1), substitute them into the activation function g(W_in^(1) u(n) + W_r^(1) x^(1)(n−1) + b^(1)) (first layer) or g(W_in^(j) x^(j−1)(n) + W_r^(j) x^(j)(n−1) + b^(j)) (j-th layer), and obtain G_max groups of candidate neurons g_(N+1)^1, g_(N+1)^2, ..., g_(N+1)^(G_max).
S4 specifically includes: substitute the G_max groups of candidate neurons into the inequality constraint of the random configuration algorithm:
ξ_(N+1,q) = ⟨e_(N_sum,q), g_(N+1)⟩² / (g_(N+1)^T g_(N+1)) − (1 − r − μ_(N+1)) e_(N_sum,q)^T e_(N_sum,q) ≥ 0, q = 1, 2, ..., m,
where e_(N_sum) is the output error of all reserve pool neurons of the current layers (for a single-layer structure N_sum = N; for the j-th layer N_sum is the total number of neurons of the previous j−1 layers plus the N neurons of the current layer), m is the output dimension, and the non-negative real sequence {μ_(N+1)} satisfies lim_(N→∞) μ_(N+1) = 0 and μ_(N+1) ≤ 1 − r. The candidate neurons that satisfy the inequality constraint are selected.
S5 specifically includes: define a set of variables ξ_(N+1) = Σ_(q=1)^m ξ_(N+1,q); among the candidates, select the neuron that satisfies the inequality constraint and maximizes ξ_(N+1) as the optimal reserve pool candidate neuron.
S6 specifically includes: add the optimal reserve pool candidate neuron to the network model, compute the output weights W_out of the neural network model directly from the target output by least squares, compute the output root-mean-square error of the model, take the root-mean-square error as the loss function, and update the output error of the model e_0 = T − Y and the number of reserve pool neurons N = N + 1.
S7 specifically includes: if the output root-mean-square error ||e_0||_2 of the current network model is greater than the maximum allowable expected output error ε and the number of reserve pool neurons N is less than the maximum allowable number N_max, repeat S3 to S6; if ||e_0||_2 is greater than ε and the number of reserve pool neurons N of the current layer equals N_max, enter the next layer to continue training, update the number of reserve pool neurons N = 0 and the number of reserve pool layers j = j + 1, and repeat S3 to S6; if ||e_0||_2 is less than ε, or j = S and the number of reserve pool neurons N of the current layer equals N_max, training ends and a deep network model satisfying the constraints of random configuration theory is obtained.
S9 specifically includes: based on the projection algorithm, update the trained output weights online with real-time data and compute the real-time output:
W_out(n) = W_out(n−1) + a e(n) g(n)^T / (c + g(n)^T g(n)), e(n) = t(n) − W_out(n−1) g(n),
where 0 < a < 1 and c > 0 are two constants, g(n) is the reserve pool output at time n, and e(n) is the output error before correction.
S3 to S6 are the core of the present invention. S3 links the input data with the weight parameters and can adaptively adjust the weight distribution to accelerate error convergence; this data-dependent parameter selection effectively improves the training speed and prediction accuracy of the model.
S3 adopts a completely new construction method for the reserve pool connection weights. For the (N+1)-th newly added neuron, only the connection weights from the new neuron to the original neurons, w_(N+1,1), ..., w_(N+1,N), and its own connection weight w_(N+1,N+1) are retained, while the connection weights from the original neurons to the new neuron are set to zero, w_(i,N+1) = 0, i = 1, ..., N, that is, no connection:
W_r^(N+1) = [[W_r^(N), 0], [w_(N+1,1) ... w_(N+1,N), w_(N+1,N+1)]],
where w_(i,j) denotes the connection weight from the i-th neuron to the j-th neuron. This construction guarantees that, each time a neuron is added to the reserve pool, the output states of the first N reserve pool neurons do not change; only the new neuron that reduces the model training error needs to be configured, which guarantees the global approximation property of the network model.
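A small sketch of how the reserve pool connection weight matrix grows under this construction; the helper name is illustrative. The new column is zero and the new row carries the newly configured weights w_(N+1,1), ..., w_(N+1,N+1), so the states of the first N neurons are left unchanged.

```python
import numpy as np

def grow_reservoir(W_r, new_row):
    """Append neuron N+1: zero column for the links set to 'no connection',
    new_row = [w_{N+1,1}, ..., w_{N+1,N}, w_{N+1,N+1}] for the retained links."""
    N = W_r.shape[0]
    top = np.hstack([W_r, np.zeros((N, 1))])
    return np.vstack([top, np.asarray(new_row, dtype=float)[None, :]])
```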
The principle behind the inequality constraint of the random configuration algorithm mentioned in S4 is restated as follows:
Assume that the vector space span(Γ) is dense in the L2 space and that, for every g ∈ Γ, 0 < ||g|| < b_g for some b_g ∈ R+. Given 0 < r < 1 and a non-negative real sequence {μ_(N_sum+1)} with lim_(N_sum→∞) μ_(N_sum+1) = 0 and μ_(N_sum+1) ≤ 1 − r, define
δ_(N_sum+1,q) = (1 − r − μ_(N_sum+1)) ||e_(N_sum,q)||², q = 1, 2, ..., m.
If the random basis function g_(N_sum+1) and the output weights constructed from it,
β_(N_sum+1,q) = ⟨e_(N_sum,q), g_(N_sum+1)⟩ / ||g_(N_sum+1)||²,
satisfy the following inequality constraint:
⟨e_(N_sum,q), g_(N_sum+1)⟩² ≥ b_g² δ_(N_sum+1,q), q = 1, 2, ..., m,
then lim_(N_sum→∞) ||T − Y_(N_sum+1)|| = 0, where T is the target output value and Y_(N_sum+1) is the predicted output of the model with N_sum+1 reserve pool neurons. That is, the constructed deep neural network model has the global approximation property.
S5 aims to find the candidate neuron that decreases the model training error fastest, thereby accelerating convergence.
The principle behind the computation of the output weights mentioned in S6 can be restated as follows: assume, as above, that the vector space spanned by Γ is dense in the L2 space, that 0 < ||g|| < bg for every g ∈ Γ, and that 0 < r < 1 and {μN} is a non-negative real number sequence with lim μN = 0 and μN ≤ 1 − r, with δN defined as before. If the output weights constructed from the random basis functions are evaluated by the global least-squares solution
β* = arg minβ ||T − Σj βj·gj||²,
and the inequality constraint above is satisfied, then β* is the optimal output weight.
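Assuming the least-squares evaluation above, the output weights can be computed in closed form from the collected reserve pool outputs; the small ridge term below is an added numerical safeguard for the sketch, not part of the stated method:

```python
import numpy as np

def output_weights_least_squares(G, T, reg=1e-8):
    """Global least-squares output weights: beta* = argmin ||G beta - T||^2.

    G : (n_samples, n_neurons) collected reserve pool outputs
    T : (n_samples, n_outputs) target outputs
    """
    A = G.T @ G + reg * np.eye(G.shape[1])   # regularized normal equations
    return np.linalg.solve(A, G.T @ T)
```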
Taking the Mackey-Glass (MG) chaotic system and a nonlinear dynamic system as examples:
Example 1: the Mackey-Glass (MG) chaotic system is a classical chaotic time series generated by the following differential equation with time delay:
dy(t)/dt = 0.2·y(t − τ) / (1 + y(t − τ)^10) − 0.1·y(t)
where, when τ > 16.8, the whole sequence is chaotic and aperiodic, neither converging nor diverging.
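A brief generator for the MG series under the parameter values written above (a = 0.2, b = 0.1, exponent 10, the standard choice consistent with the τ > 16.8 chaos threshold), using simple Euler integration, might look as follows:

```python
import numpy as np

def mackey_glass(n_steps, tau=17, dt=1.0, a=0.2, b=0.1, y0=1.2):
    """Euler integration of dy/dt = a*y(t-tau)/(1 + y(t-tau)**10) - b*y(t)."""
    delay = int(round(tau / dt))
    y = np.full(n_steps + delay, y0)          # constant initial history
    for t in range(delay, n_steps + delay - 1):
        y_tau = y[t - delay]
        y[t + 1] = y[t] + dt * (a * y_tau / (1.0 + y_tau ** 10) - b * y[t])
    return y[delay:]

series = mackey_glass(2000)                   # tau = 17 > 16.8: chaotic regime
```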
Example 2: system identification is an important topic in modern control theory and signal processing; the dynamic characteristics of a system are considered to be reflected in its changing input-output data. Since most real systems are nonlinear, nonlinear system identification is an important and complex problem. Consider the following nonlinear system, where the input in the training stage is u(n) = 1.05 × sin(n/45) and the test stage uses a separate excitation; the mathematical model is:
y(n+1) = 0.72y(n) + 0.025y(n−1)u(n−1) + 0.01u²(n−2) + 0.2u(n−3)
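For reference, a short routine that generates identification data from this model under the training-stage input u(n) = 1.05 × sin(n/45), with zero initial conditions assumed:

```python
import numpy as np

def simulate_plant(n_steps):
    """Simulate the benchmark plant with the training-stage excitation."""
    u = 1.05 * np.sin(np.arange(n_steps) / 45.0)
    y = np.zeros(n_steps + 1)                 # zero initial conditions assumed
    for n in range(3, n_steps):
        y[n + 1] = (0.72 * y[n] + 0.025 * y[n - 1] * u[n - 1]
                    + 0.01 * u[n - 2] ** 2 + 0.2 * u[n - 3])
    return u, y[:n_steps]

u_train, y_train = simulate_plant(4000)
```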
Example 3: the debutanizer process is an important part of the desulfurization and naphtha separation unit in petroleum refining. The process uses seven auxiliary variables, namely the top temperature u1, top pressure u2, top reflux flow u3, top product outflow u4, sixth-tray temperature u5, bottom temperature 1 u6 and bottom temperature 2 u7, to predict the butane concentration y at the bottom of the column. Its nonlinear model can be described as:
y(n) = f(u1(n), u2(n), u3(n), u4(n), u5(n), u5(n−1), u5(n−2), u5(n−3), (u6(n)+u7(n))/2, y(n−1), y(n−2), y(n−3), y(n−4))
Since the model order is unknown, the inputs of the above model are redefined as
y(n) = f(u1(n), u2(n), u3(n), u4(n), u5(n), u6(n), u7(n), y(n−1))
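A small helper that assembles this reformulated input vector from recorded auxiliary variables and the lagged output might look like this (the column layout is an assumption for the sketch):

```python
import numpy as np

def build_regressors(U, y):
    """Assemble the order-free inputs x(n) = [u1(n)..u7(n), y(n-1)] -> y(n).

    U : (n_samples, 7) auxiliary variables u1..u7
    y : (n_samples,) butane concentration; the first sample is dropped
        because of the lagged output."""
    X = np.hstack([U[1:], y[:-1, None]])   # u1..u7 at n, plus y(n-1)
    return X, y[1:]
```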
In the experiments, the embodiments of the present invention compare the ESN, a two-layer ESN (DeepESN2), a three-layer ESN (DeepESN3) and the RSCN with the proposed method (a two-layer RSCN, DeepRSCN2, and a three-layer RSCN, DeepRSCN3). The normalized root-mean-square error (NRMSE) is selected as the evaluation index of network prediction performance; a common way of computing it is sketched below.
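One usual NRMSE definition normalizes the RMSE by the standard deviation of the targets; the patent's exact normalization is not restated here, so this is offered only as a reference implementation of that common convention:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the target standard deviation."""
    err = np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return err / np.std(y_true)
```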
Example 1: Mackey-Glass (MG) chaotic system
Analysis of the experimental results shows that both the training error and the test error of the deep recursive random configuration network are better than those of the single-layer model and the traditional echo state network model, and the prediction performance improves progressively as the number of layers increases. Figure 3 compares the training time of each model under different reserve pool sizes; the deep model remains the fastest to train, which verifies the effectiveness of the method proposed in this invention.
Example 2: Nonlinear system identification
Figure 4 shows the fitting results of the various methods; the fit obtained by the deep recursive random configuration network remains the best. Combined with the results in the table, DeepRSCN outperforms the other models on both the training and test sets, further verifying the effectiveness of the proposed method for dynamic nonlinear identification.
Example 3: Soft measurement of butane concentration at the bottom of the debutanizer
Figure 5 shows the prediction results of each neural network model; DeepRSCN3 has the smallest prediction error and its output fits the target values most closely. Figure 6 compares the training time of each model under different reserve pool sizes; the deep versions train faster, which verifies the effectiveness of the proposed method for soft measurement of industrial process parameters.
Applying the deep recursive random configuration network method for accurate prediction of industrial processes can bring significant technical advances, including higher prediction accuracy, faster training, better robustness and more efficient use of resources. Two specific application examples follow.
Application Example 1: Industrial robot fault prediction
In the field of industrial robots, fault prediction is an important task. The proposed accurate prediction method based on the deep recursive random configuration network can improve the accuracy and timeliness of fault prediction, thereby avoiding unexpected downtime and improving production efficiency.
1. Data preprocessing: collect the various sensor signals of the industrial robot, such as temperature, pressure and vibration, and standardize or normalize the data for model training.
2. Deep recursive random configuration network initialization: follow steps S2-S9 for network initialization, reserve pool neuron configuration, optimal neuron determination, output weight computation and online updating of the real-time output.
3. Fault prediction: use the trained model to compute the real-time output of the industrial robot and predict whether a fault will occur, as sketched below.
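A minimal sketch of such a monitoring loop, assuming a trained DeepRSCN predictor exposed through hypothetical predict and update methods and a scalar health indicator; the normalization and threshold logic are illustrative only:

```python
import numpy as np

def normalize(X, mean=None, std=None):
    """Z-score normalization used before training and again at run time."""
    mean = X.mean(axis=0) if mean is None else mean
    std = (X.std(axis=0) + 1e-12) if std is None else std
    return (X - mean) / std, mean, std

def monitor(model, sensor_stream, mean, std, threshold):
    """Flag a fault whenever the one-step-ahead residual exceeds a threshold.

    `model.predict` and `model.update` stand in for the trained DeepRSCN
    predictor and its projection-based online correction (hypothetical API)."""
    for x_raw, y_measured in sensor_stream:
        x = (np.asarray(x_raw) - mean) / std
        y_hat = model.predict(x)              # real-time output
        model.update(x, y_measured)           # S9 online weight correction
        if abs(y_hat - y_measured) > threshold:
            yield "fault suspected", y_hat, y_measured
```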
This method significantly improves both timeliness and prediction accuracy, so faults can be detected earlier, unexpected downtime is avoided and production efficiency is improved.
Application Example 2: Smart grid load forecasting
In the smart grid field, load forecasting is a key task. The proposed accurate prediction method based on the deep recursive random configuration network can improve the accuracy and timeliness of load forecasting, enabling more effective power dispatching and more efficient use of power resources.
1. Data preprocessing: collect the load data of the smart grid together with the factors that influence the load, such as weather conditions and holiday information, and standardize or normalize the data for model training.
2. Deep recursive random configuration network initialization: follow steps S2-S9 for network initialization, reserve pool neuron configuration, optimal neuron determination, output weight computation and online updating of the real-time output.
3. Load forecasting: use the trained model to compute the real-time output of the smart grid and forecast future load, building the model inputs as sketched below.
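As an illustration, the load, weather and calendar information could be arranged into model inputs as follows; the particular lags and features are assumptions made for the sketch:

```python
import numpy as np

def build_load_features(load, temperature, is_holiday, lags=(1, 2, 24)):
    """Stack lagged load values with weather and calendar flags into one
    input matrix; returns (features, targets)."""
    start = max(lags)
    rows = []
    for n in range(start, len(load)):
        rows.append([load[n - k] for k in lags]
                    + [temperature[n], float(is_holiday[n])])
    return np.asarray(rows), np.asarray(load[start:])
```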
This method significantly improves both timeliness and prediction accuracy, enabling more effective power dispatching and more efficient use of power resources. It should be noted that the embodiments of the present invention can be implemented by hardware, by software, or by a combination of software and hardware. The hardware part can be implemented using dedicated logic; the software part can be stored in a memory and executed by an appropriate instruction execution system, for example a microprocessor or specially designed hardware. A person of ordinary skill in the art will understand that the above devices and methods can be implemented using computer-executable instructions and/or processor control code, such code being provided, for example, on a carrier medium such as a disk, CD or DVD-ROM, in a programmable memory such as a read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules can be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices; they can also be implemented by software executed by various types of processors, or by a combination of such hardware circuits and software, for example firmware.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any modification, equivalent substitution or improvement made within the spirit and principles of the present invention, by any person skilled in the art and within the technical scope disclosed herein, shall be covered by the protection scope of the present invention.