
CN117649902B - Neural network modeling and hysteresis characteristics prediction method for smart material devices - Google Patents


Info

Publication number
CN117649902B
CN117649902B (application CN202311717246.6A)
Authority
CN
China
Prior art keywords
hidden layer
output data
calculation
inputting
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311717246.6A
Other languages
Chinese (zh)
Other versions
CN117649902A (en)
Inventor
王耿
陈国强
倪磊
赵冬梅
廖璇
张兰强
姚纳
周虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanzhida Technology Co ltd
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202311717246.6A priority Critical patent/CN117649902B/en
Publication of CN117649902A publication Critical patent/CN117649902A/en
Application granted granted Critical
Publication of CN117649902B publication Critical patent/CN117649902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C 60/00 Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or phenomena associated with their design, synthesis, processing, characterisation or utilisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Feedback Control In General (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract


The present invention discloses a neural network modeling and hysteresis characteristic prediction method for smart material devices, which includes the following steps: constructing a JA hysteresis model according to the physical characteristics and constraints of the smart material device and discretizing it to obtain a discretized JA hysteresis model; constructing and training a neural network according to the discretized JA hysteresis model and its structural relationships, and the signal transmission relationship between the input excitation of the discretized JA hysteresis model and the output response of the smart material device; and predicting the hysteresis characteristics with the trained neural network. The hysteresis neural network constructed by the present invention is small in scale and complexity, which facilitates network training and requires little data, short computation time and low computational cost; the prediction method of the present invention uses the trained neural network to predict the hysteresis characteristics of the smart material device, which reduces modeling error, improves accuracy, and can describe the nonlinear characteristics of the smart material device effectively, accurately and quickly.

Description

Neural network modeling and hysteresis characteristic prediction method for intelligent material device
Technical Field
The invention relates to the technical field of driving control engineering, in particular to a neural network modeling and hysteresis characteristic prediction method for an intelligent material device.
Background
The Jiles-Atherton model (JA hysteresis model) is a dynamic mathematical model for describing the nonlinear input-output relationships of various intelligent materials, including magnetic cores, piezoelectric actuators, shape memory alloys, and the like. The nonlinear dynamics between the input and output of these intelligent materials usually manifest as dynamic hysteresis. The ubiquity of dynamic hysteresis creates many difficulties for intelligent-material actuation applications: for example, dynamic hysteresis reduces the motion precision and response speed of piezoelectric actuators and shape memory alloy drivers, and it slows the control response of transformer cores and magnetorheological dampers or even destabilizes the system. Existing dynamic hysteresis models can be divided into physics-based models and phenomenological models, or alternatively into interpretable white-box models and hard-to-interpret black-box models. In general, models based on differential equations and integral operators have better interpretability but suffer from limited modeling capability, lower accuracy and high computational consumption. Neural-network-based models can approximate experimental data with higher accuracy and lower implementation difficulty, but they behave as unexplainable black boxes. Modeling with a multilayer feedforward neural network or a convolutional neural network, although accurate and robust, entails high computational cost, large data requirements, long running time and a lack of interpretability, so the prediction of the hysteresis characteristics of the intelligent material device is inefficient and its accuracy is limited.
Disclosure of Invention
Aiming at the above defects in the prior art, the present invention provides a neural network modeling and hysteresis characteristic prediction method for intelligent material devices, which solves the problems of low efficiency and low accuracy in predicting the hysteresis characteristics of intelligent material devices caused by the high calculation cost, large data volume, long running time and lack of interpretability of existing methods.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
Provided is a neural network modeling method for an intelligent material device, comprising the steps of:
S1, constructing a JA hysteresis model according to physical characteristics and constraint conditions of an intelligent material device;
S2, discretizing the JA hysteresis model to obtain a discretized JA hysteresis model;
S3, constructing a corresponding neural network according to the discrete JA hysteresis model and the structural relation thereof and the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device, and completing modeling.
Further, the smart material device in step S1 includes a piezoelectric actuator, a shape memory alloy driver, a transformer core, and a magneto-rheological damper.
Further, the formula of the JA hysteresis model in step S1 is:
Wherein H is the input variable and represents the magnetic field strength applied to the intelligent material device; M is the output variable and represents the magnetization output by the JA hysteresis model; Mrev represents the reversible magnetization of the JA hysteresis model; Mirr represents the irreversible magnetization of the JA hysteresis model; dMirr/dH represents the derivative of the irreversible magnetization Mirr with respect to the input magnetic field strength H; k represents the hysteresis energy loss factor; δ represents the parameter of the magnetic field change direction; α represents the anhysteretic shape parameter; Man represents the anhysteretic magnetization; c represents the magnetization weight coefficient; Ms represents the magnetic saturation factor; coth(·) represents the hyperbolic cotangent function; and a represents the magnetic domain coupling factor.
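The equation itself is rendered only as an image in the source text. For orientation, a commonly cited form of the Jiles-Atherton equations consistent with the variable list above is shown below; this reconstruction is an assumption and may differ in detail from the formula as filed:

$$
\begin{aligned}
M &= M_{\mathrm{rev}} + M_{\mathrm{irr}}, \qquad M_{\mathrm{rev}} = c\,(M_{\mathrm{an}} - M_{\mathrm{irr}}),\\
\frac{dM_{\mathrm{irr}}}{dH} &= \frac{M_{\mathrm{an}} - M_{\mathrm{irr}}}{k\,\delta - \alpha\,(M_{\mathrm{an}} - M_{\mathrm{irr}})},\\
M_{\mathrm{an}} &= M_s\left[\coth\!\left(\frac{H + \alpha M}{a}\right) - \frac{a}{H + \alpha M}\right].
\end{aligned}
$$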
Further, the formula of the discrete JA hysteresis model in step S2 is:
Where k represents the time index, Mk represents the output magnetization of the discrete JA hysteresis model at time k, Mk-1 represents the output magnetization of the discrete JA hysteresis model at time k-1, sinh(·) represents the hyperbolic sine function, and Hk-1 represents the input magnetic field strength of the discrete JA hysteresis model at time k-1.
Further, the structural relationship in the step S3 is the structural relationship between the parameters of the discrete JA hysteresis model and each item in the formula thereof; the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device in the step S3 is the transmission relation from the input signal to the output signal of the discrete JA hysteresis model.
Further, the neural network model of the smart material device in step S3 includes 8 input layers, 38 hidden layers, and one output layer; the 38 hidden layers and one output layer each include one neuron; the connection weight between layers is 1;
The input layer comprises eight input units connected in parallel, namely In1, In2, In3, In4, In5, In6, In7 and In8; the output layer is hidden layer 39;
the specific design process of the neural network model of the intelligent material device is as follows:
The input voltage sequence H and its derivative sequence dH, five input sequences consisting of the constant 1, and an input sequence consisting of the sampling time T are respectively fed to the eight input units;
respectively delaying the output data of the input unit In1, the output data of the input unit In2 and the output data of the hidden layer 39 to obtain corresponding delay data; respectively inputting output data of the input units In3 to In7 to the hidden layers 1 to 5 for calculation;
the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 2 are input to the hidden layer 12 for calculation; the delay data of the input unit In1 and the output data of the hidden layer 12 are input to the hidden layer 6 for calculation; inputting the output data of the hidden layer 1 to the hidden layer 37 for calculation; the delay data of the input unit In2 are input to the hidden layer 13 for calculation; inputting output data of the hidden layer 37 and the hidden layer 6 to the hidden layer 7 for calculation;
Respectively inputting the output data of the hidden layer 7 into the hidden layer 8 and the hidden layer 14 for calculation; inputting the output data of the hidden layer 8 into the hidden layer 10 for calculation; inputting the output data of the hidden layer 6 to the hidden layer 9 for calculation; inputting the output data of the hidden layer 10 and the output data of the hidden layer 1 into the hidden layer 11 for calculation;
Inputting the output data of the hidden layer 9 and the hidden layer 11 to the hidden layer 15 for calculation; inputting the output data of the hidden layer 9, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 16 for calculation; inputting the output data of the hidden layer 11, the output data of the hidden layer 1, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 17 for calculation; inputting the output data of the hidden layer 6, the output data of the hidden layer 14 and the output data of the hidden layer 4 into the hidden layer 18 for calculation; inputting the output data of the hidden layer 1 and the output data of the hidden layer 4 into the hidden layer 19 for calculation; the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 6 are input to the hidden layer 20 for calculation;
Inputting the output data of the hidden layer 15, the output data of the hidden layer 16 and the output data of the hidden layer 17 into the hidden layer 21 for calculation; inputting the output data of the hidden layer 18, the output data of the hidden layer 19 and the output data of the hidden layer 20 into the hidden layer 22 for calculation; inputting the output data of the hidden layer 3 into the hidden layer 23 for calculation; inputting the output data of the hidden layer 5 and the output data of the hidden layer 13 into the hidden layer 24 for calculation;
Inputting the output data of the hidden layer 6, the output data of the hidden layer 23 and the output data of the hidden layer 24 into the hidden layer 27 for calculation; inputting the output data of the hidden layer 21 to the hidden layer 38 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 15 into the hidden layer 29 for calculation;
Inputting the output data of the input unit In8, the output data of the hidden layer 23, the output data of the hidden layer 22 and the delay data of the input unit In2 to the hidden layer 25 for calculation; inputting the output of the hidden layer 22 and the output data of the hidden layer 2 into the hidden layer 26 for calculation; inputting the output data of the hidden layer 26 and the output of the hidden layer 27 to the hidden layer 28 for calculation; the output data of the hidden layer 9, the output data of the hidden layer 4, the output data of the hidden layer 3, the delay data of the input unit In2 and the output data of the input unit In8 are input into the hidden layer 31 for calculation; the output data of the hidden layer 11, the output data of the hidden layer 4, the output data of the hidden layer 3, the output data of the hidden layer 1, the delay data of the input unit In2 and the output data of the input unit In8 are input to the hidden layer 32 for calculation;
Inputting the output data of hidden layer 28 to hidden layer 36 for calculation; inputting the output data of the hidden layer 31 and the output data of the hidden layer 38 to the hidden layer 34 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 32 into the hidden layer 35 for calculation; inputting the output data of the hidden layer 25 and the output data of the hidden layer 36 into the hidden layer 30 for calculation; inputting the output data of the hidden layer 29 and the output data of the hidden layer 30 to the hidden layer 33 for calculation; the output data of the hidden layer 33, the output data of the hidden layer 34, the output data of the hidden layer 35 and the delay data corresponding to the hidden layer 39 are input to the hidden layer 39 for calculation, and the processing is completed.
Further, the activation functions of the hidden layers 1 to 5 are Sa = e^(τu), Sα = e^(τu), Sc = e^(τu), SMs = e^(τu) and Sk = e^(τu), respectively; the activation function of the hidden layer 8 is the hyperbolic sine function, i.e. sinh(u); the activation functions of the hidden layer 9 and the hidden layer 10 are square functions, i.e. u^2; the activation function of the hidden layer 13 is a saturation function, i.e. sat(u); the activation function of the hidden layer 14 is the hyperbolic cotangent function, i.e. coth(u); the activation function of the hidden layer 23 is 1-u; the hidden layer 36, the hidden layer 37 and the hidden layer 38 all use the same further activation function; the activation functions of the remaining hidden layers are linear activation functions; where u represents the input sequence of the hidden layer and τ represents the search step size parameter.
A hysteresis characteristic prediction method based on the above neural network modeling method for an intelligent material device comprises the following steps:
B1, obtaining a training voltage sequence data set and a training displacement sequence data set by applying amplitude-sweep excitation signals;
B2, taking the voltage sequence data set as input of a neural network model of the intelligent material device, taking the displacement sequence data set as a label, and training the neural network model of the intelligent material device to obtain a trained neural network model of the intelligent material device;
and B3, inputting hysteresis characteristic data to be predicted into a neural network model of the trained intelligent material device to obtain predicted displacement output data, and finishing the prediction of the hysteresis characteristic.
Further, in step B2 the LM algorithm is adopted to train the neural network, and the weights of the neural network are adjusted according to the mean square error objective function until a mean square error smaller than 10^-5 is obtained, yielding the trained neural network.
Further, the number of training steps was set to 500 steps.
The beneficial effects of the invention are as follows: the neural network of the modeling method has small scale and complexity, is beneficial to training of the network, reduces data volume and operation time and has low calculation cost; according to the prediction method, the trained neural network is utilized to predict the hysteresis characteristics of the intelligent material device, so that errors are reduced, accuracy is improved, and the nonlinear characteristics of the intelligent material device can be effectively described.
Drawings
FIG. 1 is a flowchart of a neural network modeling method of the present invention;
FIG. 2 is a schematic diagram of a neural network according to the present invention;
FIG. 3 is a flowchart showing a hysteresis characteristic prediction method according to the present invention;
FIG. 4 is a schematic diagram of an input-output dataset test;
FIG. 5 is a graph of input excitation signals;
FIG. 6 is a graph comparing measured output displacement with predicted output displacement;
FIG. 7 is a schematic error diagram of measured output displacement versus predicted output displacement;
FIG. 8 is a schematic diagram of measured and predicted hysteresis curves.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the present invention by those skilled in the art. It should be understood, however, that the present invention is not limited to the scope of the embodiments; for those skilled in the art, all inventions that make use of the inventive concept fall within the scope of protection defined by the appended claims.
As shown in fig. 1, a neural network modeling method for a smart material device includes the steps of:
S1, constructing a JA hysteresis model according to physical characteristics and constraint conditions of an intelligent material device;
S2, discretizing the JA hysteresis model to obtain a discretized JA hysteresis model;
S3, constructing a corresponding neural network according to the discrete JA hysteresis model and the structural relation thereof and the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device, and completing modeling.
The intelligent material device in the step S1 comprises a piezoelectric actuator, a shape memory alloy driver, a transformer magnetic core and a magneto-rheological damper.
The formula of the JA hysteresis model in step S1 is:
Wherein H is the input variable and represents the magnetic field strength applied to the intelligent material device; M is the output variable and represents the magnetization output by the JA hysteresis model; Mrev represents the reversible magnetization of the JA hysteresis model; Mirr represents the irreversible magnetization of the JA hysteresis model; dMirr/dH represents the derivative of the irreversible magnetization Mirr with respect to the input magnetic field strength H; k represents the hysteresis energy loss factor; δ represents the parameter of the magnetic field change direction; α represents the anhysteretic shape parameter; Man represents the anhysteretic magnetization; c represents the magnetization weight coefficient; Ms represents the magnetic saturation factor; coth(·) represents the hyperbolic cotangent function; and a represents the magnetic domain coupling factor.
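As an illustration of how equations of this form can be evaluated numerically, the following minimal sketch integrates the standard JA equations with an explicit Euler step; the parameter values, the integration scheme and all function names are placeholders chosen for this example and are not taken from the patent.

```python
import numpy as np

def simulate_ja(H, a=1000.0, alpha=1e-3, c=0.1, Ms=1.6e6, k=400.0):
    """Explicit-Euler sweep of the standard JA equations over a field sequence H."""
    H = np.asarray(H, dtype=float)
    M = np.zeros_like(H)
    Mirr = 0.0
    for i in range(1, len(H)):
        dH = H[i] - H[i - 1]
        delta = 1.0 if dH >= 0 else -1.0          # direction of field change
        He = H[i - 1] + alpha * M[i - 1]          # effective field
        x = He / a
        # anhysteretic magnetization (Langevin function), with a small-x guard
        Man = Ms * (1.0 / np.tanh(x) - 1.0 / x) if abs(x) > 1e-9 else 0.0
        dMirr_dH = (Man - Mirr) / (k * delta - alpha * (Man - Mirr))
        Mirr += dMirr_dH * dH                     # irreversible magnetization update
        M[i] = c * Man + (1.0 - c) * Mirr         # M = Mrev + Mirr with Mrev = c*(Man - Mirr)
    return M
```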
The formula of the discrete JA hysteresis model in step S2 is:
Where k represents the time index, Mk represents the output magnetization of the discrete JA hysteresis model at time k, Mk-1 represents the output magnetization of the discrete JA hysteresis model at time k-1, sinh(·) represents the hyperbolic sine function, and Hk-1 represents the input magnetic field strength of the discrete JA hysteresis model at time k-1.
The structural relation in the step S3 is the structural relation between the parameters of the discrete JA hysteresis model and each item in the formula; the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device in the step S3 is the transmission relation from the input signal to the output signal of the discrete JA hysteresis model.
As shown in fig. 2, the neural network model of the smart material device in step S3 includes 8 input layers, 38 hidden layers, and one output layer; the 38 hidden layers and one output layer each include one neuron; the connection weight between layers is 1;
The input layer comprises eight input units connected in parallel, namely In1, In2, In3, In4, In5, In6, In7 and In8; the output layer is hidden layer 39;
the specific design process of the neural network model of the intelligent material device is as follows:
The input voltage sequence H and its derivative sequence dH, five input sequences consisting of the constant 1, and an input sequence consisting of the sampling time T are respectively fed to the eight input units (a construction sketch is given after this design description);
respectively delaying the output data of the input unit In1, the output data of the input unit In2 and the output data of the hidden layer 39 to obtain corresponding delay data; respectively inputting output data of the input units In3 to In7 to the hidden layers 1 to 5 for calculation;
the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 2 are input to the hidden layer 12 for calculation; the delay data of the input unit In1 and the output data of the hidden layer 12 are input to the hidden layer 6 for calculation; inputting the output data of the hidden layer 1 to the hidden layer 37 for calculation; the delay data of the input unit In2 are input to the hidden layer 13 for calculation; inputting output data of the hidden layer 37 and the hidden layer 6 to the hidden layer 7 for calculation;
Respectively inputting the output data of the hidden layer 7 into the hidden layer 8 and the hidden layer 14 for calculation; inputting the output data of the hidden layer 8 into the hidden layer 10 for calculation; inputting the output data of the hidden layer 6 to the hidden layer 9 for calculation; inputting the output data of the hidden layer 10 and the output data of the hidden layer 1 into the hidden layer 11 for calculation;
Inputting the output data of the hidden layer 9 and the hidden layer 11 to the hidden layer 15 for calculation; inputting the output data of the hidden layer 9, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 16 for calculation; inputting the output data of the hidden layer 11, the output data of the hidden layer 1, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 17 for calculation; inputting the output data of the hidden layer 6, the output data of the hidden layer 14 and the output data of the hidden layer 4 into the hidden layer 18 for calculation; inputting the output data of the hidden layer 1 and the output data of the hidden layer 4 into the hidden layer 19 for calculation; the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 6 are input to the hidden layer 20 for calculation;
Inputting the output data of the hidden layer 15, the output data of the hidden layer 16 and the output data of the hidden layer 17 into the hidden layer 21 for calculation; inputting the output data of the hidden layer 18, the output data of the hidden layer 19 and the output data of the hidden layer 20 into the hidden layer 22 for calculation; inputting the output data of the hidden layer 3 into the hidden layer 23 for calculation; inputting the output data of the hidden layer 5 and the output data of the hidden layer 13 into the hidden layer 24 for calculation;
Inputting the output data of the hidden layer 6, the output data of the hidden layer 23 and the output data of the hidden layer 24 into the hidden layer 27 for calculation; inputting the output data of the hidden layer 21 to the hidden layer 38 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 15 into the hidden layer 29 for calculation;
Inputting the output data of the input unit In8, the output data of the hidden layer 23, the output data of the hidden layer 22 and the delay data of the input unit In2 to the hidden layer 25 for calculation; inputting the output of the hidden layer 22 and the output data of the hidden layer 2 into the hidden layer 26 for calculation; inputting the output data of the hidden layer 26 and the output of the hidden layer 27 to the hidden layer 28 for calculation; the output data of the hidden layer 9, the output data of the hidden layer 4, the output data of the hidden layer 3, the delay data of the input unit In2 and the output data of the input unit In8 are input into the hidden layer 31 for calculation; the output data of the hidden layer 11, the output data of the hidden layer 4, the output data of the hidden layer 3, the output data of the hidden layer 1, the delay data of the input unit In2 and the output data of the input unit In8 are input to the hidden layer 32 for calculation;
Inputting the output data of hidden layer 28 to hidden layer 36 for calculation; inputting the output data of the hidden layer 31 and the output data of the hidden layer 38 to the hidden layer 34 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 32 into the hidden layer 35 for calculation; inputting the output data of the hidden layer 25 and the output data of the hidden layer 36 into the hidden layer 30 for calculation; inputting the output data of the hidden layer 29 and the output data of the hidden layer 30 to the hidden layer 33 for calculation; the output data of the hidden layer 33, the output data of the hidden layer 34, the output data of the hidden layer 35 and the delay data corresponding to the hidden layer 39 are input to the hidden layer 39 for calculation, and the processing is completed.
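The following minimal sketch shows one way the eight parallel input sequences described above could be assembled; the function name and array layout are illustrative assumptions, since the patent specifies only the composition of the inputs.

```python
import numpy as np

def build_network_inputs(H, T):
    """Assemble the eight parallel input sequences: In1 = H, In2 = dH,
    In3..In7 = constant-1 sequences, In8 = sampling-time sequence."""
    H = np.asarray(H, dtype=float)
    dH = np.gradient(H, T)            # derivative sequence of the input voltage
    ones = np.ones_like(H)            # constant-1 sequences for the five JA parameters
    Ts = np.full_like(H, T)           # sequence made of the sampling time T
    # Rows In1..In8, all of equal length as required in the text
    return np.vstack([H, dH, ones, ones, ones, ones, ones, Ts])
```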
The activation functions of the hidden layers 1 to 5 are Sa = e^(τu), Sα = e^(τu), Sc = e^(τu), SMs = e^(τu) and Sk = e^(τu), respectively; the activation function of the hidden layer 8 is the hyperbolic sine function, i.e. sinh(u); the activation functions of the hidden layer 9 and the hidden layer 10 are square functions, i.e. u^2; the activation function of the hidden layer 13 is a saturation function, i.e. sat(u); the activation function of the hidden layer 14 is the hyperbolic cotangent function, i.e. coth(u); the activation function of the hidden layer 23 is 1-u; the hidden layer 36, the hidden layer 37 and the hidden layer 38 all use the same further activation function; the activation functions of the remaining hidden layers are linear activation functions; where u represents the input sequence of the hidden layer and τ represents the search step size parameter.
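A minimal sketch of these custom activations follows; the exact form of the saturation function sat(u) and of the shared activation of hidden layers 36 to 38 is not spelled out in the text, so the versions below are assumptions.

```python
import numpy as np

def make_activations(tau):
    """Illustrative implementations of the custom activation functions named above."""
    return {
        "exp":       lambda u: np.exp(tau * u),         # hidden layers 1-5 (Sa, Salpha, Sc, SMs, Sk)
        "sinh":      np.sinh,                           # hidden layer 8
        "square":    lambda u: u ** 2,                  # hidden layers 9 and 10
        "sat":       lambda u: np.clip(u, -1.0, 1.0),   # hidden layer 13 (assumed unit saturation)
        "coth":      lambda u: np.cosh(u) / np.sinh(u), # hidden layer 14
        "one_minus": lambda u: 1.0 - u,                 # hidden layer 23
        "linear":    lambda u: u,                       # remaining hidden layers
    }
```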
As shown in fig. 3, a hysteresis characteristic prediction method based on the above neural network modeling method for an intelligent material device includes the following steps:
B1, obtaining a training voltage sequence data set and a training displacement sequence data set by applying amplitude-sweep excitation signals;
B2, taking the voltage sequence data set as input of a neural network model of the intelligent material device, taking the displacement sequence data set as a label, and training the neural network model of the intelligent material device to obtain a trained neural network model of the intelligent material device;
and B3, inputting hysteresis characteristic data to be predicted into a neural network model of the trained intelligent material device to obtain predicted displacement output data, and finishing the prediction of the hysteresis characteristic.
In step B2, the LM algorithm is adopted to train the neural network, and the weights of the neural network are adjusted according to the mean square error objective function until a mean square error smaller than 10^-5 is obtained, thereby obtaining the trained neural network.
The number of training steps was set to 500 steps.
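As an illustration of this training step, the sketch below fits the network weights with a Levenberg-Marquardt least-squares routine and then checks the mean-square-error target; the use of scipy's least_squares, the function names and the predict interface are assumptions made for this example, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def train_lm(predict, w0, U, Y, max_steps=500, mse_target=1e-5):
    """Fit weights w so that predict(w, U) matches the displacement labels Y.

    predict(w, U) -> predicted displacement sequence for weights w,
    U is the training voltage sequence, Y the training displacement sequence.
    """
    def residuals(w):
        return predict(w, U) - Y

    sol = least_squares(residuals, w0, method="lm", max_nfev=max_steps)
    mse = float(np.mean(sol.fun ** 2))        # mean square error of the fit
    return sol.x, mse, mse < mse_target       # weights, MSE, whether the 1e-5 target is met
```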
In one embodiment of the invention, the JA hysteresis model is built based on the energy balance principle, and can be used for describing the magnetization process of the soft magnetic intelligent material, and the mathematical model is as follows:
Carrying out the derivation, the following is obtained:
The final formula is as follows:
Wherein dH/dt denotes the derivative of the magnetic field strength H of the intelligent material device with respect to time t, dM/dt denotes the derivative of the output magnetization M of the JA hysteresis model with respect to time t, f(·) denotes the part of the derived formula other than dH/dt, and para denotes the five constant parameters, namely the magnetic domain coupling factor a, the anhysteretic shape parameter α, the magnetization weight coefficient c, the magnetic saturation factor Ms and the hysteresis energy loss factor k.
According to the backward difference, the following formula can be obtained:
wherein fk-1 and (dH/dt)k-1 are respectively the discrete values of the function f(·) and of dH/dt at sampling time k-1.
The formula of the discrete JA hysteresis model recited in the claims is finally obtained.
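For orientation, the backward-difference step consistent with these definitions can be written as follows; the exact discretization used in the patent appears only as an image in the source, so this is an assumed reconstruction:

$$
\frac{M_k - M_{k-1}}{T} \approx f_{k-1}\,\dot H_{k-1}
\quad\Longrightarrow\quad
M_k = M_{k-1} + T\, f_{k-1}\,\dot H_{k-1},
$$

where $T$ is the sampling period.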
In fig. 2, z^-1 represents a time delay; the eight network inputs have the same length; the output data of the input units In3 to In7 correspond respectively to the five parameters of the JA hysteresis model, namely the magnetic domain coupling factor a, the anhysteretic shape parameter α, the magnetization weight coefficient c, the magnetic saturation factor Ms and the hysteresis energy loss factor k, and their input weights, w1, w2, w3, w4 and w5 respectively, are adaptively adjusted during network training. The design of the five custom activation functions of the neural network depends on the value ranges of the five parameters of the JA hysteresis model; for example, an activation function in exponential form is set.
As shown in fig. 4, the main control computer sends out the excitation input signal, which is amplified by the power amplification module and applied to the intelligent driving material device to generate the corresponding output signal; the sensor then measures the output signal and transmits it to the main control computer for display and data processing. The excitation signal in this step should be a sweep signal of varying amplitude to excite the smart material device.
The neural network is not a traditional black-box model but a transparent network whose weight parameters have a clear physical meaning: the five weights obtained after training of the neural network are the values of the five parameters of the JA hysteresis model. Only the weights of the neurons corresponding to the parameters of the Jiles-Atherton hysteresis model are adaptively adjusted during network training; the other weights remain unchanged throughout training.
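The sketch below illustrates one way to read the physical JA parameters out of the trained weights. It assumes the reading in which each constant-1 input passes through the exponential activation e^(τu), so that a parameter equals exp(τ·wi); if the trained weights are instead taken as the parameter values directly, as this paragraph can also be read, the mapping reduces to the identity. Function and variable names are illustrative.

```python
import numpy as np

def recover_ja_parameters(weights, tau):
    """Map the five trained input weights w1..w5 to JA parameter values."""
    names = ["a", "alpha", "c", "Ms", "k"]            # ordering follows the text
    values = np.exp(tau * np.asarray(weights, dtype=float))
    return dict(zip(names, values))
```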
As shown in fig. 5 and fig. 6, a test was performed using a piezoelectric actuator. The piezoelectric actuator was excited by the input excitation signal of FIG. 5 to obtain measured displacement output data. The established neural network was trained on the input and output data obtained from the test to obtain the network weights, and the trained network was then used for prediction to obtain the predicted displacement output data; as can be seen from fig. 6, the measured and predicted displacement output data coincide completely. As shown in fig. 7, the maximum error between the measured and predicted displacement output data is less than 1.0% of full scale. As shown in FIG. 8, the hysteresis curves of the experimental test and the neural network prediction coincide completely, which shows that the hysteresis characteristics of the intelligent material device can be effectively described.
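For clarity, the full-scale error figure quoted above can be computed as in the following sketch; taking the full scale as the peak-to-peak range of the measured displacement is an assumption, since the patent does not define it explicitly.

```python
import numpy as np

def max_error_percent_full_scale(measured, predicted):
    """Maximum absolute prediction error expressed as a percentage of full scale."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    full_scale = measured.max() - measured.min()      # assumed full-scale definition
    return 100.0 * np.max(np.abs(predicted - measured)) / full_scale
```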
Table 1. Modeling performance index data for multiple models
As can be seen from Table 1, the present invention is advantageous in both modeling accuracy and running time over the common rate-dependent Prandtl-Ishlinskii model, the rate-dependent Preisach model, the rate-dependent Bouc-Wen model, the rate-dependent Krasnosel'skii-Pokrovskii model and the Magic Formula hysteresis model.
In conclusion, the neural network of the modeling method has small scale and complexity, is beneficial to training of the network, reduces data volume and operation time and has low calculation cost; according to the prediction method, the trained neural network is utilized to predict the hysteresis characteristics of the intelligent material device, so that errors are reduced, accuracy is improved, and the nonlinear characteristics of the intelligent material device can be effectively described.

Claims (6)

1. A neural network modeling method for an intelligent material device, characterized by comprising the following steps:
S1, constructing a JA hysteresis model according to physical characteristics and constraint conditions of an intelligent material device;
S2, discretizing the JA hysteresis model to obtain a discretized JA hysteresis model;
S3, constructing a corresponding neural network according to the discrete JA hysteresis model and the structural relation thereof and the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device, and completing modeling;
the formula of the JA hysteresis model in the step S1 is as follows:
Wherein M is an output variable and represents the magnetization output by the JA hysteresis model; Mrev represents the reversible magnetization of the JA hysteresis model; Mirr represents the irreversible magnetization of the JA hysteresis model; H represents the magnetic field strength input to the intelligent material device; dMirr/dH represents the derivative of the irreversible magnetization Mirr with respect to the input magnetic field strength H of the intelligent material device; k represents the hysteresis energy loss factor; δ represents the parameter of the magnetic field change direction; α represents the anhysteretic shape parameter; Man represents the anhysteretic magnetization; c represents the magnetization weight coefficient; Ms represents the magnetic saturation factor; coth(·) represents the hyperbolic cotangent function; and a represents the magnetic domain coupling factor;
The formula of the discrete JA hysteresis model in the step S2 is as follows:
Wherein k represents the time index, Mk represents the output magnetization of the discrete JA hysteresis model at time k, Mk-1 represents the output magnetization of the discrete JA hysteresis model at time k-1, sinh(·) represents the hyperbolic sine function, and Hk-1 represents the input magnetic field strength of the discrete JA hysteresis model at time k-1;
The structural relation in the step S3 is the structural relation between the parameters of the discrete JA hysteresis model and each item in the formula; the signal transfer relation between the discrete JA hysteresis model input excitation and the intelligent material device output response in the step S3 is the transfer relation from the discrete JA hysteresis model input signal to the output signal;
the neural network model of the intelligent material device in the step S3 comprises 8 input layers, 38 hidden layers and one output layer; each of the 38 hidden layers and one of the output layers includes one neuron; the connection weight between layers is 1;
the input layer comprises eight input units connected in parallel, namely In1, In2, In3, In4, In5, In6, In7 and In8; the output layer is hidden layer 39;
the specific design process of the neural network model of the intelligent material device is as follows:
The input voltage sequence H and its derivative sequence dH, five input sequences consisting of the constant 1, and an input sequence consisting of the sampling time T are respectively fed to the eight input units;
respectively delaying the output data of the input unit In1, the output data of the input unit In2 and the output data of the hidden layer 39 to obtain corresponding delay data; respectively inputting output data of the input units In3 to In7 to the hidden layers 1 to 5 for calculation;
the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 2 are input to the hidden layer 12 for calculation; the delay data of the input unit In1 and the output data of the hidden layer 12 are input to the hidden layer 6 for calculation; inputting the output data of the hidden layer 1 to the hidden layer 37 for calculation; the delay data of the input unit In2 are input to the hidden layer 13 for calculation; inputting output data of the hidden layer 37 and the hidden layer 6 to the hidden layer 7 for calculation;
Respectively inputting the output data of the hidden layer 7 into the hidden layer 8 and the hidden layer 14 for calculation; inputting the output data of the hidden layer 8 into the hidden layer 10 for calculation; inputting the output data of the hidden layer 6 to the hidden layer 9 for calculation; inputting the output data of the hidden layer 10 and the output data of the hidden layer 1 into the hidden layer 11 for calculation;
Inputting the output data of the hidden layer 9 and the hidden layer 11 to the hidden layer 15 for calculation; inputting the output data of the hidden layer 9, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 16 for calculation; inputting the output data of the hidden layer 11, the output data of the hidden layer 1, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 17 for calculation; inputting the output data of the hidden layer 6, the output data of the hidden layer 14 and the output data of the hidden layer 4 into the hidden layer 18 for calculation; inputting the output data of the hidden layer 1 and the output data of the hidden layer 4 into the hidden layer 19 for calculation; the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 6 are input to the hidden layer 20 for calculation;
Inputting the output data of the hidden layer 15, the output data of the hidden layer 16 and the output data of the hidden layer 17 into the hidden layer 21 for calculation; inputting the output data of the hidden layer 18, the output data of the hidden layer 19 and the output data of the hidden layer 20 into the hidden layer 22 for calculation; inputting the output data of the hidden layer 3 into the hidden layer 23 for calculation; inputting the output data of the hidden layer 5 and the output data of the hidden layer 13 into the hidden layer 24 for calculation;
Inputting the output data of the hidden layer 6, the output data of the hidden layer 23 and the output data of the hidden layer 24 into the hidden layer 27 for calculation; inputting the output data of the hidden layer 21 to the hidden layer 38 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 15 into the hidden layer 29 for calculation;
Inputting the output data of the input unit In8, the output data of the hidden layer 23, the output data of the hidden layer 22 and the delay data of the input unit In2 to the hidden layer 25 for calculation; inputting the output of the hidden layer 22 and the output data of the hidden layer 2 into the hidden layer 26 for calculation; inputting the output data of the hidden layer 26 and the output of the hidden layer 27 to the hidden layer 28 for calculation; the output data of the hidden layer 9, the output data of the hidden layer 4, the output data of the hidden layer 3, the delay data of the input unit In2 and the output data of the input unit In8 are input into the hidden layer 31 for calculation; the output data of the hidden layer 11, the output data of the hidden layer 4, the output data of the hidden layer 3, the output data of the hidden layer 1, the delay data of the input unit In2 and the output data of the input unit In8 are input to the hidden layer 32 for calculation;
Inputting the output data of hidden layer 28 to hidden layer 36 for calculation; inputting the output data of the hidden layer 31 and the output data of the hidden layer 38 to the hidden layer 34 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 32 into the hidden layer 35 for calculation; inputting the output data of the hidden layer 25 and the output data of the hidden layer 36 into the hidden layer 30 for calculation; inputting the output data of the hidden layer 29 and the output data of the hidden layer 30 to the hidden layer 33 for calculation; the output data of the hidden layer 33, the output data of the hidden layer 34, the output data of the hidden layer 35 and the delay data corresponding to the hidden layer 39 are input to the hidden layer 39 for calculation, and the processing is completed.
2. The neural network modeling method for smart material devices of claim 1, wherein: the intelligent material device in the step S1 comprises a piezoelectric actuator, a shape memory alloy driver, a transformer magnetic core and a magneto-rheological damper.
3. The neural network modeling method for smart material devices of claim 1, wherein: the activation functions of the hidden layers 1 to 5 are Sa = e^(τu), Sα = e^(τu), Sc = e^(τu), SMs = e^(τu) and Sk = e^(τu), respectively; the activation function of the hidden layer 8 is the hyperbolic sine function, i.e. sinh(u); the activation functions of the hidden layer 9 and the hidden layer 10 are square functions, i.e. u^2; the activation function of the hidden layer 13 is a saturation function, i.e. sat(u); the activation function of the hidden layer 14 is the hyperbolic cotangent function, i.e. coth(u); the activation function of the hidden layer 23 is 1-u; the hidden layers 36, 37 and 38 all use the same further activation function; the activation functions of the remaining hidden layers are linear activation functions; where u represents the input sequence of the hidden layer and τ represents the search step size parameter.
4. A hysteresis characteristic prediction method based on the neural network modeling method for an intelligent material device according to any one of claims 1 to 3, characterized in that: the method comprises the following steps:
B1, obtaining a training voltage sequence data set and a training displacement sequence data set by applying amplitude-sweep excitation signals;
B2, taking the voltage sequence data set as input of a neural network model of the intelligent material device, taking the displacement sequence data set as a label, and training the neural network model of the intelligent material device to obtain a trained neural network model of the intelligent material device;
and B3, inputting hysteresis characteristic data to be predicted into a neural network model of the trained intelligent material device to obtain predicted displacement output data, and finishing the prediction of the hysteresis characteristic.
5. The hysteresis characteristic prediction method according to claim 4, wherein: in step B2, the LM algorithm is adopted to train the neural network, and the weights of the neural network are adjusted according to a mean square error objective function until a mean square error smaller than 10^-5 is obtained, thereby obtaining the trained neural network.
6. The hysteresis characteristic prediction method according to claim 5, characterized in that: the training step number is set to 500 steps.
CN202311717246.6A 2023-12-13 2023-12-13 Neural network modeling and hysteresis characteristics prediction method for smart material devices Active CN117649902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311717246.6A CN117649902B (en) 2023-12-13 2023-12-13 Neural network modeling and hysteresis characteristics prediction method for smart material devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311717246.6A CN117649902B (en) 2023-12-13 2023-12-13 Neural network modeling and hysteresis characteristics prediction method for smart material devices

Publications (2)

Publication Number Publication Date
CN117649902A CN117649902A (en) 2024-03-05
CN117649902B (en) 2024-06-28

Family

ID=90049375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311717246.6A Active CN117649902B (en) 2023-12-13 2023-12-13 Neural network modeling and hysteresis characteristics prediction method for smart material devices

Country Status (1)

Country Link
CN (1) CN117649902B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118862511B (en) * 2024-08-13 2025-08-12 哈尔滨工业大学 Modeling method of piezoelectric fast-tilting mirror

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6047605A (en) * 1997-10-21 2000-04-11 Magna-Lastic Devices, Inc. Collarless circularly magnetized torque transducer having two phase shaft and method for measuring torque using same
CN110245430A (en) * 2019-06-18 2019-09-17 吉林大学 Improved Bouc-Wen Model Hysteresis Modeling Method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5842194A (en) * 1995-07-28 1998-11-24 Mitsubishi Denki Kabushiki Kaisha Method of recognizing images of faces or general images using fuzzy combination of multiple resolutions
CN111897212B (en) * 2020-06-09 2022-05-31 吉林大学 Multi-model combined modeling method of magnetic control shape memory alloy actuator
CN115587562A (en) * 2022-10-11 2023-01-10 国网甘肃省电力公司电力科学研究院 Iron loss model analysis method considering tidal current reverse transmission under new energy access

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6047605A (en) * 1997-10-21 2000-04-11 Magna-Lastic Devices, Inc. Collarless circularly magnetized torque transducer having two phase shaft and method for measuring torque using same
CN110245430A (en) * 2019-06-18 2019-09-17 吉林大学 Improved Bouc-Wen Model Hysteresis Modeling Method

Also Published As

Publication number Publication date
CN117649902A (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN113051831B (en) Modeling method of machine tool thermal error self-learning prediction model and thermal error control method
CN104238366B (en) The forecast Control Algorithm and device of piezoelectric ceramic actuator based on neuroid
CN104991997B (en) The broad sense rate correlation P-I hysteresis modeling methods of adaptive differential evolution algorithm optimization
CN117649902B (en) Neural network modeling and hysteresis characteristics prediction method for smart material devices
CN107607105B (en) Nonlinear Error Compensation Method for Fiber Optic Gyroscope Based on Fractional Differential
CN105159081A (en) Intelligent control method of steering engine electro-hydraulic loading system
CN103941589B (en) A kind of nonlinear model predictive control method of piezo actuator
Jiaqiang et al. Design of the H∞ robust control for the piezoelectric actuator based on chaos optimization algorithm
Wu et al. Neural network based adaptive control for a piezoelectric actuator with model uncertainty and unknown external disturbance
Jiang et al. Intelligent feedforward hysteresis compensation and tracking control of dielectric electro-active polymer actuator
Yu et al. Hysteresis nonlinearity modeling and position control for a precision positioning stage based on a giant magnetostrictive actuator
CN112526876B (en) Design method of LQG controller of LPV system based on data driving
CN106682728A (en) Duhem-based piezoelectric actuator neural network parameter identification method
CN117649903B (en) Dynamic hysteresis neural network modeling and prediction method for intelligent material device
CN115408931A (en) Vortex vibration response prediction method based on deep learning
CN115034504A (en) Tool wear state prediction system and method based on cloud-edge collaborative training
CN114879506A (en) Bridge crane model prediction control method based on Koopman operator
CN119759109A (en) A flow control method and device based on particle swarm algorithm optimization of fuzzy PID
CN117908395A (en) A piezoelectric ceramic drive compensation control method
Xu et al. Adaptive parameter estimation with convergence analysis for the Prandtl–Ishlinskii hysteresis operator
Qiao et al. A PID tuning strategy based on a variable weight beetle antennae search algorithm for hydraulic systems
CN119388440B (en) A soft robot control method integrating attention mechanism and physics-inspired neural network
CN112712131A (en) Game theory framework-based neural network model lifelong learning method
Baziyad et al. Generalization Enhancement of Operator-LSSVM-Based Hysteresis Model Using Improved Particle Swarm Optimization for Piezoelectric Actuators
CN120124681B (en) Efficient prediction method and device for hydraulic motor friction torque driven by physical data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250606

Address after: 518000 1002, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Wanzhida Technology Co.,Ltd.

Country or region after: China

Address before: 621000 Fucheng District, Mianyang, Sichuan

Patentee before: Southwest University of Science and Technology

Country or region before: China