Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a neural network modeling and hysteresis characteristic prediction method for an intelligent material device, which solves the problems of low efficiency and low accuracy in predicting the hysteresis characteristic of an intelligent material device caused by the high calculation cost, large data volume, long running time and poor interpretability of the prior art.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
Provided is a neural network modeling method for an intelligent material device, comprising the steps of:
S1, constructing a JA hysteresis model according to physical characteristics and constraint conditions of an intelligent material device;
S2, discretizing the JA hysteresis model to obtain a discrete JA hysteresis model;
S3, constructing a corresponding neural network according to the discrete JA hysteresis model, its structural relation, and the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device, thereby completing the modeling.
Further, the intelligent material device in step S1 includes a piezoelectric actuator, a shape memory alloy actuator, a transformer core and a magnetorheological damper.
Further, the formula of the JA hysteresis model in step S1 is:
Wherein H is the input variable and represents the intensity of the magnetic field applied to the intelligent material; M is the output variable and represents the magnetization output by the JA hysteresis model; M_rev represents the reversible magnetization of the JA hysteresis model; M_irr represents the irreversible magnetization of the JA hysteresis model; dM_irr/dH represents the derivative of the irreversible magnetization M_irr with respect to the magnetic field intensity H input to the intelligent material device; k represents the hysteresis energy loss factor; δ represents the parameter of the magnetic field change direction; α represents the anhysteretic magnetization shape parameter; M_an represents the anhysteretic magnetization; c represents the magnetization weight coefficient; M_s represents the magnetic saturation factor; coth(·) represents the hyperbolic cotangent function; and a represents the magnetic domain coupling factor.
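For reference, a sketch of the classical Jiles-Atherton relations consistent with the definitions above is given below; it uses the customary textbook notation, in which the roles of a and α may be assigned differently from the description above, and is not presented as the exact expression claimed by the invention:

\[
M = M_{\mathrm{rev}} + M_{\mathrm{irr}}, \qquad M_{\mathrm{rev}} = c\,(M_{\mathrm{an}} - M_{\mathrm{irr}}),
\]
\[
M_{\mathrm{an}} = M_s\!\left[\coth\!\left(\frac{H + \alpha M}{a}\right) - \frac{a}{H + \alpha M}\right], \qquad
\frac{dM_{\mathrm{irr}}}{dH} = \frac{M_{\mathrm{an}} - M_{\mathrm{irr}}}{k\delta - \alpha\,(M_{\mathrm{an}} - M_{\mathrm{irr}})}, \qquad
\delta = \operatorname{sign}\!\left(\frac{dH}{dt}\right).
\]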
Further, the formula of the discrete JA hysteresis model in step S2 is:
Where k represents the sampling instant (time index), M_k represents the output magnetization of the discrete JA hysteresis model at time k, M_{k-1} represents the output magnetization of the discrete JA hysteresis model at time k-1, sinh(·) represents the hyperbolic sine function, and H_{k-1} represents the input magnetic field intensity of the discrete JA hysteresis model at time k-1.
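For illustration only, a backward-difference discretization of the continuous model with sampling time T can be sketched as follows, where f(·) collects the terms of dM/dH together with the five JA parameters, and the hyperbolic sine arises because d coth(u)/du = -1/sinh²(u):

\[
\left.\frac{dM}{dt}\right|_{k} \approx \frac{M_k - M_{k-1}}{T}, \qquad
\left.\frac{dH}{dt}\right|_{k} \approx \frac{H_k - H_{k-1}}{T}
\;\Longrightarrow\;
M_k \approx M_{k-1} + f\!\left(H_{k-1}, M_{k-1}\right)\bigl(H_k - H_{k-1}\bigr).
\]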
Further, the structural relation in step S3 is the structural relation between the parameters of the discrete JA hysteresis model and each term in its formula; the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device in step S3 is the transmission relation from the input signal to the output signal of the discrete JA hysteresis model.
Further, the neural network model of the intelligent material device in step S3 includes an input layer comprising eight input units, 38 hidden layers and one output layer; each of the 38 hidden layers and the output layer contains one neuron; the connection weights between layers are 1;
The input layer comprises eight input units connected in parallel, namely In1, In2, In3, In4, In5, In6, In7 and In8; the output layer is hidden layer 39;
the specific design process of the neural network model of the intelligent material device is as follows:
The input voltage sequence H and its derivative sequence dH, five input sequences consisting of the constant 1, and an input sequence consisting of the sampling time T are respectively input to the eight input units;
respectively delaying the output data of the input unit In1, the output data of the input unit In2 and the output data of the hidden layer 39 to obtain corresponding delay data; respectively inputting output data of the input units In3 to In7 to the hidden layers 1 to 5 for calculation;
the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 2 are input to the hidden layer 12 for calculation; the delay data of the input unit In1 and the output data of the hidden layer 12 are input to the hidden layer 6 for calculation; inputting the output data of the hidden layer 1 to the hidden layer 37 for calculation; the delay data of the input unit In2 are input to the hidden layer 13 for calculation; inputting output data of the hidden layer 37 and the hidden layer 6 to the hidden layer 7 for calculation;
Respectively inputting the output data of the hidden layer 7 into the hidden layer 8 and the hidden layer 14 for calculation; inputting the output data of the hidden layer 8 into the hidden layer 10 for calculation; inputting the output data of the hidden layer 6 to the hidden layer 9 for calculation; inputting the output data of the hidden layer 10 and the output data of the hidden layer 1 into the hidden layer 11 for calculation;
Inputting the output data of the hidden layer 9 and the hidden layer 11 to the hidden layer 15 for calculation; inputting the output data of the hidden layer 9, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 16 for calculation; inputting the output data of the hidden layer 11, the output data of the hidden layer 1, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 17 for calculation; inputting the output data of the hidden layer 6, the output data of the hidden layer 14 and the output data of the hidden layer 4 into the hidden layer 18 for calculation; inputting the output data of the hidden layer 1 and the output data of the hidden layer 4 into the hidden layer 19 for calculation; the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 6 are input to the hidden layer 20 for calculation;
Inputting the output data of the hidden layer 15, the output data of the hidden layer 16 and the output data of the hidden layer 17 into the hidden layer 21 for calculation; inputting the output data of the hidden layer 18, the output data of the hidden layer 19 and the output data of the hidden layer 20 into the hidden layer 22 for calculation; inputting the output data of the hidden layer 3 into the hidden layer 23 for calculation; inputting the output data of the hidden layer 5 and the output data of the hidden layer 13 into the hidden layer 24 for calculation;
Inputting the output data of the hidden layer 6, the output data of the hidden layer 23 and the output data of the hidden layer 24 into the hidden layer 27 for calculation; inputting the output data of the hidden layer 21 to the hidden layer 38 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 15 into the hidden layer 29 for calculation;
Inputting the output data of the input unit In8, the output data of the hidden layer 23, the output data of the hidden layer 22 and the delay data of the input unit In2 to the hidden layer 25 for calculation; inputting the output of the hidden layer 22 and the output data of the hidden layer 2 into the hidden layer 26 for calculation; inputting the output data of the hidden layer 26 and the output of the hidden layer 27 to the hidden layer 28 for calculation; the output data of the hidden layer 9, the output data of the hidden layer 4, the output data of the hidden layer 3, the delay data of the input unit In2 and the output data of the input unit In8 are input into the hidden layer 31 for calculation; the output data of the hidden layer 11, the output data of the hidden layer 4, the output data of the hidden layer 3, the output data of the hidden layer 1, the delay data of the input unit In2 and the output data of the input unit In8 are input to the hidden layer 32 for calculation;
Inputting the output data of hidden layer 28 to hidden layer 36 for calculation; inputting the output data of the hidden layer 31 and the output data of the hidden layer 38 to the hidden layer 34 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 32 into the hidden layer 35 for calculation; inputting the output data of the hidden layer 25 and the output data of the hidden layer 36 into the hidden layer 30 for calculation; inputting the output data of the hidden layer 29 and the output data of the hidden layer 30 to the hidden layer 33 for calculation; the output data of the hidden layer 33, the output data of the hidden layer 34, the output data of the hidden layer 35 and the delay data corresponding to the hidden layer 39 are input to the hidden layer 39 for calculation, and the processing is completed.
Further, the activation functions of the hidden layers 1 to 5 are S_a = e^{τu}, S_α = e^{τu}, S_c = e^{τu}, S_{Ms} = e^{τu} and S_k = e^{τu}, respectively; the activation function of the hidden layer 8 is a hyperbolic sine function, i.e. sinh(u); the activation functions of the hidden layer 9 and the hidden layer 10 are square functions, i.e. u^2; the activation function of the hidden layer 13 is a saturation function, i.e. sat(u); the activation function of the hidden layer 14 is a hyperbolic cotangent function, i.e. coth(u); the activation function of the hidden layer 23 is 1-u; the hidden layers 36, 37 and 38 all use the same activation function; the activation functions of the remaining hidden layers are linear activation functions; where u represents the input sequence of the hidden layer and τ represents the search step size parameter.
A method for predicting the hysteresis characteristic based on the above neural network modeling method for an intelligent material device comprises the following steps:
B1, obtaining a voltage sequence data set and a displacement sequence data set for training by adopting an amplitude-sweep excitation signal;
B2, taking the voltage sequence data set as input of a neural network model of the intelligent material device, taking the displacement sequence data set as a label, and training the neural network model of the intelligent material device to obtain a trained neural network model of the intelligent material device;
and B3, inputting the hysteresis characteristic data to be predicted into the trained neural network model of the intelligent material device to obtain predicted displacement output data, thereby completing the prediction of the hysteresis characteristic.
Further, in step B2 the LM (Levenberg-Marquardt) algorithm is adopted to train the neural network, and the weights of the neural network are adjusted according to the mean square error objective function until the mean square error is smaller than 10^-5, thereby obtaining the trained neural network.
Further, the number of training steps is set to 500.
The beneficial effects of the invention are as follows: the neural network of the modeling method has small scale and low complexity, which facilitates network training, reduces the data volume and running time, and lowers the calculation cost; the prediction method uses the trained neural network to predict the hysteresis characteristic of the intelligent material device, which reduces errors, improves accuracy, and effectively describes the nonlinear characteristics of the intelligent material device.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the present invention by those skilled in the art. It should be understood, however, that the present invention is not limited to the scope of these embodiments; for those skilled in the art, any invention that makes use of the inventive concept falls within the spirit and scope of the present invention as defined in the appended claims.
As shown in fig. 1, a neural network modeling method for a smart material device includes the steps of:
S1, constructing a JA hysteresis model according to physical characteristics and constraint conditions of an intelligent material device;
S2, discretizing the JA hysteresis model to obtain a discrete JA hysteresis model;
S3, constructing a corresponding neural network according to the discrete JA hysteresis model, its structural relation, and the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device, thereby completing the modeling.
The intelligent material device in step S1 comprises a piezoelectric actuator, a shape memory alloy actuator, a transformer core and a magnetorheological damper.
The formula of the JA hysteresis model in step S1 is:
Wherein H is the input variable and represents the intensity of the magnetic field applied to the intelligent material; M is the output variable and represents the magnetization output by the JA hysteresis model; M_rev represents the reversible magnetization of the JA hysteresis model; M_irr represents the irreversible magnetization of the JA hysteresis model; dM_irr/dH represents the derivative of the irreversible magnetization M_irr with respect to the magnetic field intensity H input to the intelligent material device; k represents the hysteresis energy loss factor; δ represents the parameter of the magnetic field change direction; α represents the anhysteretic magnetization shape parameter; M_an represents the anhysteretic magnetization; c represents the magnetization weight coefficient; M_s represents the magnetic saturation factor; coth(·) represents the hyperbolic cotangent function; and a represents the magnetic domain coupling factor.
The formula of the discrete JA hysteresis model in step S2 is:
Where k represents the sampling instant (time index), M_k represents the output magnetization of the discrete JA hysteresis model at time k, M_{k-1} represents the output magnetization of the discrete JA hysteresis model at time k-1, sinh(·) represents the hyperbolic sine function, and H_{k-1} represents the input magnetic field intensity of the discrete JA hysteresis model at time k-1.
The structural relation in step S3 is the structural relation between the parameters of the discrete JA hysteresis model and each term in its formula; the signal transmission relation between the input excitation of the discrete JA hysteresis model and the output response of the intelligent material device in step S3 is the transmission relation from the input signal to the output signal of the discrete JA hysteresis model.
As shown in fig. 2, the neural network model of the intelligent material device in step S3 includes an input layer comprising eight input units, 38 hidden layers and one output layer; each of the 38 hidden layers and the output layer contains one neuron; the connection weights between layers are 1;
The input layer comprises eight input units connected in parallel, namely In1, In2, In3, In4, In5, In6, In7 and In8; the output layer is hidden layer 39;
the specific design process of the neural network model of the intelligent material device is as follows:
The input voltage sequence H and its derivative sequence dH, five input sequences consisting of the constant 1, and an input sequence consisting of the sampling time T are respectively input to the eight input units;
respectively delaying the output data of the input unit In1, the output data of the input unit In2 and the output data of the hidden layer 39 to obtain corresponding delay data; respectively inputting output data of the input units In3 to In7 to the hidden layers 1 to 5 for calculation;
the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 2 are input to the hidden layer 12 for calculation; the delay data of the input unit In1 and the output data of the hidden layer 12 are input to the hidden layer 6 for calculation; inputting the output data of the hidden layer 1 to the hidden layer 37 for calculation; the delay data of the input unit In2 are input to the hidden layer 13 for calculation; inputting output data of the hidden layer 37 and the hidden layer 6 to the hidden layer 7 for calculation;
Respectively inputting the output data of the hidden layer 7 into the hidden layer 8 and the hidden layer 14 for calculation; inputting the output data of the hidden layer 8 into the hidden layer 10 for calculation; inputting the output data of the hidden layer 6 to the hidden layer 9 for calculation; inputting the output data of the hidden layer 10 and the output data of the hidden layer 1 into the hidden layer 11 for calculation;
Inputting the output data of the hidden layer 9 and the hidden layer 11 to the hidden layer 15 for calculation; inputting the output data of the hidden layer 9, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 16 for calculation; inputting the output data of the hidden layer 11, the output data of the hidden layer 1, the output data of the hidden layer 2, the output data of the hidden layer 3 and the output data of the hidden layer 4 into the hidden layer 17 for calculation; inputting the output data of the hidden layer 6, the output data of the hidden layer 14 and the output data of the hidden layer 4 into the hidden layer 18 for calculation; inputting the output data of the hidden layer 1 and the output data of the hidden layer 4 into the hidden layer 19 for calculation; the delay data corresponding to the hidden layer 39 and the output data of the hidden layer 6 are input to the hidden layer 20 for calculation;
Inputting the output data of the hidden layer 15, the output data of the hidden layer 16 and the output data of the hidden layer 17 into the hidden layer 21 for calculation; inputting the output data of the hidden layer 18, the output data of the hidden layer 19 and the output data of the hidden layer 20 into the hidden layer 22 for calculation; inputting the output data of the hidden layer 3 into the hidden layer 23 for calculation; inputting the output data of the hidden layer 5 and the output data of the hidden layer 13 into the hidden layer 24 for calculation;
Inputting the output data of the hidden layer 6, the output data of the hidden layer 23 and the output data of the hidden layer 24 into the hidden layer 27 for calculation; inputting the output data of the hidden layer 21 to the hidden layer 38 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 15 into the hidden layer 29 for calculation;
Inputting the output data of the input unit In8, the output data of the hidden layer 23, the output data of the hidden layer 22 and the delay data of the input unit In2 to the hidden layer 25 for calculation; inputting the output of the hidden layer 22 and the output data of the hidden layer 2 into the hidden layer 26 for calculation; inputting the output data of the hidden layer 26 and the output of the hidden layer 27 to the hidden layer 28 for calculation; the output data of the hidden layer 9, the output data of the hidden layer 4, the output data of the hidden layer 3, the delay data of the input unit In2 and the output data of the input unit In8 are input into the hidden layer 31 for calculation; the output data of the hidden layer 11, the output data of the hidden layer 4, the output data of the hidden layer 3, the output data of the hidden layer 1, the delay data of the input unit In2 and the output data of the input unit In8 are input to the hidden layer 32 for calculation;
Inputting the output data of hidden layer 28 to hidden layer 36 for calculation; inputting the output data of the hidden layer 31 and the output data of the hidden layer 38 to the hidden layer 34 for calculation; inputting the output data of the hidden layer 38 and the output data of the hidden layer 32 into the hidden layer 35 for calculation; inputting the output data of the hidden layer 25 and the output data of the hidden layer 36 into the hidden layer 30 for calculation; inputting the output data of the hidden layer 29 and the output data of the hidden layer 30 to the hidden layer 33 for calculation; the output data of the hidden layer 33, the output data of the hidden layer 34, the output data of the hidden layer 35 and the delay data corresponding to the hidden layer 39 are input to the hidden layer 39 for calculation, and the processing is completed.
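As an illustrative sketch only (the function names and the numerical derivative are assumptions, not part of the invention), the preparation of the eight parallel input sequences and the one-step delays described above can be written as:

```python
import numpy as np

# Illustrative sketch: building the eight parallel input sequences In1..In8 and
# the one-step delays used for In1, In2 and the hidden-layer-39 feedback.
# Names and the use of np.gradient are assumptions, not part of the invention.
def build_inputs(H, T):
    """H: sampled input voltage sequence; T: sampling time."""
    H = np.asarray(H, dtype=float)
    dH = np.gradient(H, T)        # In2: derivative sequence dH of the input
    ones = np.ones_like(H)        # In3..In7: five constant-1 sequences feeding
                                  # hidden layers 1-5 (the JA parameter branches)
    Ts = np.full_like(H, T)       # In8: sequence made of the sampling time T
    return [H, dH, ones, ones, ones, ones, ones, Ts]

def one_step_delay(x, x0=0.0):
    """One-step time delay applied to In1, In2 and the hidden-layer-39 output."""
    x = np.asarray(x, dtype=float)
    return np.concatenate(([x0], x[:-1]))
```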
The activation functions of the hidden layers 1 to 5 are S_a = e^{τu}, S_α = e^{τu}, S_c = e^{τu}, S_{Ms} = e^{τu} and S_k = e^{τu}, respectively; the activation function of the hidden layer 8 is a hyperbolic sine function, i.e. sinh(u); the activation functions of the hidden layer 9 and the hidden layer 10 are square functions, i.e. u^2; the activation function of the hidden layer 13 is a saturation function, i.e. sat(u); the activation function of the hidden layer 14 is a hyperbolic cotangent function, i.e. coth(u); the activation function of the hidden layer 23 is 1-u; the hidden layers 36, 37 and 38 all use the same activation function; the activation functions of the remaining hidden layers are linear activation functions; where u represents the input sequence of the hidden layer and τ represents the search step size parameter.
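A minimal sketch of the custom activation functions listed above follows; the saturation limits of sat(u) and the activation shared by hidden layers 36 to 38 are not given in the text, so the corresponding entries are placeholders:

```python
import numpy as np

# Custom activation functions of the network (sketch). The saturation limits of
# sat(u) are assumed; the activation of hidden layers 36-38 is not reproduced here.
def exp_act(u, tau):               # hidden layers 1-5: S = e^(tau*u)
    return np.exp(tau * u)

def sinh_act(u):                   # hidden layer 8: hyperbolic sine sinh(u)
    return np.sinh(u)

def square_act(u):                 # hidden layers 9 and 10: u^2
    return u ** 2

def sat_act(u, lo=-1.0, hi=1.0):   # hidden layer 13: saturation sat(u), limits assumed
    return np.clip(u, lo, hi)

def coth_act(u):                   # hidden layer 14: hyperbolic cotangent coth(u)
    return np.cosh(u) / np.sinh(u)

def one_minus_act(u):              # hidden layer 23: 1 - u
    return 1.0 - u

def linear_act(u):                 # remaining hidden layers: linear (identity)
    return u
```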
As shown in fig. 3, a hysteresis characteristic prediction method based on the above neural network modeling method for an intelligent material device includes the following steps:
B1, obtaining a voltage sequence data set and a displacement sequence data set for training by adopting an amplitude-sweep excitation signal;
B2, taking the voltage sequence data set as input of a neural network model of the intelligent material device, taking the displacement sequence data set as a label, and training the neural network model of the intelligent material device to obtain a trained neural network model of the intelligent material device;
and B3, inputting the hysteresis characteristic data to be predicted into the trained neural network model of the intelligent material device to obtain predicted displacement output data, thereby completing the prediction of the hysteresis characteristic.
In step B2, the neural network is trained by adopting the LM (Levenberg-Marquardt) algorithm, and the weights of the neural network are adjusted according to the mean square error objective function until the mean square error is smaller than 10^-5, thereby obtaining the trained neural network.
The number of training steps is set to 500.
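As an illustrative sketch of steps B1 to B3 and of the LM training loop described above, the following code adjusts only the five adaptive weights until the mean square error falls below 10^-5 or 500 steps are reached; the forward pass predict_displacement(w, voltage) is an assumed placeholder for the network of fig. 2 and is not defined by the invention:

```python
import numpy as np

# Levenberg-Marquardt training sketch (steps B1-B3). predict_displacement(w, voltage)
# is an assumed placeholder for the forward pass of the network in fig. 2; only the
# five adaptive weights w are updated, the other connection weights stay fixed.
def lm_train(predict_displacement, voltage, displacement, w0,
             mse_goal=1e-5, max_steps=500, mu=1e-3, eps=1e-6):
    w = np.asarray(w0, dtype=float)
    for _ in range(max_steps):
        pred = predict_displacement(w, voltage)
        r = pred - displacement                  # residual sequence
        mse = np.mean(r ** 2)
        if mse < mse_goal:                       # stop once the MSE objective < 1e-5
            break
        # numerical Jacobian of the residuals with respect to the five weights
        J = np.empty((r.size, w.size))
        for i in range(w.size):
            dw = np.zeros_like(w)
            dw[i] = eps
            J[:, i] = (predict_displacement(w + dw, voltage) - pred) / eps
        # LM step: (J^T J + mu I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + mu * np.eye(w.size), -J.T @ r)
        r_new = predict_displacement(w + delta, voltage) - displacement
        if np.mean(r_new ** 2) < mse:            # accept the step, relax the damping
            w, mu = w + delta, mu * 0.5
        else:                                    # reject the step, increase the damping
            mu *= 2.0
    return w
```

Here voltage and displacement are the training data sets of step B1, and the returned weights correspond to the parameter branches of the JA hysteresis model discussed below.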
In one embodiment of the invention, the JA hysteresis model is built based on the energy balance principle, and can be used for describing the magnetization process of the soft magnetic intelligent material, and the mathematical model is as follows:
The derivation is carried out to obtain:
The final formula is as follows:
Wherein dH/dt represents the derivative of the magnetic field intensity H of the intelligent material device with respect to time t; dM/dt represents the derivative of the output magnetization M of the JA hysteresis model with respect to time t; f(·) represents the part of the derived formula other than dH/dt; and para represents the five constant parameters, namely the magnetic domain coupling factor a, the anhysteretic magnetization shape parameter α, the magnetization weight coefficient c, the magnetic saturation factor M_s and the hysteresis energy loss factor k.
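Under these definitions, the derived rate form referred to above can be sketched as:

\[
\frac{dM}{dt} = f\!\left(H, M;\,\mathrm{para}\right)\frac{dH}{dt}, \qquad
\mathrm{para} = \{\,a,\ \alpha,\ c,\ M_s,\ k\,\},
\]

i.e. f(·) is the factor that multiplies dH/dt once the chain rule dM/dt = (dM/dH)(dH/dt) is applied to the JA hysteresis model.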
According to the backward difference approximation, the following formula can be obtained:
wherein f_{k-1} represents the discrete value of the function f(·) at sampling time k-1, and the remaining subscripted quantities represent the discrete values of the corresponding functions at sampling time k-1.
The formula in claim 4 is finally obtained.
In fig. 2, z^-1 represents a time delay; the eight network inputs have the same length; the output data of the input units In3 to In7 correspond respectively to the five parameters of the JA hysteresis model, namely the magnetic domain coupling factor a, the anhysteretic magnetization shape parameter α, the magnetization weight coefficient c, the magnetic saturation factor M_s and the hysteresis energy loss factor k, and their input weights, denoted w_1, w_2, w_3, w_4 and w_5, are adaptively adjusted during network training; the design of the five custom activation functions of the neural network depends on the value ranges of the five parameters of the JA hysteresis model. For example, an activation function in exponential form is set.
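Sketched below, under the assumption that each parameter branch simply multiplies its constant-1 input by the trainable weight w_i and applies the exponential activation of hidden layers 1 to 5, is how the exponential form keeps each identified JA parameter positive, matching the parameter value ranges:

```python
import numpy as np

# Sketch of the five parameter branches (assumed reading): the constant-1 input of
# In3..In7 is scaled by the trainable weight w_i and passed through the exponential
# activation S = e^(tau*u) of hidden layers 1-5, so each branch outputs
# e^(tau*w_i) > 0, i.e. the corresponding JA parameter is always positive.
def ja_parameter_branches(w, tau):
    w = np.asarray(w, dtype=float)           # w = [w1, w2, w3, w4, w5]
    a, alpha, c, Ms, k = np.exp(tau * w)     # constant-1 input times weight, then e^(tau*u)
    return {"a": a, "alpha": alpha, "c": c, "Ms": Ms, "k": k}
```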
As shown in fig. 4, the main control computer sends out the excitation input signal, which is amplified by the power amplification module and applied to the intelligent material device to generate a corresponding output signal; the sensor then measures the output signal and transmits it to the main control computer for display and data processing. The excitation signal in this step should be a sweep signal of varying amplitude used to excite the intelligent material device.
The neural network is not a traditional black-box model but a transparent network whose weight parameters have clear physical meaning: the five weights obtained after training correspond to the values of the five parameters of the JA hysteresis model; only the weights of the neurons corresponding to the Jiles-Atherton hysteresis model parameters are adaptively adjusted during network training, while the other weights are kept unchanged throughout training.
As shown in fig. 5 and fig. 6, a test was performed using a piezoelectric actuator; the piezoelectric actuator was excited by the input excitation signal of fig. 5 to obtain measured displacement output data; the established neural network was trained with the input and output data obtained from the test, the weights of the neural network were obtained, and the trained network was then used for prediction to obtain predicted displacement output data. As can be seen from fig. 6, the measured and predicted displacement output data completely coincide. As shown in fig. 7, the maximum error between the measured and predicted displacement output data is less than 1.0% of full scale. As shown in fig. 8, the hysteresis curves of the experimental test and the neural network prediction completely coincide, which shows that the hysteresis characteristics of the intelligent material device can be effectively described.
Table 1 Modeling performance index data for multiple models
As can be seen from Table 1, the modeling accuracy and the running time of the present invention are both superior to those of the common rate-dependent Prandtl-Ishlinskii model, the rate-dependent Preisach model, the rate-dependent Bouc-Wen model, the rate-dependent Krasnosel'skii-Pokrovskii model and the Magic Formula hysteresis model.
In conclusion, the neural network of the modeling method has small scale and low complexity, which facilitates network training, reduces the data volume and running time, and lowers the calculation cost; the prediction method uses the trained neural network to predict the hysteresis characteristic of the intelligent material device, which reduces errors, improves accuracy, and effectively describes the nonlinear characteristics of the intelligent material device.