Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an accurate prediction method for tree breast diameter and volume based on an optimized fuzzy depth network. On the basis of this method, a tree parameter prediction model is developed and a nonlinear relation among tree parameters is established; a self-adaptive algorithm is provided to enhance the generalization capability of the tree parameter prediction model to different tree varieties, an attention mechanism module is embedded to enhance the robustness of the network, and a pigeon-inspired optimization algorithm is integrated to adjust the parameters of the fuzzy depth network in real time, so that the prediction precision and learning capability of the model are further improved.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
The accurate prediction method for tree breast diameter and volume based on the optimized fuzzy depth network comprises the following steps:
Step 1, acquiring forest land point cloud data through an airborne laser radar;
Step 2, denoising the point cloud data, filtering the point cloud data by adopting a point cloud ground point filtering method, and performing single-wood segmentation on the filtered point cloud data;
Step 3, obtaining tree parameters of the single tree according to the segmented single-tree point cloud, wherein the tree parameters comprise east-west crown width, north-south crown width, tree height, point cloud density and crown volume;
Step 4, acquiring the breast diameter and the volume of the corresponding tree by adopting a manual mapping method;
Step 5, taking the east-west crown width, north-south crown width, tree height, point cloud density, crown volume, breast diameter and timber volume of a plurality of trees as a training sample data set;
Step 6, establishing a forest parameter prediction network, wherein the forest parameter prediction network comprises a fuzzy depth network and a pigeon optimization module and is trained with the training sample data set. The input of the forest parameter prediction network is the east-west crown width, north-south crown width, tree height, point cloud density and crown volume of a tree, and the output is the breast diameter and timber volume of the tree. After each training of the fuzzy depth network is completed, the predicted values are output and transmitted to the pigeon optimization module, and the pigeon optimization module updates the parameters of the fuzzy depth network. When the pigeon optimization module completes the search for the optimal parameters of the fuzzy depth network, the fuzzy depth network completes self-adaptive training according to the optimal parameters, and the final forest parameter prediction model is established;
Step 7, collecting point cloud data of the forest land to be measured, acquiring the east-west crown width, north-south crown width, tree height, point cloud density and crown volume of each tree in the forest land to be measured according to the methods of steps 2 and 3, and inputting these parameters into the final forest parameter prediction model to obtain the predicted values of breast diameter and timber volume of the corresponding tree.
As a further improved technical scheme of the invention, the method for acquiring the forest parameters of the single tree in the step 3 comprises the following steps:
The maximum extent of the single-tree crown point cloud in the east-west direction is taken as the east-west crown width, and the maximum extent in the north-south direction is taken as the north-south crown width. The vertical distance between the highest point of the single-tree crown point cloud and the horizontal ground plane is the tree height. The point cloud density is obtained by dividing the total number of points of the single-tree crown point cloud by the projection area of the crown, and the crown volume is obtained by calculating the convex hull volume of the single-tree crown point cloud.
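A minimal sketch of this parameter extraction, assuming the segmented crown points of one tree are given as an n×3 array of (x, y, z) coordinates with x pointing east, y pointing north and z the height above ground after ground-point filtering; approximating the crown projection area by the area of the 2-D convex hull of the points is an assumption of this sketch, not a limitation of the method:

import numpy as np
from scipy.spatial import ConvexHull

def tree_parameters(points):
    # points: (n, 3) array of single-tree crown point coordinates (x east, y north, z up).
    ew_crown = points[:, 0].max() - points[:, 0].min()   # east-west crown width
    ns_crown = points[:, 1].max() - points[:, 1].min()   # north-south crown width
    height = points[:, 2].max()                          # highest point above the horizontal plane
    hull_2d = ConvexHull(points[:, :2])                  # crown projection on the ground plane
    density = len(points) / hull_2d.volume               # for 2-D hulls, .volume is the projected area
    crown_volume = ConvexHull(points).volume             # convex-hull volume of the crown point cloud
    return ew_crown, ns_crown, height, density, crown_volume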
As a further improved technical scheme of the invention, the method for acquiring the breast diameter and the volume of the tree in step 4 comprises the following steps:
The circumference of the trunk is measured with a tape at a trunk height of 1.3 m above the ground, and the breast diameter of the tree is obtained from this circumference;
The position on the upper part of the trunk where the stem diameter reaches a specified fraction of the breast diameter D is taken as the form point; the length from the form point to the tree tip is measured as H_t, the cross-sectional area at breast height is measured as S_D, and the tree height of the tree is measured as H. The volume of the tree is then computed by formula (1), wherein r is a stem form index and S_vl is the volume of the tree.
As a further improved technical scheme of the invention, the fuzzy depth network in step 6 comprises, from input to output, an adaptive fuzzy layer, a fuzzy inference layer and an attention-based weight updating layer.
As a further improved technical scheme of the invention, the calculation process of the self-adaptive fuzzy layer is as follows:
The input attribute $x_u$ has $k$ fuzzy subsets in the adaptive fuzzy layer, $u \in \{1,2,3,4,5\}$, wherein $x_1, x_2, x_3, x_4, x_5$ are the input attributes of the adaptive fuzzy layer, namely, in turn, the east-west crown width, north-south crown width, tree height, point cloud density and crown volume. Each input attribute has $k$ fuzzy subsets, so the adaptive fuzzy layer has $5 \times k$ fuzzy subsets in total. When the training sample $x_u^i$ is input, the membership degree $h_u^{i,j}$ of the input attribute $x_u$ to the $j$-th fuzzy subset of the adaptive fuzzy layer is output, as shown in formula (2):

$$h_u^{i,j} = \exp\!\left(-\frac{(x_u^i - c_u^j)^2}{2\,(\sigma_u^j)^2}\right) \qquad (2)$$

wherein $c_u^j$ is the center of $x_u$ in the $j$-th fuzzy subset of the adaptive fuzzy layer and $\sigma_u^j$ is the variance of $x_u$ in the $j$-th fuzzy subset, $j = 1, \ldots, k$; the total number of fuzzy subsets of each input attribute is $k$, and $k$ is updated by the pigeon optimization module; $i = 1, \ldots, n$, where $n$ is the number of training samples, namely the total number of trees in the training set;
For the $n$ training samples of $x_u$, the local density $\rho_u^i$ and the distance $\delta_u^i$ of each sample $x_u^i$ are solved, $i = 1, \ldots, n$, $u \in \{1,2,3,4,5\}$. The local density $\rho_u^i$ is calculated by formula (3):

$$\rho_u^i = \sum_{m \neq i} \exp\!\left(-\frac{d(x_u^i, x_u^m)^2}{D_u^2}\right) \qquad (3)$$

wherein $d(x_u^i, x_u^m)$ is the Euclidean distance between $x_u^i$ and $x_u^m$, and $D_u$ is the truncation distance of $x_u$, $u \in \{1,2,3,4,5\}$, $i = 1, 2, \ldots, n$, $m = 1, 2, \ldots, n$;
Separately, among all training samples whose local density is greater than $\rho_u^i$, the minimum Euclidean distance to $x_u^i$ is taken as the distance $\delta_u^i$, as shown in formula (4):

$$\delta_u^i = \min_{m:\,\rho_u^m > \rho_u^i} d(x_u^i, x_u^m) \qquad (4)$$

wherein $x_u^m$ is a training sample whose local density satisfies $\rho_u^m > \rho_u^i$, $m = 1, 2, \ldots, n$; $d(x_u^i, x_u^m)$ is the Euclidean distance between $x_u^i$ and $x_u^m$, $u \in \{1,2,3,4,5\}$. If $\rho_u^i$ is the greatest local density, then $\delta_u^i = \max_{m \neq i} d(x_u^i, x_u^m)$, $m \neq i$;
The probability value $\gamma_u^i$ of $x_u^i$ being taken as an initial clustering center of the input attribute $x_u$ is calculated as shown in formula (5):

$$\gamma_u^i = \frac{\rho_u^i - \min(\rho_u)}{\max(\rho_u) - \min(\rho_u)} \cdot \frac{\delta_u^i - \min(\delta_u)}{\max(\delta_u) - \min(\delta_u)} \qquad (5)$$

wherein $\max(\rho_u)$ is the maximum of the $n$ local densities in $\rho_u$, $\min(\rho_u)$ is the minimum of the $n$ local densities in $\rho_u$, $\max(\delta_u)$ is the maximum of the $n$ distances in $\delta_u$, and $\min(\delta_u)$ is the minimum of the $n$ distances in $\delta_u$; $\rho_u$ is the set of local densities of the $n$ training samples of input attribute $x_u$, containing $n$ local densities, and $\delta_u$ is the set of distances of the $n$ training samples of input attribute $x_u$, containing $n$ distances;
The $\gamma_u^i$ values of the $n$ training samples of input attribute $x_u$ are sorted in descending order, and the first $k$ sample points are taken as the initial center points of the DPKM clustering of the $n$ training samples of $x_u$. The Euclidean distances between the $n$ training samples of input attribute $x_u$ and the $k$ category centers are calculated, and each training sample is assigned to the category of its nearest center according to the minimum-distance assignment principle. After one round of assignment of the $n$ training samples, the mean of each category is calculated, taken as the new category center, and the category centers are updated. The assignment and center-update steps are repeated until the change of the category centers is smaller than the set error, at which point the DPKM clustering is completed; the centers of the $k$ clusters are the centers $c_u^j$ of the $j$-th fuzzy subsets of $x_u$ in the adaptive fuzzy layer, $j = 1, \ldots, k$;
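A minimal sketch of the DPKM initialization and clustering for a single input attribute, assuming the Gaussian-kernel form of the local density reconstructed in formula (3) and using a quantile rule for the truncation distance so that roughly 1%-2% of sample pairs count as neighbours (the ratio used in the embodiment below); the function and its defaults are illustrative only:

import numpy as np

def dpkm_centers(x, k, cutoff_ratio=0.02, tol=1e-6, max_iter=100):
    # x: (n,) training samples of one input attribute; k: number of fuzzy subsets.
    n = len(x)
    d = np.abs(x[:, None] - x[None, :])                     # pairwise distances (1-D attribute)
    D = np.quantile(d[d > 0], cutoff_ratio)                 # truncation distance D_u
    rho = np.exp(-(d / D) ** 2).sum(axis=1) - 1.0           # local density, formula (3); "-1" removes the self term
    delta = np.empty(n)
    for i in range(n):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()   # distance, formula (4)
    scale = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    gamma = scale(rho) * scale(delta)                       # initial-center probability, formula (5)
    centers = x[np.argsort(-gamma)[:k]].astype(float)       # k samples with the largest gamma
    for _ in range(max_iter):                               # K-Means refinement of the centers
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j] for j in range(k)])
        if np.max(np.abs(new - centers)) < tol:
            break
        centers = new
    return centers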
After the center $c_u^j$ of the input attribute $x_u$ in the $j$-th fuzzy subset of the adaptive fuzzy layer and the corresponding DPKM cluster $C_u^j$ have been established, the variance $\sigma_u^j$ of $x_u$ at the $j$-th fuzzy subset of the adaptive fuzzy layer is calculated according to the following adaptive algorithm:
First, the center $\bar{c}_u$ of the $k$ fuzzy subset centers of $x_u$ is calculated, as shown in formula (6):

$$\bar{c}_u = \frac{1}{k}\sum_{j=1}^{k} c_u^j \qquad (6)$$
Then the average Euclidean distance $\bar{d}_u^j$ between the internal samples of cluster $C_u^j$ and its center $c_u^j$ is solved, as shown in formula (7):

$$\bar{d}_u^j = \frac{1}{|C_u^j|}\sum_{x \in C_u^j} d(x, c_u^j) \qquad (7)$$

wherein $d(x, c_u^j)$ represents the Euclidean distance between an internal sample $x$ of $C_u^j$ and $c_u^j$, and $|C_u^j|$ is the number of internal samples of $C_u^j$;
The variance $\sigma_u^j$ is then obtained from $\bar{d}_u^j$ according to formula (8), wherein $\alpha$ is a variance scaling factor updated by the pigeon optimization module, and $d_{\max}^u$ is the maximum distance between the centers of the $k$ fuzzy subsets of input attribute $x_u$, $u \in \{1,2,3,4,5\}$, $j = 1, 2, \ldots, k$.
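A minimal sketch of the adaptive fuzzy layer output (formula (2)), assuming Gaussian membership functions whose centers come from the DPKM clustering and whose variances come from the adaptive rule of formulas (6)-(8); here `centers` and `sigmas` are taken as already computed:

import numpy as np

def fuzzy_memberships(X, centers, sigmas):
    # X: (n, 5) input attributes; centers, sigmas: (5, k) per-attribute fuzzy subset parameters.
    # Returns h with shape (n, 5, k): membership of sample i, attribute u, subset j (formula (2)).
    diff = X[:, :, None] - centers[None, :, :]
    return np.exp(-(diff ** 2) / (2.0 * sigmas[None, :, :] ** 2))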
As a further improved technical scheme of the invention, the calculation process of the fuzzy inference layer is as follows:
The membership degrees are input to the fuzzy inference layer, the fuzzy units of the fuzzy inference layer are established by a product inference method, and the output value $g_i^j$ of the $j$-th unit for the $i$-th training sample is calculated as the fuzzy unit output, as shown in formula (9):

$$\pi_i^j = \prod_{u=1}^{5} h_u^{i,j}, \qquad g_i^j = \frac{\pi_i^j}{\sum_{j'=1}^{k} \pi_i^{j'}} \qquad (9)$$

wherein $\pi_i^j$ is the product of the membership degrees of the $j$-th fuzzy subset of the $i$-th training sample over the different input attributes, and $g_i^j$ is its normalized value; $u \in \{1,2,3,4,5\}$, $i = 1, \ldots, n$, $j = 1, 2, \ldots, k$, $n$ is the total number of trees in the training sample, and the total number of fuzzy units is $k$.
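A minimal sketch of the fuzzy inference layer (formula (9)), taking the memberships from the previous sketch; the small constant in the denominator is an assumption added to avoid division by zero:

import numpy as np

def fuzzy_inference(h):
    # h: (n, 5, k) memberships. Returns g with shape (n, k), formula (9).
    pi = np.prod(h, axis=1)                                # product over the 5 input attributes
    return pi / (pi.sum(axis=1, keepdims=True) + 1e-12)    # normalized fuzzy unit outputs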
As a further improved technical scheme of the invention, the calculation process of the weight updating layer based on the attention is as follows:
The loss function is shown in formula (10):

$$E = \frac{1}{2}\sum_{i=1}^{n} Q_i\left[(y_1^i - \hat{y}_1^i)^2 + (y_2^i - \hat{y}_2^i)^2\right] \qquad (10)$$

wherein $y_1^i$ is the breast diameter predicted value and $y_2^i$ is the volume predicted value of the $i$-th training sample; $\hat{y}_1^i$ and $\hat{y}_2^i$ are the corresponding measured breast diameter and measured volume; $Q_i$ is the attention weight of the $i$-th training sample, $i = 1, 2, \ldots, n$;
The connection weight between the fuzzy inference layer output $g_i^j$ and the breast diameter is $w_1^j$, and the connection weight between $g_i^j$ and the volume is $w_2^j$; then for all training samples there are, as shown in formula (11):

$$y_1^i = \sum_{j=1}^{k} w_1^j\, g_i^j, \qquad y_2^i = \sum_{j=1}^{k} w_2^j\, g_i^j \qquad (11)$$

wherein $g_i^j$ represents the $j$-th output value of the $i$-th training sample from the fuzzy inference layer; $y_1^i$ represents the predicted breast diameter value and $y_2^i$ the predicted volume value of the $i$-th training sample; $j = 1, 2, \ldots, k$, $i = 1, 2, \ldots, n$, $n$ is the total number of training samples and the total number of fuzzy units is $k$.
The training samples are normalized in advance. The parameter matrix of the $i$-th tree in the training samples is $z_i$, which contains 7 actually measured forest parameters, namely, from left to right, east-west crown width, north-south crown width, tree height, point cloud density, crown volume, timber volume and breast diameter. The east-west crown width average of all training samples is $avg_1$, the north-south crown width average is $avg_2$, the tree height average is $avg_3$, the point cloud density average is $avg_4$, the crown volume average is $avg_5$, the timber volume average is $avg_6$, and the breast diameter average is $avg_7$; the average value matrix of the forest parameters in the training samples is $avg = [avg_1, \ldots, avg_7]$. $Q_i$ is calculated as shown in formula (12), wherein $z_i^p$ represents the $p$-th forest parameter from left to right in the forest parameter matrix $z_i$, and $avg_p$ represents the $p$-th average value from left to right of the average value matrix $avg$, $p \in \{1,2,3,4,5,6,7\}$; the cosine similarity $Sim(z_i, avg)$ and the Euclidean distance $Dist(z_i, avg)$ both measure the internal relation between the forest parameters of the $i$-th tree and the average value matrix $avg$; $\tau$ is an attention weight scaling factor updated by the pigeon optimization module, $i = 1, 2, \ldots, n$.
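A minimal sketch of how the attention weight $Q_i$ could be computed from the cosine similarity and Euclidean distance named above; formula (12) is not reproduced in this document, so the way $Sim$, $Dist$ and $\tau$ are combined below (similarity damped by an exponential of the scaled distance) is an assumption of this sketch, not the patented formula:

import numpy as np

def attention_weights(Z, tau):
    # Z: (n, 7) normalized forest parameter matrix (rows z_i); tau: attention weight scaling factor.
    # The combination rule below is assumed, not taken from formula (12).
    avg = Z.mean(axis=0)                                                         # average value matrix avg
    sim = Z @ avg / (np.linalg.norm(Z, axis=1) * np.linalg.norm(avg) + 1e-12)    # Sim(z_i, avg)
    dist = np.linalg.norm(Z - avg, axis=1)                                       # Dist(z_i, avg)
    return sim * np.exp(-tau * dist)                                             # assumed combination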
The attention-based connection weights are continuously updated by back propagation, in the manner shown in formula (13):

$$w_1^j(t+1) = w_1^j(t) - \eta\,\frac{\partial E}{\partial w_1^j}, \qquad w_2^j(t+1) = w_2^j(t) - \eta\,\frac{\partial E}{\partial w_2^j} \qquad (13)$$

wherein $\eta$ ($0 < \eta < 1$) represents the learning efficiency and $t$ is the current iteration number;
The initial values of the attention-based connection weights are given randomly; the weights are continuously updated by back propagation, and the update terminates when the iteration number reaches the maximum.
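A minimal sketch of this attention-based weight update (formulas (11) and (13)), assuming the weighted squared-error loss reconstructed as formula (10); the gradient expression follows from that assumed loss and is illustrative only:

import numpy as np

def train_output_weights(G, Y, Q, eta=0.05, max_iter=200, seed=0):
    # G: (n, k) fuzzy inference outputs; Y: (n, 2) measured (breast diameter, volume);
    # Q: (n,) attention weights; eta: learning efficiency. Returns W with shape (k, 2).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((G.shape[1], 2)) * 0.01        # random initial connection weights
    for _ in range(max_iter):
        pred = G @ W                                       # formula (11): weighted sums of the fuzzy units
        grad = G.T @ (Q[:, None] * (pred - Y))             # dE/dW under the assumed loss of formula (10)
        W -= eta * grad                                    # formula (13): gradient-descent update
    return W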
As a further improved technical scheme of the invention, the calculation process of the pigeon cluster optimization module is as follows:
The pigeon optimization module is used to optimize the parameters $k$, $\alpha$ and $\tau$ of the fuzzy depth network. A combination of values of $k$, $\alpha$ and $\tau$ is called a model parameter combination $\delta[k, \alpha, \tau]$. Within the parameter search space, $k$ ranges from 0 to 100 and must be an integer, $\alpha$ ranges from 0 to 30, and $\tau$ ranges from 0 to 10. The pigeon optimization module is initialized with $L$ groups of model parameters $\delta_l[k_l, \alpha_l, \tau_l]$ ($l = 1, 2, \ldots, L$), i.e. the module optimizes over $L$ groups of model parameters for at most $t_{\max}$ iterations;
The value of the $l$-th group of model parameter combinations at the $t$-th iteration is denoted $\delta_l^t[k_l^t, \alpha_l^t, \tau_l^t]$, and its fitness is shown in formula (14), wherein $t$ is the current iteration number of the pigeon flock, $l = 1, 2, \ldots, L$, $i = 1, 2, \ldots, n$, $n$ is the total number of trees of the training sample; $y_1^i$ is the breast diameter predicted value and $y_2^i$ the volume predicted value of the fuzzy depth network trained under $\delta_l^t$, and $\hat{y}_1^i$ and $\hat{y}_2^i$ are the corresponding measured breast diameter and volume;
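Formula (14) is not reproduced in this document; a plausible stand-in of the kind described (lower is better, aggregating the relative prediction errors of breast diameter and volume over the training trees) might look like the following sketch, which is an assumption rather than the patented fitness:

import numpy as np

def fitness(pred, meas):
    # pred, meas: (n, 2) predicted and measured (breast diameter, volume).
    # Mean relative error over both outputs -- an assumed stand-in for formula (14).
    return float(np.mean(np.abs(pred - meas) / (np.abs(meas) + 1e-12)))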
The search strategy of the model parameter combinations is divided into two stages according to the iteration number: the first stage is used from the start, and the second stage is entered when the iteration number reaches 80% of the maximum iteration number $t_{\max}$. The specific search strategy is as follows:
The calculation process of the first stage of the model parameter combination search strategy is shown in formula (15):
wherein $\omega(t)$ is the cosine iteration weight term, $t_{\max}$ is the maximum iteration number of the pigeon flock, $\epsilon$ is a very small constant, and $\mathrm{rand}(0,1)$ is a random number in $[0,1]$; $\delta_l^t$ is the array of the $l$-th group of model parameter combinations at the $t$-th iteration, and $\delta_{best}^t$ is the array of the optimal model parameter combination among the $L$ groups at the $t$-th iteration, $l = 1, 2, \ldots, L$;
When the fitness of the optimal parameter set among the $L$ groups of model parameter combinations remains unchanged for a long time, the $L$ groups of model parameter combinations are sorted in descending order of fitness, and the groups with higher fitness are subjected to population mutation in the manner shown in formula (16):
wherein $\delta_l'$ is the population-mutation update of $\delta_l$; the mutated $k_l'$ is rounded to an integer and, if it lies within the range 0-100, the value of $k_l$ is updated, otherwise it is not updated; likewise, $\alpha_l'$ is accepted only if it lies within the range 0-30 and $\tau_l'$ only if it lies within the range 0-10, otherwise the corresponding value is not updated; $\mathrm{rand}(-1,1)$ is a random number in the range $[-1,1]$;
The calculation process of the second stage of the model parameter combination search strategy is shown in formula (17), in which $\delta_c(t)$ is the center position of all model parameter combinations at iteration $t$ and each parameter combination flies toward this center position:
wherein $\delta_l^{temp}(t)$ is the temporary value of the $l$-th group of model parameters at iteration $t$; if its fitness is lower than that of $\delta_l(t-1)$, the $l$-th group of model parameters is updated, i.e. $\delta_l(t) = \delta_l^{temp}(t)$, otherwise it is not updated, i.e. $\delta_l(t) = \delta_l(t-1)$, $l = 1, 2, \ldots, L$;
After each iteration, part of the model parameter combinations with higher fitness values are discarded and the number of model parameter groups $L$ is updated. The iteration ends when the iteration number reaches the maximum iteration number $t_{\max}$ or only one group of model parameter combinations remains; at that moment, the parameter combination with the lowest fitness among the remaining $L$ groups, namely the optimal values of the parameters $k$, $\alpha$ and $\tau$, is output and transmitted into the fuzzy depth network to complete training.
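A minimal skeleton of this two-stage pigeon-inspired search, assuming a cosine iteration weight of a common form and a greedy acceptance rule in the second stage; formulas (15)-(17) are not reproduced here, so the concrete update expressions are assumptions, while the control flow (stage switch at 80% of $t_{\max}$, mutation when the best fitness stagnates, discarding the worse groups, stopping at $t_{\max}$ or when one group remains) follows the text. The `fitness_fn` argument can wrap training the fuzzy depth network under a candidate $[k, \alpha, \tau]$ and evaluating it, for example with the fitness sketch given above:

import numpy as np

BOUNDS = np.array([[0, 100], [0, 30], [0, 10]])            # search ranges of k, alpha, tau

def clip_round(p):
    p = np.clip(p, BOUNDS[:, 0], BOUNDS[:, 1])
    p[0] = round(p[0])                                     # k must be an integer
    return p

def pio_search(fitness_fn, L=32, t_max=50, seed=0):
    # fitness_fn: maps a parameter array [k, alpha, tau] to a fitness value (lower is better).
    rng = np.random.default_rng(seed)
    flock = [clip_round(rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1])) for _ in range(L)]
    fit = np.array([fitness_fn(p) for p in flock])
    stagnation, best_fit = 0, fit.min()
    for t in range(1, t_max + 1):
        best = flock[int(np.argmin(fit))].copy()
        if t <= 0.8 * t_max:                               # stage 1: move toward the current best combination
            w = 0.5 * (1 + np.cos(np.pi * t / t_max))      # assumed cosine iteration weight
            for l in range(len(flock)):
                flock[l] = clip_round(flock[l] + w * rng.uniform(0, 1, 3) * (best - flock[l]))
            if stagnation > 3:                             # population mutation when the best fitness stagnates
                for l in np.argsort(fit)[-max(1, len(flock) // 2):]:
                    flock[l] = clip_round(best + 0.1 * (BOUNDS[:, 1] - BOUNDS[:, 0]) * rng.uniform(-1, 1, 3))
                stagnation = 0
            fit = np.array([fitness_fn(p) for p in flock])
        else:                                              # stage 2: fly toward the flock center, keep improvements
            center = np.mean(flock, axis=0)
            for l in range(len(flock)):
                cand = clip_round(flock[l] + rng.uniform(0, 1, 3) * (center - flock[l]))
                cand_fit = fitness_fn(cand)
                if cand_fit < fit[l]:                      # greedy acceptance (assumed)
                    flock[l], fit[l] = cand, cand_fit
            keep = np.argsort(fit)[: max(1, len(flock) // 2)]   # discard the worse half
            flock, fit = [flock[i] for i in keep], fit[keep]
        stagnation = stagnation + 1 if fit.min() >= best_fit else 0
        best_fit = min(best_fit, fit.min())
        if len(flock) == 1:
            break
    return flock[int(np.argmin(fit))]                      # optimal [k, alpha, tau]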
The beneficial effects of the invention are as follows:
According to the invention, an optimized fuzzy depth network is provided to develop a rubber tree forest parameter prediction model, a nonlinear relation between forest parameters is established, a self-adaptive algorithm is provided to enhance the generalization capability of the prediction model on different rubber tree varieties, an attention mechanism module is embedded to enhance the robustness of the network, and a Pigeon group optimization algorithm (Pigeon-inspired Optimization) is integrated to adjust the parameters of the fuzzy depth network in real time, so that the prediction precision and learning capability of the model are further improved. The rubber tree parameter prediction model of the invention can accurately invert complex tree parameters and provide quantitative decisions and data support for forestation and growth tending of different varieties of rubber trees.
The invention is based on a fuzzy depth network, can refine a complex prediction model, provides a self-adaptive learning algorithm to determine a network structure, combines a pigeon optimization algorithm to search for optimal parameters, improves the effect of the self-adaptive algorithm, and adds an attention mechanism to judge abnormal data of a training sample. The forest parameter prediction model further improves the accuracy of the prediction result of the key parameters of the rubber forest.
According to the invention, by establishing a forest parameter prediction model, the breast diameter and volume of single rubber trees are predicted from the partial forest parameters automatically acquired from the airborne laser point cloud. The forest parameter prediction model combines the advantages of a fuzzy depth network, an attention mechanism and a pigeon-inspired optimization algorithm; it can establish the model adaptively according to the nonlinear relations among forest parameters, the weight assignment for abnormal samples and the model parameter optimization strategy, is applicable to establishing most complex forest relations, and has good universality and robustness. Based on the combination of the fuzzy depth network and several artificial intelligence algorithms, the forest parameter prediction model automatically completes its training for rubber trees of different varieties and is suitable for predicting the key parameters of rubber trees of the same variety in the same forest land, which is one of the applications of artificial intelligence technology in the field of forestry.
Detailed Description
The following is a further description of embodiments of the invention, with reference to the accompanying drawings:
In recent years, airborne laser radar has been widely applied in forest resource investigation and parameter inversion, but it also faces the problem that complex forest parameters that are difficult to measure, such as breast diameter and volume, are hard to obtain because of view-angle occlusion. Aiming at this problem, this embodiment provides an accurate prediction method for forest breast diameter and volume based on an optimized fuzzy depth network. Firstly, an optimized fuzzy learning network fused with an attention mechanism module is constructed, and a multi-parameter autonomous optimization module based on the pigeon-inspired optimization algorithm is added. Secondly, a single-tree segmentation algorithm combined with manual plot survey is used to extract several growth parameters of four rubber tree varieties (hot reclamation 628, hot reclamation 525, hot reclamation 72059 and PR107) from the airborne point clouds of three forest plots, and these growth parameters are taken as training sets for the deep learning network to optimize the training parameters. Finally, the test sets of the four varieties are respectively fed into the trained network to predict the key forest parameters, which are compared with the true values. The results show that, for the breast diameter of the four rubber tree varieties, the comparison between predicted and measured values gives RMSE less than 1.75 cm and R² greater than 91.42%, and for the volume the R² is greater than 90.14%. Compared with the traditional back-propagation and radial basis function neural networks, the correlation of the forest parameter inversion results obtained by this deep learning network is 4-9% higher. This embodiment applies recent artificial intelligence technology to forest-land airborne laser point clouds to realize accurate prediction of breast diameter and stock volume, and can meet the requirements of large-scale rubber forest parameter inversion and operational investigation. The specific steps are set forth below.
1. Materials and data:
1.1, Study area and data acquisition:
The study area is located in a rubber tree plantation in Danzhou City, in the northwest of Hainan Island, from which three multi-variety rubber forest sample plots were selected for this embodiment, as shown in fig. 1 (a), (b) and (c) on the Google map. The terrain of the area is a typical hilly plateau with a tropical monsoon climate; the average annual precipitation is 1815 mm, of which more than 89% falls in the rainy season (May to October), and the average annual temperature is about 23 ℃, which meets the growth requirements of rubber trees. The rubber tree varieties hot reclamation 628, hot reclamation 525, hot reclamation 72059 and PR107 in the three sample areas are fine varieties with high and stable yield, strong stress resistance, high tree survival rate and other characteristics, and are planted on a large scale in the Hainan area. Hot reclamation 628 has strong cold resistance and wind resistance and is a fine variety with relatively stable yield and wide adaptability; hot reclamation 525 and hot reclamation 523 grow fast, mature early and give high yields, and are excellent varieties. PR107 has a low initial tapping yield, but its dry rubber content is high, it tolerates stimulation and high-frequency tapping, and its final dry rubber yield keeps increasing, so it is an excellent high-yield variety. Therefore, the four varieties of rubber trees (of different ages), namely hot reclamation 628, hot reclamation 525, hot reclamation 72059 and PR107, are selected from the multi-variety artificial rubber tree plantation, as shown in (d), (e), (f) and (g) of FIG. 1.
The airborne laser radar carries a Velodyne HDL-32E laser radar sensor, which achieves a vertical field of view (FOV) from -30.67° to +10.67°, provides a 360° horizontal field of view, works at a frequency of 10 Hz, has a measuring range of 70 m and a measuring accuracy of ±2 cm. The shooting mode of the airborne laser radar is set to continuous shooting, and the flight route is a pre-programmed "round-trip rectangular parallel" route (the dotted line in (b) of fig. 1). The flight speed, flight height and laser scanning overlap are set to 10 m/s, 30 m (above the take-off position) and 30% respectively, so as to ensure that the branch parameters and the vertical structure of the rubber trees are acquired completely and clearly. The extracted point clouds are finally stored in the LAS 1.2 format.
1.2, Training samples and test samples:
After the point cloud data of the rubber tree plots are obtained by the airborne laser radar, Gaussian filtering is used for denoising, and cloth simulation filtering (CSF) of ground points is adopted to eliminate the adverse influence of the terrain. Then this embodiment adopts an existing single-wood segmentation method based on double Gaussian filtering and energy function minimization, which has proven universal in subtropical forests in China. Experiments prove that the method is suitable for rubber forest land and has a good segmentation effect at the junctions of crowns; the segmentation results of the three rubber tree sample plots, with individual trees represented by different colors, are shown in fig. 2. Fig. 2 (a) is the single-wood segmentation result of rubber woodland 1. Fig. 2 (b) is the single-wood segmentation result of rubber woodland 2. Fig. 2 (c) is the single-wood segmentation result of rubber woodland 3.
The three multi-variety rubber tree sample plots contain 1364 rubber trees in total. The single-tree point cloud data were checked by manual visual inspection according to the principle that single-tree branches should be as complete as possible, and 813 trees covering the four varieties (about 200 trees each) were selected from the sample plots; the morphological characteristics of typical single trees of the different varieties are shown in fig. 3. FIG. 3 shows point cloud data of rubber trees of different clone varieties. In FIG. 3, from top to bottom, the single rubber trees of the first and second rows are hot reclamation 628, those of the third and fourth rows are hot reclamation 525, those of the fifth and sixth rows are hot reclamation 72059, and those of the seventh and eighth rows are PR107.
The hot reclamation 628 trees are tall and straight, with upright stems, almost no branching, a small broom-shaped crown, and elliptic, thick, glossy leaves whose three leaflets are separated. The hot reclamation 525 trees lean little; branching starts at a lower height and a larger angle, the branches are numerous, the crown is large, and the trees tend to be multi-headed. The hot reclamation 72059 trees are soft and easily bent, with many drooping branches, numerous branches, a large branching angle and a large, fan-shaped crown. The PR107 trees are straight and strongly wind-resistant; branching starts higher on the trunk, the branches are few, the crown is small and broom-shaped, and the leaves are oblong with regularly and finely waved margins.
From the segmented single rubber tree point clouds, the east-west crown width, north-south crown width, tree height, point cloud density and crown volume among the forest parameters are obtained automatically as follows. The maximum extent of each crown point cloud in the east-west direction is taken as the east-west crown width, and the corresponding extent in the north-south direction as the north-south crown width; the vertical distance between the highest point of the single-tree point cloud and the horizontal plane is the tree height; the point cloud density is the total number of points of the single-tree point cloud divided by the projection area of the crown; and the crown volume is obtained by calculating the convex hull volume of the single-tree crown point cloud with the alpha-shape method.
Because tree parameters such as breast diameter and timber volume of rubber trees are not easy to obtain directly from airborne single-tree point clouds, they are obtained by manual field measurement, as follows. The circumference of the trunk is measured with a tape at a trunk height of 1.3 m above the ground, from which the breast diameter parameter is obtained. The timber volume parameter is obtained by a tree mensuration method: the position on the upper part of the trunk where the stem diameter reaches a specified fraction of the breast diameter D of the rubber tree is selected as the form point, the length from the form point to the tree tip is measured as H_t, the cross-sectional area at breast height is measured as S_D, and the tree height H of the rubber tree is measured; the rubber tree volume is then obtained with formula (1), wherein r is a stem form index and S_vl is the rubber tree volume. After the parameters are obtained automatically from the single-tree point clouds and by manual measurement, forest parameters of about 200 trees of each variety are available; a training set and a test set are divided, and the forest parameters and training samples of the four rubber tree varieties hot reclamation 628, hot reclamation 525, hot reclamation 72059 and PR107 are shown in Table 1.
Table 1 shows the forest parameters and training samples of the different rubber tree varieties in the study plots:
2. Forest parameter prediction model:
2.1, Model overall architecture design:
Tree parameters such as breast diameter and timber volume are not easy to obtain from airborne single-tree point clouds, so it is very important to find the correspondence among tree parameters and establish a tree parameter prediction model with which breast diameter, timber volume and similar parameters can be obtained. The trees of a multi-variety artificial rubber plantation share the same soil, climate and other conditions, and the forest parameters of trees of the same rubber variety are normally distributed around the variety's mean values; however, because of factors such as rubber tree necrosis and intraspecific competition, the forest parameters of some individuals differ greatly from the rest of the population, so the prediction model must be able to recognise different individuals autonomously. Since an artificial rubber plantation contains several varieties with different growth forms, self-adaptive learning is needed for parameter prediction; in addition, the prediction performance is strongly influenced by the parameter optimization of the prediction model.
A survey of common neural network prediction models shows that an attention mechanism effectively enhances the anti-interference capability of a prediction model; a forest parameter attention mechanism is therefore added to improve the robustness of the forest parameter prediction model and thus the accuracy of its predictions.
The membership functions of the fuzzy subsets in a fuzzy depth network usually adopt Gaussian functions, and determining the centers and variances of the fuzzy subsets adaptively reflects the learning capability of the fuzzy depth network. Therefore, this embodiment proposes a DPKM algorithm, combining the density peaks clustering algorithm (Density Peaks Clustering, DPC) and the K-Means algorithm, to determine the centers of the membership functions, and proposes an algorithm that adaptively determines the variances according to the Euclidean distances between the centers. The pigeon-inspired optimization algorithm (Pigeon-Inspired Optimization, PIO) can effectively solve the parameter optimization problem of the fuzzy depth network, so a forest parameter prediction model that fuses a pigeon optimization module with the fuzzy depth network is designed.
The overall framework of the forest parameter prediction network provided by this embodiment is shown in fig. 4 and is divided into a fuzzy depth network and a pigeon optimization module. The pigeon optimization module updates the parameters of the fuzzy depth network; after the network has been trained, the predicted values are output and returned to the pigeon optimization module as the basis for calculating the parameter fitness values. After the pigeon optimization module finishes searching for the optimal parameters of the fuzzy depth network, the fuzzy depth network completes self-adaptive training according to the optimal parameters, and the final forest parameter prediction model is established. From input to output, the fuzzy depth network consists of an adaptive fuzzy layer, a fuzzy inference layer and an attention-based weight updating layer. $x_1, x_2, x_3, x_4, x_5$ are the input attributes of the adaptive fuzzy layer, namely east-west crown width, north-south crown width, tree height, point cloud density and crown volume; each input attribute has the same number of fuzzy subsets, and the centers and variances of the membership functions of the fuzzy subsets are determined by the self-adaptive algorithm. When a sample is input, the fuzzy subsets of the input attributes output the membership values; the membership corresponding to $x_u$ is $h_u^{i,j}$, $u \in \{1,2,3,4,5\}$, $j = 1, \ldots, k$, the total number of fuzzy subsets being $k$. Product inference determines the fuzzy units of the fuzzy inference layer; the membership degrees $h$ are transmitted into the fuzzy units, one unit corresponding to one output, with output value $g_i^j$, $j = 1, \ldots, k$, the total number of fuzzy units being $k$. The values $g_i^j$ are transmitted to the attention-based weight updating layer, whose connection weights are continuously updated by back propagation; the predicted values $y_1$ and $y_2$ are finally output by a weighted operation, where $y_1$ is the breast diameter predicted value and $y_2$ is the volume predicted value.
2.2, Adaptive fuzzy layer:
The input attribute $x_u$ has $k$ fuzzy subsets in the adaptive fuzzy layer, $u \in \{1,2,3,4,5\}$, so the adaptive fuzzy layer has $5 \times k$ fuzzy subsets in total. When the training sample $x_u^i$ is input, the membership degree $h_u^{i,j}$ of the input attribute $x_u$ to the $j$-th fuzzy subset of the adaptive fuzzy layer is output, as shown in formula (2).
wherein $c_u^j$ and $\sigma_u^j$ are respectively the center and the variance of $x_u$ in the $j$-th fuzzy subset of the adaptive fuzzy layer. The training samples of $x_u$ are clustered by the DPKM algorithm proposed in this embodiment, and $c_u^j$ is the resulting cluster center; the variance $\sigma_u^j$ is determined adaptively according to the Euclidean distances between the centers and the sample density inside the DPKM clusters, $j = 1, \ldots, k$. The total number of fuzzy subsets of each input attribute is $k$, which is updated by the pigeon optimization module; $i = 1, \ldots, n$, where $n$ is the total number of rubber trees in the training samples.
The DPKM algorithm solves, from the $n$ training samples of $x_u$, the local density $\rho_u^i$ and the distance $\delta_u^i$ of each sample $x_u^i$, calculates from these two quantities the probability value $\gamma_u^i$ of the sample being an initial clustering center, and then performs the clustering, as detailed below, $i = 1, \ldots, n$, $u \in \{1,2,3,4,5\}$.
The local density $\rho_u^i$ is calculated by formula (3), wherein $d(x_u^i, x_u^m)$ is the Euclidean distance between $x_u^i$ and $x_u^m$ and $D_u$ is the truncation distance of $x_u$, $u \in \{1,2,3,4,5\}$, $i = 1, 2, \ldots, n$. The calculation of $\rho_u^i$ considers the training samples of $x_u$ that lie in the neighbourhood of $x_u^i$ determined by the truncation distance $D_u$: formula (3) amplifies the contribution of sample pairs inside the neighbourhood and reduces the effect of sample points outside it. $D_u$ is chosen so that the ratio of the number of neighbours of a training sample of $x_u$ to the total number of trees $n$ is about 1%-2%. The $n$ training samples of $x_u$ are then sorted in descending order of local density, and for each sample the minimum Euclidean distance to any training sample with a larger local density is taken as its distance $\delta_u^i$, as shown in formula (4).
wherein $x_u^m$ is a training sample whose local density satisfies $\rho_u^m > \rho_u^i$, $m = 1, 2, \ldots, n$; $d(x_u^i, x_u^m)$ is the Euclidean distance between $x_u^i$ and $x_u^m$, $u \in \{1,2,3,4,5\}$. In particular, if $\rho_u^i$ is the largest local density, then $\delta_u^i = \max_{m \neq i} d(x_u^i, x_u^m)$, $m \neq i$.
After the local density $\rho_u^i$ and the distance $\delta_u^i$ are determined, the likelihood $\gamma_u^i$ of $x_u^i$ being an initial center of the DPKM clustering of input attribute $x_u$ is calculated, as shown in formula (5).
wherein $\max(\rho_u)$ and $\min(\rho_u)$ are respectively the maximum and minimum of the $n$ local densities in $\rho_u$, and $\max(\delta_u)$ and $\min(\delta_u)$ are respectively the maximum and minimum of the $n$ distances in $\delta_u$. Formula (5) normalizes the local density $\rho_u$ and the distance $\delta_u$, which have different scales, so that their product is formed on a common scale. $\rho_u$ denotes the local densities of the $u$-th input attribute, of which there are $n$; $\delta_u$ denotes the distances of the $u$-th input attribute, of which there are $n$.
At this point, the $\gamma_u^i$ values of the $n$ training samples of $x_u$ are sorted in descending order and the first $k$ sample points are taken as the initial center points of the DPKM clustering of the $n$ training samples of $x_u$. The Euclidean distances between the $n$ training samples of input attribute $x_u$ and the $k$ category centers are calculated, and each training sample is assigned to the category of its nearest center according to the minimum-distance assignment principle. After one round of assignment of the $n$ samples, the mean of each category is calculated and the category centers are updated. These two steps (assignment and center update) are repeated until the change of the category centers is smaller than the set error, completing the DPKM clustering; the centers of the $k$ clusters are the fuzzy subset centers $c_u^j$ of $x_u$, $j = 1, \ldots, k$.
After the fuzzy subset centers $c_u^j$ of input attribute $x_u$ and the corresponding DPKM clusters $C_u^j$ have been established, the fuzzy subset variances $\sigma_u^j$ of $x_u$ are calculated according to the following adaptive algorithm. First, the center $\bar{c}_u$ of the $k$ fuzzy subset centers of $x_u$ is calculated, as shown in formula (6).
Then the average Euclidean distance $\bar{d}_u^j$ between the internal samples of cluster $C_u^j$ and its center is solved, which is used to calculate the variance $\sigma_u^j$, as shown in formula (7).
wherein $d(x, c_u^j)$ represents the Euclidean distance between an internal sample $x$ of $C_u^j$ and $c_u^j$, and $|C_u^j|$ is the number of internal samples of $C_u^j$.
The variance $\sigma_u^j$ is then obtained according to formula (8), wherein $\alpha$ is a variance scaling factor updated by the pigeon optimization module, and $d_{\max}^u$ is the maximum distance between the centers of the $k$ fuzzy subsets of input attribute $x_u$, $u \in \{1,2,3,4,5\}$, $j = 1, 2, \ldots, k$.
The initial fuzzy subset number k and the variance scaling coefficient alpha are given to the self-adaptive fuzzy layer, and when the pigeon optimization module transmits new values of k and alpha, the model structure of the self-adaptive fuzzy layer is changed.
2.3, Fuzzy reasoning layer:
The fuzzy depth network fuzzifies the different input attributes in turn so as to facilitate fuzzy inference on the complex nonlinear relations among tree parameters; it is very effective for complex models that are difficult to handle accurately, thereby overcoming a shortcoming of traditional neural networks.
The membership degrees are input to the fuzzy inference layer, the fuzzy units of the fuzzy inference layer are established by a product inference method, and the output value $g_i^j$ of the $j$-th unit for the $i$-th training sample is calculated as the fuzzy unit output, as shown in formula (9).
wherein $\pi_i^j$ is the product of the membership degrees of the $j$-th fuzzy subset of the $i$-th training sample over the different input attributes, and $g_i^j$ is its normalized value; $u \in \{1,2,3,4,5\}$, $i = 1, \ldots, n$, $j = 1, 2, \ldots, k$, $n$ is the total number of rubber trees in the training sample, and the total number of fuzzy units is $k$.
2.4, Weight update layer based on attention:
The initial values of the attention-based connection weights $w$ are given randomly and are continuously updated by back propagation, as shown in fig. 5; the update terminates when the iteration number reaches the maximum. The fuzzy inference layer outputs $g_i^j$ are taken as the input of this layer and weighted by the connection weights $w$ to output the predicted values $y$, realizing the defuzzification calculation of the fuzzy depth network.
The loss function fuses the predicted values $y$, the measured values $\hat{y}$ and the attention weights $Q$, as shown in formula (10).
wherein $y_1^i$ is the breast diameter predicted value and $y_2^i$ the volume predicted value of the $i$-th training sample, $\hat{y}_1^i$ and $\hat{y}_2^i$ are the corresponding measured values, and $Q_i$ is the attention weight of the $i$-th training sample, $i = 1, 2, \ldots, n$. The connection weight between the fuzzy inference layer output $g_i^j$ and the breast diameter is $w_1^j$, and the connection weight between $g_i^j$ and the volume is $w_2^j$; then for all training samples there are, as shown in formula (11):
wherein $g_i^j$ represents the $j$-th output value of the $i$-th training sample from the fuzzy inference layer; $y_1^i$ represents the predicted breast diameter value and $y_2^i$ the predicted volume value of the $i$-th training sample; $j = 1, 2, \ldots, k$, $i = 1, 2, \ldots, n$, $n$ is the total number of training samples, and the total number of fuzzy units is $k$.
The essence of the attention mechanism is weight assignment: a weight $Q$ is assigned to each rubber tree sample in the loss function, which gives the network anti-interference capability against rubber tree training samples in abnormal growth states. To eliminate the influence of the different dimensions of the forest parameters, the training samples are normalized in advance. The parameter matrix of the $i$-th tree in the training samples is $z_i$, which contains 7 actually measured forest parameters, namely east-west crown width, north-south crown width, tree height, point cloud density, crown volume, timber volume and breast diameter; the average value matrix of the forest parameters in the training samples is $avg = [avg_1, \ldots, avg_7]$, and $Q_i$ is obtained from the internal relation between $z_i$ and $avg$ in the training samples, as shown in formula (12).
wherein $z_i^p$ represents the $p$-th forest parameter from left to right in the forest parameter matrix $z_i$, and $avg_p$ represents the $p$-th average value from left to right of the average value matrix $avg$, $p \in \{1,2,3,4,5,6,7\}$; the cosine similarity $Sim(z_i, avg)$ and the Euclidean distance $Dist(z_i, avg)$ both measure the internal relation between the parameters of the $i$-th rubber tree and the average value matrix $avg$; $\tau$ is an attention weight scaling factor updated by the pigeon optimization module, $i = 1, 2, \ldots, n$.
The connection weight w is continuously updated by a back propagation manner, and the updating manner is shown in a formula (13).
wherein $\eta$ ($0 < \eta < 1$) represents the learning efficiency and $t$ is the current iteration number.
Meanwhile, an initial attention weight scaling factor $\tau$ is given; each time the pigeon optimization module updates $\tau$, the forest parameter prediction performance of the fuzzy depth network changes accordingly.
2.5, Pigeon group optimization module:
This module optimizes the key parameters $k$, $\alpha$ and $\tau$ of the fuzzy deep learning network, where $k$ is the total number of fuzzy subsets of the adaptive fuzzy layer, $\alpha$ is the variance scaling factor of formula (8), and $\tau$ is the attention weight scaling factor of formula (12). A combination of values of $k$, $\alpha$ and $\tau$ is called a model parameter combination $\delta[k, \alpha, \tau]$; within the parameter search space, $k$ ranges from 0 to 100 and is an integer, $\alpha$ ranges from 0 to 30, and $\tau$ ranges from 0 to 10. The pigeon optimization module is initialized with $L$ groups of model parameters $\delta_l[k_l, \alpha_l, \tau_l]$ ($l = 1, 2, \ldots, L$), i.e. the module optimizes over $L$ groups of model parameters for at most $t_{\max}$ iterations. At the $t$-th iteration the values of the model parameters are updated according to the search strategy; after the iterations end, the optimal parameter combination among the $L$ groups is selected and transmitted into the fuzzy depth network to complete the training of the fuzzy depth network.
First, updating the parameters $k$ and $\alpha$ of the adaptive fuzzy layer changes the output values of that layer; secondly, updating the attention coefficient $\tau$ affects the calculation of the attention weights $Q$, and thereby the iteration of the loss function, i.e. formula (10), over the connection weights $w$; finally, the predicted values of the fuzzy deep learning network change with the layer outputs and the connection weights $w$. In this embodiment, the fitness of a model parameter combination is calculated from the predicted values and measured values of the fuzzy depth model trained under that combination, as shown in formula (14); the lower the fitness value, the closer the parameter set is to the optimal model parameter combination.
wherein the value of the $l$-th group of model parameter combinations at the $t$-th iteration is $\delta_l^t[k_l^t, \alpha_l^t, \tau_l^t]$, $t$ is the current iteration number of the pigeon flock, $l = 1, 2, \ldots, L$, $i = 1, 2, \ldots, n$, $n$ is the total number of training samples; $y_1^i$ and $y_2^i$ are the breast diameter and volume predicted values of the fuzzy depth network, and $\hat{y}_1^i$ and $\hat{y}_2^i$ are the corresponding measured values.
The search strategy of the model parameter combinations is divided into two stages according to the iteration number: the first stage is used from the start, and the second stage is entered when the iteration number reaches 80% of the maximum iteration number. The specific search strategy is as follows.
In the first stage of the model parameter combination search strategy, this embodiment introduces a cosine iteration weight term and a population mutation idea to help individuals jump out of local optima, so that an optimal model parameter combination can be given and the fuzzy depth network trained. The cosine iteration weight term emphasizes global search capability in the early iterations and stronger local search capability in the later iterations, which meets the practical iteration requirement, as shown below.
wherein $\omega(t)$ is the cosine iteration weight term, $t_{\max}$ is the maximum iteration number of the pigeon flock, $\epsilon$ is an extremely small constant, and $\mathrm{rand}(0,1)$ is a random number in $[0,1]$; the value of the $l$-th group of model parameter combinations at the $t$-th iteration is $\delta_l^t$, and $k_l^t$ must be rounded to an integer when its value is updated; $\delta_{best}^t$ is the array of the optimal model parameter combination at the $t$-th iteration, $l = 1, 2, \ldots, L$.
To further enhance the ability of the PIO algorithm to escape local optima, when the fitness of the optimal parameter set among the $L$ groups of model parameter combinations remains unchanged for a long time, the forest parameter prediction model of this embodiment is considered to have fallen into a local extremum. At this time the $L$ groups of model parameter combinations are sorted in descending order of fitness, $l = 1, 2, \ldots, L$, and the groups with higher fitness are subjected to population mutation in the following manner.
wherein $\delta_l'$ is the population-mutation update of $\delta_l$; the mutated $k_l'$ is rounded to an integer and, if it lies within the range 0-100, the value of $k_l$ is updated, otherwise it is not updated; in the same manner, $\alpha_l'$ and $\tau_l'$ are accepted only if they lie within the ranges 0-30 and 0-10 respectively, otherwise the corresponding values are not updated; $\mathrm{rand}(-1,1)$ is a random number in the range $[-1,1]$.
In the second stage of the model parameter combination search strategy, the update strategy of the model parameters is adjusted: let $\delta_c(t)$ be the center position of all model parameter combinations at iteration $t$; each parameter combination then flies toward this center position, as shown in formula (17).
wherein $\delta_l^{temp}(t)$ is the temporary value of the $l$-th group of model parameters at iteration $t$; if its fitness is lower than that of $\delta_l(t-1)$, the $l$-th group of model parameters is updated, i.e. $\delta_l(t) = \delta_l^{temp}(t)$, otherwise it is not updated, i.e. $\delta_l(t) = \delta_l(t-1)$, $l = 1, 2, \ldots, L$. After each iteration, part of the model parameter combinations with higher fitness values are discarded and the number of model parameter groups $L$ is updated, so that better model parameter combinations are retained while the convergence of the algorithm is ensured. The iteration ends when the iteration number reaches the maximum iteration number $t_{\max}$ or only one group of model parameter combinations remains; at that moment, the parameter combination with the lowest fitness among the $L$ groups, namely the optimal values of the parameters $k$, $\alpha$ and $\tau$, is output and transmitted into the fuzzy depth network to complete training.
3. Results and discussion:
3.1, Training and testing results of the pigeon optimization module:
Training and testing of the forest parameter prediction model were carried out on a Windows 10 64-bit server equipped with an AMD Ryzen 7 4800H CPU @ 2.9 GHz processor and 16 GB of RAM. In the forest parameter prediction model constructed in this embodiment, the maximum iteration number of the weight updating layer is set to 200, the learning efficiency η is set to 8, the total number of model parameter combinations L of the pigeon optimization module is set to 32, and the maximum iteration round t_max is set to 50.
The 32 groups of model parameters of the initial round take random values uniformly distributed in the parameter search space. As the pigeon optimization module iterates, the values of k, α and τ in the model parameter combinations are continuously updated; the optimal model parameter combinations of the training sets of the different varieties at different iteration stages are shown in Table 2, and the 32 groups of parameters of each variety gradually converge toward an optimal array, which shows that the pigeon optimization module can adaptively learn the optimal model parameter combination of each rubber tree variety, as shown in fig. 6. Fig. 6 (a) shows the iterative optimization results of the parameters of the pigeon optimization module for the hot reclamation 72059 rubber tree network. Fig. 6 (b) shows the results for the hot reclamation 525 rubber tree network. Fig. 6 (c) shows the results for the hot reclamation 628 rubber tree network. Fig. 6 (d) shows the results for the PR107 rubber tree network.
Table 2 shows the optimal results of model parameter combinations of the pigeon optimization module at different stages:
The pigeon cluster module searches the optimal model parameter combination of the training set, and 32 groups of model parameters are arranged in each round in the iterative process, wherein the optimal model parameter combination of the round is the lowest in fitness, and the fitness curves formed by the optimal model parameter combinations of different iterative stages are shown in fig. 7. The adaptability of the optimal model parameter combination shows a descending trend, which indicates that the forest parameter prediction model of the embodiment is a global optimization process. The fitness curves of different varieties of training sets are obviously reduced in the first 30 epochs, which indicates that the parameters of the forest parameter prediction model are rapidly close to the optimal array. The model parameters are sequentially transmitted into corresponding modules of the fuzzy depth network, and the correlation coefficient in the neural network is adjusted on the basis that the fuzzy depth network adaptively builds a network structure according to training samples, so that the optimal parameter set in the initial turn can also reach a better fitness value. After 50 epochs, the fitness values of the training samples of hot reclamation 72059, hot reclamation 525, hot reclamation 628 and PR107 are converged to 0.025, 0.022, 0.016 and 0.015 respectively, which indicates that the forest parameter prediction model constructed in the embodiment has the capability of accurate parameter prediction.
After the pigeon optimization module has confirmed the optimal model parameter combinations of the different varieties, they are transmitted into the fuzzy depth network to complete the training of the forest parameter prediction models of the different varieties. At this time, in the attention-based weight updating layer, the loss value E during training behaves as shown in fig. 8. Fig. 8 (a) is the iteration curve of the loss value of the PR107 rubber trees during back propagation. Fig. 8 (b) is that of the hot reclamation 72059 rubber trees. Fig. 8 (c) is that of the hot reclamation 525 rubber trees. Fig. 8 (d) is that of the hot reclamation 628 rubber trees. To improve the training efficiency of the prediction model in this embodiment, mini-batch gradient descent (Mini-Batch Gradient Descent) is adopted in the attention-based weight updating layer for back propagation, which causes local oscillation of the regression loss value. However, as the learning process iterates, the loss value E decreases overall, indicating that the fuzzy depth network of this embodiment converges well. After 100 iterations, the loss values E of PR107, hot reclamation 72059, hot reclamation 525 and hot reclamation 628 converge to 0.00176, 0.00349, 0.00345 and 0.00072 respectively, indicating that the fuzzy depth network constructed in this embodiment has good tree parameter prediction capability.
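The mini-batch back propagation mentioned above can be sketched as follows, reusing the assumed attention-weighted squared loss and gradient from the earlier weight-update sketch; the batch size, shuffling and learning rate are illustrative choices:

import numpy as np

def train_minibatch(G, Y, Q, eta=0.05, batch_size=32, epochs=100, seed=0):
    # Mini-batch gradient descent over the assumed attention-weighted squared loss.
    rng = np.random.default_rng(seed)
    n, k = G.shape
    W = rng.standard_normal((k, 2)) * 0.01
    for _ in range(epochs):
        order = rng.permutation(n)                         # shuffle the samples each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            pred = G[idx] @ W
            grad = G[idx].T @ (Q[idx][:, None] * (pred - Y[idx]))
            W -= eta * grad / len(idx)                     # per-batch updates cause the local oscillation noted above
    return W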
3.2, Comparison with existing methods:
Based on the prediction model of this embodiment and the conventional methods, the prediction results for the breast diameter and volume parameters of the rubber trees are shown in Table 3. The BP (Back Propagation) neural network is a method based on a multi-layer feed-forward neural network; the determination of its network structure depends on experience and trial and error, and its excitation functions are global and interfere with one another, so it easily falls into local minima. The RBF (Radial Basis Function) neural network, like BP, is suitable for establishing nonlinear models, but the local excitation function of RBF overcomes the mutual-interference problem of the global excitation function of BP; for a new training set, only the number of hidden-layer neuron nodes and the connection weights need to be changed, so the learning speed is greatly improved compared with the BP algorithm and convergence is easier to guarantee, which makes it easier for RBF to obtain good results. GRNN (General Regression Neural Network) is a radial basis function neural network which, compared with the traditional radial basis function neural network, adds a summation layer between the hidden layer and the output layer, so it has advantages over RBF when the sample data are few or unstable. However, RBF and GRNN neural networks usually determine the network structure by trial and error and empirical formulas and rely on prior experience, and they lack a mechanism for judging abnormal data in the training samples, which reduces their robustness. The method of this embodiment is based on a fuzzy depth network and can accurately predict a complex model; it provides a self-adaptive learning algorithm to determine the network structure, combines the pigeon optimization algorithm to search for the optimal parameters, which improves the effect of the self-adaptive algorithm, and adds an attention mechanism to judge abnormal data in the training samples. Table 3 shows the comparison of the four methods on breast diameter and volume prediction; the table shows that the method of this embodiment obtains better quantitative results on the three indexes of coefficient of determination (R²), root mean square error (RMSE) and mean absolute percentage error (MAPE), so the forest parameter prediction model of this embodiment further improves the prediction accuracy of the key parameters of the rubber forest.
Table 3 shows the predicted results of the forest parameters for different methods:
3.3, Analysis of the forest parameter prediction results:
After the fuzzy depth network has obtained the optimal model parameters through the pigeon optimization module, it adaptively establishes the forest parameter prediction model of each variety according to the training set of that rubber tree variety. Table 4 shows the actual measurements and the predicted values of this embodiment for the breast diameter and volume of the four rubber tree varieties hot reclamation 628, hot reclamation 525, hot reclamation 72059 and PR107. Meanwhile, the effectiveness of the method of this embodiment was quantitatively analysed by comparing the indexes R², RMSE and MAPE, and fig. 9 shows the comparison of the specific parameters.
Table 4 shows the comparison of the forest growth parameters obtained by the method of this example with the actual measured values:
Note: (F) denotes the actual measurement and (O) denotes the method of this embodiment.
FIG. 9 shows the prediction results for the breast diameter and volume parameters; the experimental points formed by the predicted and measured parameter values of the four rubber tree varieties are evenly distributed near the 45° regression line, and the predicted and measured values are in a linear relationship.
Fig. 9 (a) shows the comparison between the breast diameter predicted values obtained by the method of this embodiment and the measured values for the four rubber tree varieties. The results for hot reclamation 525 and hot reclamation 72059 are (R² = 92.24%, RMSE = 1.70 cm, MAPE = 5.08%) and (R² = 91.42%, RMSE = 1.75 cm, MAPE = 5.10%) respectively. Relative to the first two varieties, the model of this embodiment predicts the breast diameters of hot reclamation 628 and PR107 better, with (R² = 94.31%, RMSE = 1.44 cm, MAPE = 4.87%) and (R² = 93.87%, RMSE = 1.48 cm, MAPE = 5.03%). This is mainly because the trees in the hot reclamation 72059 rubber plantation show wind-damage inclination and adjacent rubber trees shade each other, so the point cloud data are acquired incompletely, which affects the final parameter prediction results. Hot reclamation 525 has a complex growth form, low branching positions and many dense branches, and the breast diameter parameters of different rubber trees vary considerably. In contrast, hot reclamation 628 and PR107 have strong wind resistance, do not lodge easily, have a simpler growth form with fewer branches, and their branch data are acquired more completely, so the point cloud quality is higher and the prediction accuracy is better.
Fig. 9 (b) shows the comparison between the predicted and measured volumes of the four rubber tree varieties. The RMSE of hot reclamation 72059 (R² = 91.25%, RMSE = 0.050 m³, MAPE = 6.06%) and hot reclamation 525 (R² = 90.14%, RMSE = 0.052 m³, MAPE = 8.19%) is significantly higher than that of hot reclamation 628 (R² = 93.88%, RMSE = 0.027 m³, MAPE = 5.02%) and PR107 (R² = 93.73%, RMSE = 0.028 m³, MAPE = 5.33%). This can be explained by the fact that hot reclamation 628 and PR107 have few branches, so under the same variety the volume parameters of different trees in the same forest land and under the same climatic conditions differ little; their crowns are also small, so they are little affected by occlusion from adjacent trees of the same variety, and the forest parameters are acquired more accurately.
4. Conclusion:
By establishing a forest parameter prediction model, the breast diameter and volume of single rubber trees are predicted from the partial forest parameters automatically acquired from the airborne laser point cloud. The prediction model combines the advantages of the fuzzy depth network, the attention mechanism and the pigeon-inspired optimization algorithm; it can establish the model adaptively according to the nonlinear relations among the forest parameters, the weight assignment for abnormal samples and the model parameter optimization strategy, is applicable to establishing most complex forest relations, and has good universality and robustness. Compared with the BP, RBF and GRNN prediction methods, the prediction model of this embodiment obtains better results in predicting breast diameter and volume, with RMSE of 1.59 ± 0.15 cm and 0.040 ± 0.013 m³ respectively. The experimental results show that the forest parameter prediction model of this embodiment achieves good prediction performance on several rubber tree varieties: the MAPE of the predicted breast diameters of the four varieties fluctuates around an average of 5.02%, and the MAPE of the predicted volumes around an average of 6.15%; single-tree parameters of an artificial forest can thus be obtained effectively, and the model outperforms traditional prediction models on the experimental samples. In tree parameter prediction research, the differences in performance among the rubber tree varieties can be traced to the influence of the natural environment on rubber tree form, the incomplete acquisition of airborne point clouds caused by mutual occlusion among trees, and the growth morphology of the different varieties. Based on the combination of the fuzzy depth network and several artificial intelligence algorithms, the forest parameter prediction model automatically completes its training for rubber trees of different varieties and is suitable for predicting the key parameters of rubber trees of the same variety in the same forest land, which is one of the applications of artificial intelligence technology in the field of forestry.