Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an application-aware self-evolution tuning method and system for configuration parameters, which can quickly train an I/O performance prediction model for unknown applications, effectively improve the accuracy of the I/O performance prediction model, and thereby improve the I/O performance of a storage system.
In order to solve the technical problems, the invention adopts the following technical scheme:
An application-aware self-evolution optimization method for configuration parameters comprises the following steps:
Acquiring I/O characteristic parameters of all applications, training an I/O performance prediction model with the I/O characteristic parameters of known applications to obtain the I/O performance prediction models of the known applications, and performing secondary training on the I/O performance prediction models of the known applications with a transfer learning method to obtain the I/O performance prediction models of unknown applications;
Adding the I/O characteristic parameters of an application to be tuned into the corresponding parameter search space, iteratively obtaining the current round of parameter configuration of all the I/O characteristic parameters in the parameter search space, transmitting the current round of parameter configuration to a search engine to generate the next round of parameter configuration, and simultaneously transmitting the current round of parameter configuration to the corresponding I/O performance prediction model to obtain the corresponding performance prediction result, until the maximum number of iterations or a specified duration is reached;
and selecting an optimal performance prediction result from the performance prediction results of each round, and configuring parameters corresponding to the optimal performance prediction result as a tuning result of the I/O characteristic parameters of the application to be tuned.
Further, the method for training the I/O performance prediction model by using the I/O characteristic parameters of the known application and the method for performing secondary training on the I/O performance prediction model of the known application by using the transfer learning method both comprise the step of data collection, and specifically comprise the steps of:
Selecting specified I/O characteristic parameters from all the I/O characteristic parameters and adding them into the corresponding parameter search space;
Sampling the parameter configuration of the I/O characteristic parameters in the parameter search space by using a Latin hypercube sampling algorithm to generate different parameter configurations;
Executing the corresponding application program under different parameter configurations to obtain performance results corresponding to the different parameter configurations, and adding the different parameter configurations and the performance results corresponding to the different parameter configurations into the data set.
Further, when the specified I/O characteristic parameters are selected from all the I/O characteristic parameters, the method specifically comprises sequentially selecting, based on expert experience, the I/O characteristic parameters in descending order of their influence on the I/O performance, until the number of selected I/O characteristic parameters meets the requirement.
Further, when the I/O performance prediction model is trained by using the I/O characteristic parameters of the known application, the method comprises the following steps:
Normalizing the I/O characteristic parameters in the data set;
Removing from the I/O characteristic parameters of the data set, by a filtering method, the I/O characteristic parameters whose influence on the I/O performance is smaller than a specified threshold, performing feature selection on the remaining I/O characteristic parameters by recursive feature elimination with cross-validation, and finally selecting several I/O characteristic parameters with the largest influence on the I/O performance;
Training a stacking model with the selected I/O characteristic parameters, wherein the stacking model comprises individual learners and a meta learner, and the individual learners comprise random forest 1, random forest 2, gradient boosting algorithm 1, gradient boosting algorithm 2, KNN, and a decision tree; after the meta learner outputs a result, cross-validation is used to verify the result, and if the accuracy meets expectations, the trained stacking model is saved as the I/O performance prediction model of the known application.
Further, when the transfer learning method is used for performing secondary training on the I/O performance prediction model of a known application, the method comprises the following steps:
Normalizing the I/O characteristic parameters in the data set;
Comparing the similarity of the I/O characteristic parameters of the unknown application and the I/O characteristic parameters of all known applications, and selecting an I/O performance prediction model of the known application with the highest similarity;
And taking the data set of the selected known application as a source domain, taking the data set of the unknown application as a target domain, iteratively updating the weights of the source domain samples and the target domain samples with a transfer learning algorithm, and training the I/O performance prediction model of the selected known application with the weight-updated source domain samples and target domain samples until the accuracy meets expectations or the maximum number of iterations is reached, thereby obtaining the I/O performance prediction model of the unknown application.
Further, the specific steps of the transfer learning algorithm include:
Combining the source domain sample and the target domain sample to obtain a combined sample, and setting initial weights of all samples in the combined sample;
the combined sample is used as a training sample to iteratively train the I/O performance prediction model of the selected known application; in each iteration, the weights of the source domain samples in the combined sample are frozen, the AdaBoost.R2 algorithm is used to train the model and calculate an estimation error, and then the weights of all samples in the combined sample are updated according to the estimation error;
and selecting the model with the minimum estimation error as an I/O performance prediction model of the unknown application.
Further, when the weights of all samples in the combined sample are updated according to the estimation error, the weight update formula of the source domain samples in the combined sample is:

$w_i^{k+1} = \frac{w_i^k \, \beta_k^{\, e_i^k}}{Z_k}, \quad 1 \le i \le n_s$

and the weight update formula of the target domain samples in the combined sample is:

$w_i^{k+1} = \frac{w_i^k}{Z_k}, \quad n_s < i \le n_s + n_t$

wherein $e_i^k$ represents the adjusted error of the i-th sample in the combined sample at the k-th iteration, $Z_k$ is the normalization coefficient, $\beta_k$ is selected so that the total weight of the target domain samples equals $\frac{n_t}{n_s + n_t} + \frac{k}{S-1}\left(1 - \frac{n_t}{n_s + n_t}\right)$, S is the number of iterations, $n_s$ represents the number of source domain samples in the combined sample, and $n_t$ represents the number of target domain samples in the combined sample.
Further, after adding the I/O characteristic parameters of the application to be tuned to the parameter search space, the method comprises the step of taking the midpoint values of all the parameters in the whole parameter search space as initial parameter configuration.
Further, when the present round of parameter configuration is transferred to the search engine to generate the next round of parameter configuration, the method includes:
And feeding back the parameter configuration of the round and the corresponding performance prediction result to a search engine, wherein the search engine guides the search process by using a designated heuristic search algorithm, and searches the parameter configuration of the next round according to the parameter configuration of the round and all previous rounds and the corresponding performance prediction result.
The invention also provides an application-aware self-evolution tuning system for configuration parameters, which comprises a microprocessor and a computer-readable storage medium connected with each other, wherein the microprocessor is programmed or configured to execute any of the above application-aware self-evolution tuning methods for configuration parameters.
Compared with the prior art, the invention has the advantages that:
The invention builds an application-aware I/O performance prediction model: for known applications, model pre-training is performed with the I/O characteristic parameters of the known applications; for unknown applications, secondary training is performed on the I/O performance prediction models of the known applications with a transfer learning method, thereby obtaining I/O performance prediction models adapted to different applications. The transfer learning method reduces the amount of unknown-application data required in the secondary training, which greatly saves model training time and computing resources.
When the configuration parameter search is carried out, the next round of parameter configuration is generated iteratively according to the current round of parameter configuration, and the I/O performance prediction model evaluates the performance of each round of parameter configuration. Because the I/O performance of the application under different I/O parameter configurations is predicted by the I/O performance prediction model, execution of the application program can be avoided, which greatly reduces the time required for performance tuning.
Detailed Description
The invention is further described below in connection with the drawings and the specific preferred embodiments, but the scope of protection of the invention is not limited thereby.
Example 1
The embodiment provides an application-aware self-evolution tuning method for configuration parameters, which is oriented to a fixed hierarchy of the parallel storage software stack, namely the application layer, the I/O layer, and the parallel file system layer; all HPC applications can use the method to tune configuration parameters and obtain a certain degree of I/O performance improvement.
As shown in fig. 1, the method of the present embodiment includes the steps of:
S1) constructing an application-aware I/O performance prediction model, namely acquiring I/O characteristic parameters of all applications, training an I/O performance prediction model with the I/O characteristic parameters of known applications to obtain the I/O performance prediction models of the known applications, and performing secondary training on the I/O performance prediction models of the known applications with a transfer learning method to obtain the I/O performance prediction models of unknown applications;
S2) self-evolving configuration parameter search, namely adding the I/O characteristic parameters of the application to be tuned into the corresponding parameter search space, iteratively obtaining the current round of parameter configuration of all the I/O characteristic parameters in the parameter search space, transmitting the current round of parameter configuration to a search engine to generate the next round of parameter configuration, and simultaneously transmitting the current round of parameter configuration to the corresponding I/O performance prediction model to obtain the corresponding performance prediction result, until the maximum number of iterations or a specified duration is reached;
S3) visual display of the tuning result, namely selecting the optimal performance prediction result from the performance prediction results of each round, taking the parameter configuration corresponding to the optimal performance prediction result as the tuning result of the I/O characteristic parameters of the application to be tuned, and visually displaying the performance corresponding to all parameter configurations together with the group of parameter configurations with the optimal performance.
Each step is specifically explained below.
Step S1 of the present embodiment, as shown in fig. 2, first performs model training for applications, in the face of the large number of configuration parameters of diversified applications and the multi-level storage software stack. As the HPC application field expands, new unknown applications may appear, so it is necessary to sense whether an application is a known application or an unknown application. For a known application, a system-level I/O performance prediction model of the application is constructed based on multi-level I/O mode parameters; for an unknown application, secondary training is performed based on the trained model of a known application and a transfer learning method to obtain the I/O performance prediction model of the unknown application. Because the I/O performance of the application under different I/O parameter configurations is predicted by the I/O performance prediction model, execution of the application program can be avoided, which greatly reduces the time required for performance tuning.
High performance computing applications are complex and diverse, producing large volumes of data in scientific research and production activities. Existing I/O optimization methods based on traditional manual analysis or heuristic strategies do not fully consider the I/O differences of complex, diversified applications, especially the perception of new unknown applications. Therefore, step S1 of the present embodiment studies the construction of an application-aware I/O performance prediction model, and obtains the I/O performance prediction model of an unknown application through secondary training based on the I/O performance prediction model of a known application and a transfer learning method. As shown in fig. 3, the step specifically comprises:
S11) detecting unknown applications, namely detecting, according to the input I/O characteristics, whether the input data comes from a known application or an unknown application;
S12) training of I/O performance prediction models of known applications:
For a known application, training the I/O performance prediction model is mainly divided into two parts, data collection and model training:
Data collection is very important for model training, and the accuracy of a model is greatly influenced by the quality of data, wherein the data collection comprises the following steps:
S121) selecting specified I/O characteristic parameters from all the I/O characteristic parameters and adding the specified I/O characteristic parameters into a corresponding parameter search space;
In the data collection stage, the parameters in the multi-level storage system with a larger influence on the I/O performance are first input as the I/O parameter search space; in this embodiment, based on expert experience, the I/O characteristic parameters are sequentially selected in descending order of their influence on the I/O performance until the number of selected I/O characteristic parameters meets the requirement, as shown in Table 1.
Table 1: I/O stack parameter introduction
S122) sampling the parameter configurations of the I/O characteristic parameters in the parameter search space with a Latin hypercube sampling (LHS) algorithm to generate different parameter configurations; how to sample the configuration space with an LHS algorithm to generate various parameter configuration sets is well known to those skilled in the art and is not repeated in this embodiment;
S123) after sampling is finished, executing the corresponding application program under the different parameter configurations to obtain the performance results corresponding to the different parameter configurations; in this embodiment, the bandwidth obtained by executing the corresponding application program under each parameter configuration is used as the performance result, and the different parameter configurations and their corresponding performance results are added into the data set.
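For illustration, a minimal Python sketch of the sampling of step S122) is given below, using the Latin hypercube sampler from scipy; the parameter names and bounds are hypothetical placeholders, and the real parameter search space comes from Table 1.

import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for a few I/O stack parameters (illustrative only),
# e.g., stripe count, stripe size in bytes, number of collective buffering nodes.
l_bounds = [1, 64 * 1024, 1]
u_bounds = [16, 16 * 1024 * 1024, 32]

sampler = qmc.LatinHypercube(d=len(l_bounds), seed=42)
unit_samples = sampler.random(n=100)                   # 100 points in [0, 1)^d
configs = qmc.scale(unit_samples, l_bounds, u_bounds)  # map into parameter ranges
configs = np.rint(configs).astype(int)                 # discretize integer parameters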
The model training phase comprises the following steps:
S124) carrying out normalization processing on the I/O characteristic parameters in the data set;
In the model training stage, the I/O characteristic parameters in the data set are first normalized. The raw data generally contain features with different value ranges, and excessively large differences in magnitude among features suppress the influence of parameters with smaller value ranges on the model results; without normalization, model training would therefore be biased;
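As a minimal sketch of step S124): the embodiment does not specify the normalization scheme, so the min-max scaler below is an assumption, with illustrative input data.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_raw = np.array([[4, 1048576, 8],
                  [1, 65536, 2]])      # example collected parameter configurations
scaler = MinMaxScaler()                # maps every feature into [0, 1]
X_norm = scaler.fit_transform(X_raw)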
S125) removing, with a filtering method, the I/O characteristic parameters whose influence on the I/O performance is smaller than a specified threshold from the I/O characteristic parameters of the data set, performing feature selection on the remaining I/O characteristic parameters with recursive feature elimination and cross-validation, and finally selecting several I/O characteristic parameters with the largest influence on the I/O performance;
Feature screening is necessary to improve the accuracy of the trained I/O performance prediction model. Since RFECV (recursive feature elimination with cross-validation) has a higher complexity and a larger computation cost, this embodiment first uses filtering methods (the chi-square test and the variance selection method) to screen out the I/O characteristic parameters with a small influence on the I/O performance, and then uses RFECV to perform feature selection and obtain several parameters with a high influence on the I/O performance. How to screen and filter with the chi-square test and the variance selection method, and how to perform feature selection with RFECV, are well known to those skilled in the art and are not repeated in this embodiment;
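A minimal sketch of step S125) with scikit-learn follows; the variance threshold, estimator, fold count, and placeholder data are illustrative assumptions (note that sklearn's chi2 filter targets classification, so the variance filter stands in for the filtering stage here).

import numpy as np
from sklearn.feature_selection import VarianceThreshold, RFECV
from sklearn.ensemble import RandomForestRegressor

# Illustrative placeholders for the normalized configurations and measured bandwidths.
rng = np.random.default_rng(0)
X_norm = rng.random((200, 12))
y_bandwidth = rng.random(200)

filt = VarianceThreshold(threshold=0.01)   # filtering stage: variance selection method
X_filtered = filt.fit_transform(X_norm)

rfecv = RFECV(estimator=RandomForestRegressor(n_estimators=100, random_state=0),
              step=1, cv=5, scoring="r2")  # recursive feature elimination + cross-validation
X_selected = rfecv.fit_transform(X_filtered, y_bandwidth)
print("parameters kept:", rfecv.n_features_)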
S126) training a stacking model by using the selected I/O characteristic parameters;
The stacking model framework is shown in fig. 4 and comprises individual learners and a meta learner. The raw data, after RFECV feature extraction, is passed to the stacking model as input; to prevent problems when training the model, a data inspection is performed first, and possible NaN and INF values in the data are deleted. The stacking model is mainly divided into Level 0 and Level 1: Level 0 is the individual learners, and Level 1 is the meta learner. The stacking model improves overall performance by combining the prediction results of a plurality of individual learners through one meta learner; the individual learners include random forest 1, random forest 2, gradient boosting algorithm 1, gradient boosting algorithm 2, KNN, and a decision tree, and the settings of random forest 1, random forest 2, gradient boosting algorithm 1, and gradient boosting algorithm 2 are shown in Table 2.
Table 2: Random forest and gradient boosting algorithm settings in the stacking model
Note that n_estimators represents the number of decision trees, max_features represents the maximum number of features each decision tree considers when splitting a node, max_samples represents the fraction of samples used to train each tree, random_state represents the random number seed, and n_jobs represents the number of jobs run in parallel.
Thus, even when the same algorithm is used, different models can be generated with different random number seeds, which increases random diversity, helps improve the generalization capability of the model, and reduces overfitting. After the meta learner outputs the results, cross-validation is used to verify them, and if the accuracy meets expectations, the trained stacking model is saved as the I/O performance prediction model of the known application.
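A minimal sketch of the stacking model of step S126) with scikit-learn follows, continuing from the feature-selection sketch above; the concrete hyperparameters of Table 2 are not reproduced here, so the values below and the ridge meta learner are illustrative assumptions.

from sklearn.ensemble import (StackingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

level0 = [  # individual learners; two seeds per algorithm for random diversity
    ("rf1", RandomForestRegressor(n_estimators=100, random_state=1, n_jobs=-1)),
    ("rf2", RandomForestRegressor(n_estimators=100, random_state=2, n_jobs=-1)),
    ("gb1", GradientBoostingRegressor(n_estimators=100, random_state=1)),
    ("gb2", GradientBoostingRegressor(n_estimators=100, random_state=2)),
    ("knn", KNeighborsRegressor(n_neighbors=5)),
    ("dt",  DecisionTreeRegressor(random_state=0)),
]
model = StackingRegressor(estimators=level0, final_estimator=Ridge(), cv=5)
model.fit(X_selected, y_bandwidth)
scores = cross_val_score(model, X_selected, y_bandwidth, cv=5, scoring="r2")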
S13) training of an I/O performance prediction model of an unknown application:
For an unknown application, training the I/O performance prediction model is likewise divided into two parts, data collection and model training:
The data collection process of the unknown application is basically the same as steps S121) to S123). However, because some applications have long execution times and data collection is slow, for an unknown application without a large amount of historical data, this embodiment uses the transfer learning method to perform secondary training to obtain the I/O performance prediction model of the unknown application, so the collected data of the unknown application only needs to be half the data volume of a known application; that is, during the data collection of the unknown application, the Latin hypercube sampling (LHS) algorithm samples the parameter configurations of the I/O characteristic parameters in the parameter search space to generate half as many parameter configurations as are generated in the same step for a known application.
The model training phase comprises the following steps:
S131) carrying out normalization processing on the I/O characteristic parameters in the data set; after the data of the unknown application is collected, the I/O characteristics of the unknown application are normalized so that its data features have a similar range distribution;
S132) comparing the similarity between the I/O characteristic parameters of the unknown application and the I/O characteristic parameters of all known applications, and selecting the I/O performance prediction model of the known application with the highest similarity; according to the I/O characteristics of the unknown application, the known-application I/O performance prediction model closest to the unknown application is selected by comparing the similarity of their I/O characteristics, so as to shorten the secondary training time of the I/O performance prediction model of the unknown application;
S133) taking the data set of the selected known application as the source domain, taking the data set of the unknown application as test data (also called the target domain), iteratively updating the weights of the source domain samples and the target domain samples with a transfer learning algorithm, and training the I/O performance prediction model of the selected known application with the weight-updated source domain samples and target domain samples until the accuracy meets expectations or the maximum number of iterations is reached, thereby obtaining the I/O performance prediction model of the unknown application.
At the beginning of the transfer learning algorithm, the weights of the training data are initialized, and then the whole iterative process is carried out. In each iteration cycle, a regression model can be built on the test data using a base learner (e.g., Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), or Decision Tree Regression (DTR)) and the current weight distribution; the model sets new sample weights based on the result of the previous iteration. The transfer learning algorithm iteratively adjusts the weights of the instances in the source domain whose distribution differs from that of the target domain, thereby reducing the differences between the data distributions. If source domain samples differ significantly from the target domain samples, these samples are given a lower weight (reduced impact relative to other source domain samples), while instances similar to the target domain distribution are weighted increasingly in the iterations (increased impact relative to other source domain samples). After multiple iterations and updates, among the source domain (known application) samples, those similar to the unknown application receive more weight, and those with larger differences from the unknown application's data distribution receive less weight. Finally, after the accuracy meets expectations or the maximum number of iterations is reached, the finally trained I/O performance prediction model is saved as the I/O performance prediction model of the unknown application and used for the self-evolving configuration parameter search of step S2.
In this embodiment, the transfer learning algorithm gradually reduces the sample weights of the source domain and increases the sample weights of the target domain, so as to gradually improve the adaptability of the model to the target domain, and finally selects the model with the smallest error as the I/O performance prediction model of the unknown application; the specific steps are given below, followed by a code sketch of the algorithm.
In this embodiment, the specific steps of the transfer learning algorithm include:
First, define the source domain (known application data set) samples $D_s$, with labeled training set $D_{s\_train} = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, and the target domain (unknown application data set) samples $D_{t\_train} = \{(x_j^t, y_j^t)\}_{j=1}^{n_t}$, where $x_i^s$ and $x_j^t$ are the feature vectors of the I/O parameters, $y_i^s$ and $y_j^t$ are the I/O performance values, and $n_s$ and $n_t$ are the numbers of samples in the source domain and target domain training sets, respectively;
Then, combine the source domain samples and the target domain samples to obtain the combined sample $D = D_{s\_train} \cup D_{t\_train}$ with $(n_s + n_t)$ samples, and set the initial weight of every sample in the combined sample to $w_i^1 = \frac{1}{n_s + n_t}$;
The combined sample is used as the training sample to iteratively train the I/O performance prediction model of the selected known application, and the weights of the source domain samples and the target domain samples are adjusted iteratively to improve the transfer learning effect. In each iteration, the weights of the source domain samples in the combined sample are frozen, the AdaBoost.R2 algorithm is used to train the model and calculate an estimation error $\epsilon_k$, and then the weights of all samples in the combined sample are updated according to the estimation error. When the weights of all samples in the combined sample are updated according to the estimation error, the weight update formula of the source domain samples in the combined sample is:

$w_i^{k+1} = \frac{w_i^k \, \beta_k^{\, e_i^k}}{Z_k}, \quad 1 \le i \le n_s$

and the weight update formula of the target domain samples in the combined sample is:

$w_i^{k+1} = \frac{w_i^k}{Z_k}, \quad n_s < i \le n_s + n_t$

wherein $e_i^k$ represents the adjusted error of the i-th sample in the combined sample at the k-th iteration, $Z_k$ is the normalization coefficient, $\beta_k$ is selected so that the total weight of the target domain samples equals $\frac{n_t}{n_s + n_t} + \frac{k}{S-1}\left(1 - \frac{n_t}{n_s + n_t}\right)$, S is the number of iterations, $n_s$ represents the number of source domain samples in the combined sample, and $n_t$ represents the number of target domain samples in the combined sample;
Finally, the model with the smallest estimation error is selected as the I/O performance prediction model of the unknown application.
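The code sketch referenced above follows: a minimal Python implementation of this two-stage TrAdaBoost.R2-style loop under the stated weight-update rules. The base learner, its depth, and the iteration counts are illustrative assumptions, not the embodiment's fixed choices.

import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

def transfer_train(Xs, ys, Xt, yt, S=10):
    """Xs/ys: source domain (known application); Xt/yt: target domain."""
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.full(ns + nt, 1.0 / (ns + nt))       # initial uniform weights w_i^1
    models, errors = [], []
    for k in range(S):
        # Inner AdaBoost.R2 fit on the weighted combined sample (a stand-in for
        # the embodiment's inner learner, run with the source weights frozen).
        model = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
                                  n_estimators=50, loss="linear")
        model.fit(X, y, sample_weight=w)
        # Adjusted error e_i^k of every sample (linear loss, normalized to [0, 1]).
        err = np.abs(model.predict(X) - y)
        e = err / err.max() if err.max() > 0 else err
        # Weighted estimation error epsilon_k on the target domain only.
        eps = np.sum(w[ns:] * e[ns:]) / np.sum(w[ns:])
        models.append(model)
        errors.append(eps)
        # Binary-search beta_k so that the total target weight follows the
        # schedule n_t/(n_s+n_t) + k/(S-1) * (1 - n_t/(n_s+n_t)).
        goal = nt / (ns + nt) + (k / max(S - 1, 1)) * (1 - nt / (ns + nt))
        lo, hi = 0.0, 1.0
        for _ in range(50):
            beta = (lo + hi) / 2
            w_new = w.copy()
            w_new[:ns] *= beta ** e[:ns]        # down-weight dissimilar source samples
            frac = w_new[ns:].sum() / w_new.sum()
            if frac < goal: hi = beta           # need more source down-weighting
            else: lo = beta
        w = w_new / w_new.sum()                 # Z_k normalization
    return models[int(np.argmin(errors))]       # model with the smallest estimation error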
In summary, step S1 of the present embodiment uses regression techniques to build the I/O performance prediction model, designs and extracts the parameter features related to the I/O performance, and trains the model to predict application performance. AdaBoost.R2 is a variant of the AdaBoost algorithm specifically designed for regression problems. For unknown applications, this embodiment improves a transfer learning algorithm based on the data characteristics and I/O performance prediction models of known applications, combining the advantages of TrAdaBoost and AdaBoost.R2 to solve transfer learning for regression; through transfer learning, an I/O performance prediction model can be established for an unknown application by collecting only a small amount of unknown-application data, which greatly saves model training time and computing resources. The algorithm uses the source domain data (the known application data set, i.e., data similar to but not identical to the target domain) to enhance learning on the target domain data (the unknown application data set), and a high-precision I/O performance prediction model can be obtained with it.
In step S2 of the present embodiment, as shown in fig. 5, the parameter configuration is updated in a self-evolving manner with the I/O performance prediction model, which learns the structure of the parameter space and the performance of the corresponding parameters. The performance is predicted by the I/O performance prediction model without actually executing the application program, and the performance result is fed back to the heuristic search algorithm to guide the search process, which greatly reduces the time required for performance tuning. Specifically, the step comprises:
S21) adding the I/O characteristic parameters of the application to be optimized into a parameter search space, taking the parameter search space as input, and initially taking the midpoint values of all parameters in the whole parameter search space as initial parameter configuration;
S22) inputting the parameter configuration into the search engine for searching, executing the parameter configuration on the corresponding I/O performance prediction model, and obtaining the performance prediction result for performance evaluation; the search engine generates the parameter configuration to be evaluated in the next round according to the input parameter configuration and its corresponding performance prediction result; after one round of execution is finished, whether the current search has reached the stop condition set by the user (maximum search time or maximum number of iterations) is checked, and if not, the parameter configuration searched in the previous round is transmitted to the search engine and the I/O performance prediction model again for a new round of search.
In this embodiment, when the present round of parameter configuration is transferred to the search engine to generate the next round of parameter configuration, the method includes:
The current round of parameter configuration and the corresponding performance prediction result are fed back to the search engine, which guides the search process with a designated heuristic search algorithm. In this embodiment, in order to improve the search efficiency in the huge parameter space, several parameter search algorithms were compared, including the Bayesian optimization algorithm, the TPE algorithm, a genetic algorithm, and random search. The comparison showed that the Bayesian optimization algorithm performs better than the other algorithms, so it was selected as the search engine;
searching the parameter configuration of the next round according to the parameter configuration of the round and all previous rounds and the corresponding performance prediction result by a Bayesian optimization algorithm, and specifically comprising the following steps:
S221) updating the data set, and adding the parameter configuration of the present round and the performance prediction result thereof to the history data set. The data set comprises parameter configuration of all previous rounds and corresponding performance prediction results;
S222) training a proxy model, typically a Gaussian process (GP) model, with the updated data set; the proxy model estimates the objective function value (the I/O performance prediction model in this embodiment) and its uncertainty for each point in the parameter space;
S223) defining an acquisition function, wherein an acquisition function is selected, such as Expected Improvement (EI), Probability of Improvement (PI), or Upper Confidence Bound (UCB); the input of the acquisition function is a parameter configuration, and the output is an acquisition value;
S224) calculating the parameter configuration of the next round, namely, finding the parameter configuration when the acquisition value of the acquisition function is maximum in the parameter space, and taking the parameter configuration as the parameter configuration of the next round.
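A minimal sketch of the loop of steps S221) to S224) follows, starting from the midpoint configuration of step S21); predict_io_performance is a hypothetical stand-in for the trained I/O performance prediction model, and the candidate count, kernel, and EI acquisition are illustrative choices (the embodiment only requires a GP proxy and one of EI/PI/UCB).

import numpy as np
from scipy.stats import norm, qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(Xcand, gp, best_y):
    """EI acquisition: how much each candidate is expected to beat best_y."""
    mu, sigma = gp.predict(Xcand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma              # maximizing predicted bandwidth
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_search(bounds, predict_io_performance, n_rounds=50):
    lb, ub = np.array(bounds).T
    X = [(lb + ub) / 2.0]                  # S21) initial config: midpoint of the space
    y = [predict_io_performance(X[0])]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    sampler = qmc.LatinHypercube(d=len(bounds), seed=0)
    for _ in range(n_rounds):              # S221) the data set grows each round
        gp.fit(np.array(X), np.array(y))   # S222) train the GP proxy model
        cand = qmc.scale(sampler.random(1024), lb, ub)
        ei = expected_improvement(cand, gp, max(y))   # S223) acquisition values
        x_next = cand[np.argmax(ei)]       # S224) config maximizing the acquisition
        X.append(x_next)
        y.append(predict_io_performance(x_next))      # model-predicted bandwidth
    best = int(np.argmax(y))
    return X[best], y[best]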
In summary, in step S2 of the present embodiment, the I/O characteristic parameters are selected from the parameter analysis results of the I/O performance prediction model building process. After the parameter search space is determined, the search starts from the midpoint values of all the parameters in the whole parameter search space as the initial configuration; this configuration is transmitted to the Bayesian optimization search engine to generate the next round of parameter configuration, and is also transmitted to the I/O performance prediction model to obtain the predicted bandwidth result corresponding to the current round of parameters. While the maximum search time or the maximum number of search rounds has not been reached, the previously generated next round of parameter configuration is iterated and the I/O performance is predicted with the I/O performance prediction model. Because this mode of guiding the search process with a heuristic search algorithm does not need to execute the application program to obtain the real I/O performance when evaluating each group of parameter configurations, the search for the optimal parameters is greatly accelerated, the time for tuning the I/O stack parameters is greatly reduced, and in most cases one iteration round can be completed in less than 1 s. When the method is used, the trained I/O stack parameter performance prediction model needs to have high accuracy; otherwise, the feedback to the heuristic search algorithm will be biased, affecting the search for the optimal parameter configuration.
In step S3 of this embodiment, in order to facilitate the user to simply and quickly find the tuning result, that is, the parameter configuration with the optimal performance, a visualization module based on PyQt5 is designed. The overall effect is shown in fig. 6, and the specific workflow is as follows:
S31) during the execution of step S2, storing each group of parameter configurations and the corresponding predicted bandwidth (i.e., the performance prediction result) into a result.txt file;
S32) after the tuning of step S2 is finished, starting the visualization module to read all contents of the result.txt file, and then storing each parameter configuration (each parameter value) and its corresponding I/O bandwidth result as a key-value pair in the variable task_info;
S33) calling a custom display function to display all key-value pairs in task_info on a more intuitive line graph, with the abscissa labeled in the form Set1, Set2, and so on;
S34) finally traversing the task_info variable to find the maximum I/O bandwidth, which corresponds to the optimal group of parameter configurations found. For ease of presentation to the user, the visualization module outputs this group of parameter configurations into a table.
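A minimal sketch of the result-processing part of steps S31) to S34) follows; the line format of result.txt is an assumption, and the PyQt5 widgets and plotting are omitted.

def load_tuning_results(path="result.txt"):
    # One "<parameter configuration>;<predicted bandwidth>" record per line (assumed format).
    task_info = {}
    with open(path) as f:
        for line in f:
            config, bandwidth = line.strip().rsplit(";", 1)
            task_info[config] = float(bandwidth)
    return task_info

task_info = load_tuning_results()
best_config = max(task_info, key=task_info.get)  # S34) configuration with the maximum I/O bandwidth
print(best_config, "->", task_info[best_config])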
Example 2
The present embodiment proposes an application-aware self-evolution tuning system, including a microprocessor and a computer-readable storage medium connected to each other, where the microprocessor is programmed or configured to execute the application-aware self-evolution tuning method of the first embodiment.
In this embodiment, the microprocessor includes the following functional modules:
The model construction module is used for constructing the application-aware I/O performance prediction model, specifically acquiring I/O characteristic parameters of all applications, training an I/O performance prediction model with the I/O characteristic parameters of known applications to obtain the I/O performance prediction models of the known applications, and performing secondary training on the I/O performance prediction models of the known applications with a transfer learning method to obtain the I/O performance prediction models of unknown applications;
The automatic tuning module is used for carrying out the self-evolving configuration parameter search, specifically adding the I/O characteristic parameters of the application to be tuned into the corresponding parameter search space, iteratively obtaining the current round of parameter configuration of all the I/O characteristic parameters in the parameter search space, transmitting the current round of parameter configuration to a search engine to generate the next round of parameter configuration, and simultaneously transmitting the current round of parameter configuration to the corresponding I/O performance prediction model to obtain the corresponding performance prediction result, until the maximum number of iterations or a specified duration is reached;
The visualization module is used for visually displaying the tuning result, specifically selecting the optimal performance prediction result from the performance prediction results of each round, taking the parameter configuration corresponding to the optimal performance prediction result as the tuning result of the I/O characteristic parameters of the application to be tuned, and visually displaying the performance corresponding to all parameter configurations together with the group of parameter configurations with the optimal performance.
In summary, the parallel storage system has a huge configuration parameter space, and the I/O differences of complex and diverse applications further increase the difficulty of searching for the optimal configuration. Aiming at the challenges of complex and changeable application I/O characteristics and the huge configuration parameter space, the invention discloses an application-aware self-evolution tuning method and system for configuration parameters, which construct the I/O performance prediction model of an unknown application based on transfer learning, perform explanatory analysis on the trained I/O performance prediction model to guide parameter search tuning, accelerate the search over the huge parameter space with a Bayesian optimization algorithm, and realize a self-evolving configuration parameter search technique. The construction of the I/O performance prediction model is crucial to the whole scheme: although some methods have been proposed at home and abroad to train I/O performance prediction models, their accuracy remains low. For this problem, the invention uses RFECV feature screening and stacking model training to improve the accuracy of the I/O performance prediction model; experiments show that the stacking model performs better than XGBoost and the random forest algorithm, and the trained I/O performance prediction model has higher accuracy. In addition, collecting the data used to train an I/O performance prediction model takes a great deal of time; by the application-aware and transfer learning methods, the invention reduces the amount of data required to train the I/O performance prediction model of an unknown application, saving model training time and computing resources.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the present invention may occur to one skilled in the art without departing from the principles of the present invention and are intended to be within the scope of the present invention.