Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present invention.
It should be noted that in the description of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "first," "second," and the like in this specification are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The present invention will be further described in detail below with reference to the drawings and detailed description for the purpose of enabling those skilled in the art to better understand the aspects of the present invention.
The embodiment of the invention provides a method for evaluating server performance, which is described in detail below in connection with the execution flow of the method.
Specifically, fig. 1 is a flowchart of a method for evaluating server performance according to an embodiment of the present invention.
As shown in fig. 1, the method for evaluating the performance of the server includes the following steps:
In step S101, actual operational data of at least one component of the target server in a plurality of environments is collected.
It should be understood that, in the embodiment of the present invention, the at least one component of the target server may include, but is not limited to, a processor (Central Processing Unit, abbreviated as CPU), a memory, a hard disk, a network, a computing card, and the like; the present invention is not specifically limited thereto.
In addition, it should be noted that, in the embodiment of the present invention, the actual operation data may be collected by related monitoring tools, such as Zabbix, Nagios, etc., or by a performance monitor or a command-line tool; the present invention is not limited to specific examples.
In the actual execution process, the embodiment of the invention can acquire, in real time, the actual operation data of the processor, the memory, the hard disk, the network, the computing card, and the like in the target server in a plurality of environments. Here, a plurality of environments may be understood as different ambient temperatures at which the target server operates.
By way of example, the embodiment of the invention can adjust the ambient temperature and perform the performance test of the target server at different ambient temperatures, thereby obtaining different actual operation data.
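As a non-limiting illustration, the following Python sketch shows one way such actual operation data might be sampled using the psutil library, with the ambient temperature supplied by the test bench; the field names and the sampling window are assumptions made for illustration.

```python
# Illustrative only: one possible way to sample per-component operation data.
# Assumes psutil is installed; the ambient temperature is set on the test bench.
import time
import psutil


def sample_operation_data(ambient_temperature_c: float) -> dict:
    """Collect a single snapshot of component metrics (hypothetical schema)."""
    net_before = psutil.net_io_counters()
    time.sleep(1.0)                      # short window used to estimate network rate
    net_after = psutil.net_io_counters()

    freq = psutil.cpu_freq()
    return {
        "cpu_percent": psutil.cpu_percent(interval=None),
        "cpu_freq_mhz": freq.current if freq else None,
        "cpu_logical_cores": psutil.cpu_count(logical=True),
        "mem_total_bytes": psutil.virtual_memory().total,
        "mem_used_percent": psutil.virtual_memory().percent,
        "disk_used_percent": psutil.disk_usage("/").percent,
        "net_tx_bytes_per_s": net_after.bytes_sent - net_before.bytes_sent,
        "net_rx_bytes_per_s": net_after.bytes_recv - net_before.bytes_recv,
        "ambient_temperature_c": ambient_temperature_c,  # from the environment chamber
    }


if __name__ == "__main__":
    # Repeat the same sampling at several ambient temperatures set on the test bench.
    for temp in (20.0, 30.0, 40.0):
        print(sample_operation_data(ambient_temperature_c=temp))
```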
Optionally, in one embodiment of the present invention, acquiring actual operation data of at least one component of the target server in a plurality of environments includes acquiring at least one of first data of a processor in the target server, second data of a memory, third data of a hard disk, fourth data of a network, fifth data of a computing card, and sixth data of a different environment, and determining the actual operation data based on at least one of the first data, the second data, the third data, the fourth data, the fifth data, and the sixth data.
It should be understood that in the embodiment of the present invention, the actual operation data may include, but is not limited to, the first data of the processor, the second data of the memory, the third data of the hard disk, the fourth data of the network, the fifth data of the computing power card, the sixth data of different environments, and the like; the present invention is not specifically limited thereto.
Further, in the embodiment of the present invention, the first data may include, but is not limited to, the number of cores, the frequency, the number of threads, the thread frequency, and the like of the processor; the second data may include, but is not limited to, the capacity, the transmission rate, the bandwidth, and the like of the memory; the third data may include, but is not limited to, the capacity, the transmission rate, the bandwidth, and the like of the hard disk; the fourth data may include, but is not limited to, the transmission rate and the like of the network; the fifth data may include, but is not limited to, the computing power, the transmission rate, and the like of the computing power card; and the sixth data may include, but is not limited to, the ambient temperature, the ambient humidity, and the like. The present invention is not specifically limited in these respects.
In some embodiments, the embodiment of the present invention may determine the actual operation data by acquiring the first data of the processor, the second data of the memory, the third data of the hard disk, the fourth data of the network, the fifth data of the computing power card, and the sixth data of the different environments in the target server.
In some embodiments, the embodiment of the present invention may determine the actual running data by acquiring the first data of the processor, the second data of the memory, the third data of the hard disk, the fourth data of the network, and the fifth data of the computing card in the target server.
In some embodiments, the embodiment of the present invention may determine the actual running data by acquiring the first data of the processor, the second data of the memory, and the third data of the hard disk in the target server.
The embodiment of the invention can determine the actual operation data by acquiring the number of cores and the frequency of the processor, the capacity and transmission rate of the memory, the capacity and transmission rate of the hard disk, the transmission rate of the network, the computing power and transmission rate of the computing power card, the ambient temperature, and the like, which can be specifically set by a person skilled in the art according to the actual situation; the present invention is not particularly limited.
The actual operation data in the embodiment of the invention comprises various data, such as the first data of the processor, the second data of the memory, the third data of the hard disk, the fourth data of the network, the fifth data of the computing power card, the sixth data of different environments, and the like, so that multidimensional data association analysis can be realized and a full-dimensional performance profile can be constructed, further improving environmental adaptability and predictive maintenance capability.
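For illustration, the first through sixth data described above can be pictured as one record; the following Python sketch is a hypothetical structure whose field names and units are assumptions, not terminology fixed by the embodiment.

```python
# Illustrative grouping of the first through sixth data described above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ActualOperationData:
    # First data: processor
    cpu_cores: Optional[int] = None
    cpu_freq_ghz: Optional[float] = None
    cpu_threads: Optional[int] = None
    cpu_thread_freq_ghz: Optional[float] = None
    # Second data: memory
    mem_capacity_gb: Optional[float] = None
    mem_rate_gbps: Optional[float] = None
    mem_bandwidth_gbps: Optional[float] = None
    # Third data: hard disk
    disk_capacity_gb: Optional[float] = None
    disk_rate_mbps: Optional[float] = None
    disk_bandwidth_mbps: Optional[float] = None
    # Fourth data: network
    net_rate_gbps: Optional[float] = None
    # Fifth data: computing power card
    card_tflops: Optional[float] = None
    card_rate_gbps: Optional[float] = None
    # Sixth data: environment
    ambient_temperature_c: Optional[float] = None
    ambient_humidity_percent: Optional[float] = None
```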
Optionally, before the actual operation data is input into the pre-trained action model, the method further comprises: determining, based on the actual operation data, at least one of first structural information of a fully-connected neural network in the action model, second structural information of batch normalization, and function information of an activation function; connecting the fully-connected neural network, the batch normalization, and the activation function in series based on the at least one of the first structural information, the second structural information, and the function information to obtain a first action network in the action model; merging at least two first action networks connected in series with one first action network to obtain a first feature of the actual operation data; obtaining a second action network in the action model based on the first feature and the first action network; and constructing the action model based on the first action network and the second action network.
It will be appreciated that in the embodiment of the present invention, the action model may include, but is not limited to, a first action network and a second action network, and specifically may be set by those skilled in the art according to the actual situation, and the present invention is not limited thereto.
The first action network FBR network is shown in connection with fig. 2, where the FBR network may include, but is not limited to, a fully connected neural network (Fully Connected Neural Network, abbreviated as FNN), a batch normalization (Batch Normalization, abbreviated as BN), and an activation function Relu, and the FBR network is obtained by connecting the fully connected neural network, the batch normalization, and the activation function Relu in series.
The second action network, i.e., the FC network, is shown in connection with fig. 3, where the FC network may be obtained by merging the features of two FBR networks connected in series with the feature of one FBR network to obtain a first feature, and then processing the first feature through an FBR network.
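As a non-limiting illustration, the following PyTorch sketch assembles an FBR block (a fully connected layer, batch normalization, and ReLU connected in series) and a second action network that merges the output of two serial FBR blocks with that of one FBR block before a final FBR stage; the layer sizes and the use of concatenation as the merge operation are assumptions for illustration.

```python
# Illustrative sketch: FBR block = Linear -> BatchNorm1d -> ReLU in series,
# and a "second action network" that merges two serial FBR blocks with one
# parallel FBR block, then applies a final FBR block. Sizes are arbitrary.
import torch
import torch.nn as nn


class FBR(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.BatchNorm1d(out_dim),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SecondActionNetwork(nn.Module):
    """Merge (concatenate) the output of two serial FBR blocks with one FBR block."""

    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.serial = nn.Sequential(FBR(in_dim, hidden), FBR(hidden, hidden))
        self.parallel = FBR(in_dim, hidden)
        self.fuse = FBR(2 * hidden, out_dim)   # processes the merged first feature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        first_feature = torch.cat([self.serial(x), self.parallel(x)], dim=-1)
        return self.fuse(first_feature)


if __name__ == "__main__":
    model = SecondActionNetwork(in_dim=9, hidden=32, out_dim=16)
    print(model(torch.randn(4, 9)).shape)   # torch.Size([4, 16])
```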
Further, the embodiment of the invention can be described with reference to the execution flow of the action network model shown in fig. 4, the main content of which may be as follows: actual operation data is acquired; the actual operation data is processed through an FBR network to obtain initial features; the initial features are processed through an FBR network to obtain second features; the second features are processed through an FC network to obtain third features; the third features are processed through an FBR network to obtain fourth features; the fourth features are processed through an FC network to obtain fifth features; the fifth features are processed through an FBR network to obtain features F1 and, through another FBR network, features F2; the actual operation data is processed through an FBR network to obtain features F3; the features F3 are processed through an FBR network to obtain features F4; the features F4 and the features F2 are processed through an FBR network to obtain features F5; the features F5 and the features F1 are processed through an FBR network to obtain features F6; the features F6 and the features F3 are merged to obtain sixth features; and the sixth features are further processed and merged through additional FBR networks and FC networks to obtain the output of the action model.
In some embodiments, the embodiment of the present invention may determine, based on the actual operation data, the first structural information of the fully connected neural network, the second structural information of the batch normalization, and the function information of the activation function in the action model; connect the fully connected neural network, the batch normalization, and the activation function in series to obtain the first action network; merge the features of at least two first action networks connected in series with that of one first action network to obtain the first feature of the actual operation data, thereby obtaining the second action network; and further construct the action model.
For example, the embodiment of the present invention may construct a first action network in conjunction with fig. 2, construct a second action network in conjunction with fig. 3, and construct an action model in conjunction with fig. 4.
According to the embodiment of the invention, the action model is constructed in a modular manner, so that individual networks can be optimized or replaced independently, which reduces maintenance cost; feature fusion through serial connections meets low-latency inference requirements and ensures real-time performance; and the model structure is adjusted according to the actual operation data, realizing dynamic structural adjustment and improving decision quality.
Optionally, in one embodiment of the present invention, before inputting the actual operation data into the pre-trained action model, the method further includes: collecting training operation data of at least one component of the server in a plurality of environments; inputting the training operation data into the pre-established action model to output a second difference value between the training operation data and the corresponding theoretical operation data; calculating a training performance score of the at least one component in the corresponding environment by using the pre-established reward model based on the second difference value; detecting, based on the training performance score, whether the at least one component meets a preset performance condition; and, in a case where the at least one component does not meet the preset performance condition, training parameter information of the pre-established action model by using discrete estimation to obtain a trained action model and generating a component meeting the preset performance condition based on the trained action model.
In some embodiments, the embodiment of the present invention may train a pre-established action model before inputting the actual operation data into the pre-trained action model, thereby obtaining the trained action model. The embodiment of the present invention may be described with reference to fig. 5, where the process of training the pre-established action model may be as follows:
Step S501 is to collect training operation data of at least one component of a server in a plurality of environments.
In the embodiment of the invention, the training operation data may include: the number of cores of the processor; the frequency of the processor; the capacity of the memory; the transmission rate of the memory; the capacity of the hard disk; the transmission rate of the hard disk; the transmission rate of the network port; the computing power value of the computing power card; and the current temperature value.
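For illustration, the nine quantities listed above can be packed into a single input vector for the action model; the ordering and units in the sketch below are assumptions.

```python
# Illustrative: pack the training operation data into one model-input vector.
# The ordering and units are assumptions; only the nine quantities listed
# above are represented.
import numpy as np


def training_state_vector(cores, freq_ghz, mem_capacity_gb, mem_rate_gbps,
                          disk_capacity_gb, disk_rate_mbps, net_rate_gbps,
                          card_tflops, temperature_c) -> np.ndarray:
    return np.array([cores, freq_ghz, mem_capacity_gb, mem_rate_gbps,
                     disk_capacity_gb, disk_rate_mbps, net_rate_gbps,
                     card_tflops, temperature_c], dtype=np.float32)


if __name__ == "__main__":
    s = training_state_vector(64, 2.6, 512, 38.4, 7680, 7000, 25, 312, 35)
    print(s.shape)   # (9,)
```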
Step S502, inputting the training operation data into a pre-established action model to output a second difference value between the training operation data and the corresponding theoretical operation data.
Wherein, the embodiment of the invention can utilize the pre-established action model to calculate the difference value between the processor training data and the corresponding theoretical operation data, the difference value between the memory training data and the corresponding theoretical operation data, the difference value between the hard disk training data and the corresponding theoretical operation data, the difference value between the network training data and the corresponding theoretical operation data, and the difference value between the computing power card training data and the corresponding theoretical operation data.
Further, in an embodiment of the present invention, the processor difference value mainly tests the maximum threads and frequency of the processor (such as the turbo frequency, etc.); the memory difference value mainly tests the capacity and transmission rate of the memory (such as between the memory and the hard disk, between the memory and the accelerator card, etc.); the hard disk difference value mainly tests the capacity and transmission rate of the hard disk (such as between hard disks, between the hard disk and the memory, etc.); the network difference value mainly tests the transmission rate of the device network; and the computing power card difference value mainly tests the computing power value and the data transmission rate of the computing power card (such as between the memory and the accelerator card, between accelerator cards, etc.). The present invention is not particularly limited in these respects.
Step S503, calculating training performance scores by using a pre-established rewarding model.
The training performance scores of the processor, the memory, the hard disk, the network, the power card and the energy consumption under the corresponding environments can be calculated by utilizing the pre-established rewarding model based on the second difference value.
Step S504, judging whether the performance of the server meets a certain performance condition.
In the embodiment of the present invention, if a certain performance condition is satisfied, step S506 is performed, and otherwise, step S505 is performed. Wherein, certain performance conditions can be set by a person skilled in the art according to practical situations, and the invention is not particularly limited.
Step S505, retraining the pre-established action model with discrete estimation.
The embodiment of the invention can train the parameter information of the pre-established action model by using discrete estimation to obtain a trained action model, and generate a component meeting the preset performance condition based on the trained action model.
Step S506, determining a performance evaluation result of the server.
According to the embodiment of the invention, collecting training data under different environments improves scene coverage; comparing actual data with theoretical data quantifies the accuracy of the model prediction, which facilitates evaluation and optimization; and setting performance conditions ensures that the performance of the component reaches the expected standard, with an optimization flow triggered when the performance does not reach the standard, thereby improving the model effect.
Optionally, in one embodiment of the invention, training the parameter information of the pre-established action model by using discrete estimation comprises: training a probability distribution of action selection in the parameter information based on gradient information of the discrete estimation; training a parameter update amplitude in the parameter information based on learning rate information of the discrete estimation; training a parameter adjustment sensitivity in the parameter information based on logarithmic information of the discrete estimation; and training an action value in the parameter information based on an advantage function of the discrete estimation.
In some embodiments, the embodiments of the present invention may feed back the value of the pre-established reward model to the pre-trained action model through unbiased discrete estimation, and further retrain the parameter information of the pre-established action model, where the expression of the discrete estimation may be, but is not limited to:
θ_{t+1} = θ_t + α ∇_θ log π_θ(a_t | s_t) A(s_t, a_t),
wherein θ is the parameter of the discrete estimation, which determines the probability distribution of action selection and is updated by gradient ascent so that the policy trends toward actions with high advantage values; α is the learning rate (step size) of the discrete estimation, which controls the update amplitude of the parameters, where too high a learning rate can cause policy oscillation and too low a learning rate leads to slow convergence; ∇_θ log π_θ(a_t | s_t) is the gradient of the logarithmic probability of the policy function, which reflects the sensitivity of the current action to parameter adjustment and back-propagates the advantage signal to the policy network through the chain rule; and A(s_t, a_t) is the advantage function, which measures the relative value of action a_t in state s_t and whose expression may be, but is not limited to:
A(s_t, a_t) = Q(s_t, a_t) - V(s_t),
A(s_t, a_t) ≈ δ_t,
δ_t = r_t + γ V(s_{t+1}) - V(s_t),
wherein Q(s_t, a_t) is the action value function, i.e., the expected accumulation of the values obtained by executing action a_t in state s_t; V(s_t) is the state value function, i.e., the average expectation in state s_t when following the policy; δ_t is the temporal difference error, i.e., the instantaneous difference between the value function estimate and the actual value; r_t is the current reward value; γ is a hyperparameter (discount factor); and t denotes the current time step.
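To make the update rule above concrete, the following PyTorch sketch applies the policy-gradient step with a TD-error-based advantage estimate. The network sizes, discount factor, and learning rate are illustrative assumptions, and the snippet is a generic actor-critic style update rather than the exact training procedure of the embodiment.

```python
# Illustrative actor-critic style update: theta <- theta + alpha * grad(log pi) * A,
# with the advantage approximated by the TD error delta = r + gamma*V(s') - V(s).
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 9, 5            # assumed sizes
policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
value = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
policy_opt = torch.optim.SGD(policy.parameters(), lr=1e-3)   # alpha (step size)
value_opt = torch.optim.SGD(value.parameters(), lr=1e-3)
gamma = 0.99                                                  # discount hyperparameter


def update(state, action, reward, next_state):
    state, next_state = torch.as_tensor(state), torch.as_tensor(next_state)
    # TD error used as the advantage estimate A(s, a) ~= delta.
    with torch.no_grad():
        delta = reward + gamma * value(next_state) - value(state)
    # Policy step: gradient ascent on log pi(a|s) * A.
    log_prob = torch.log_softmax(policy(state), dim=-1)[action]
    policy_loss = -(log_prob * delta)
    policy_opt.zero_grad(); policy_loss.backward(); policy_opt.step()
    # Value step: regress V(s) toward r + gamma * V(s').
    value_loss = (reward + gamma * value(next_state).detach() - value(state)).pow(2)
    value_opt.zero_grad(); value_loss.backward(); value_opt.step()


if __name__ == "__main__":
    update(torch.randn(STATE_DIM), action=2, reward=1.0, next_state=torch.randn(STATE_DIM))
```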
Through discrete estimation, the embodiment of the invention can train the probability distribution to control the action selection process more accurately and improve the balance between exploration and exploitation, obtain more stable learning rate information to improve the stability of the model during training, and help the model adjust its parameters more finely, thereby helping the model make better decisions.
In step S102, the actual operation data is input into the pre-trained action model to output a first difference value between the actual operation data and the corresponding theoretical operation data.
In some embodiments, the embodiment of the present invention may input the actual operation data into the pre-trained action model, so as to obtain the first difference value between the actual operation data and the corresponding theoretical operation data.
By way of example, the embodiment of the invention can calculate the first difference value between the actual operation data and the corresponding theoretical operation data through a pre-trained action model.
Optionally, in one embodiment of the invention, inputting the actual operation data into the pre-trained action model to output the first difference value between the actual operation data and the corresponding theoretical operation data includes: outputting, by the pre-trained action model, a processor difference value between the actual operation data of the processor and the theoretical operation data of the processor based on at least one of thread information and frequency information of the processor in the actual operation data; outputting, by the pre-trained action model, a memory difference value between the actual operation data of the memory and the theoretical operation data of the memory based on at least one of capacity information and transmission rate information of the memory in the actual operation data; outputting, by the pre-trained action model, a hard disk difference value between the actual operation data of the hard disk and the theoretical operation data of the hard disk based on at least one of capacity information and transmission rate information of the hard disk in the actual operation data; outputting, by the pre-trained action model, a network difference value between the actual operation data of the network and the theoretical operation data of the network based on transmission rate information of the network in the actual operation data; outputting, by the pre-trained action model, a computing power card difference value between the actual operation data of the computing power card and the theoretical operation data of the computing power card based on at least one of computing power data and transmission rate information of the computing power card in the actual operation data; and obtaining the first difference value based on at least one of the processor difference value, the memory difference value, the hard disk difference value, the network difference value, and the computing power card difference value.
It may be appreciated that the pre-trained action model in the embodiment of the present invention applies action disturbances, such as a processor thread step, a memory transmission surge, and a hard disk read/write rate change, which are not particularly limited by the present invention.
Further, in some embodiments, the embodiment of the present invention may output the processor difference value between the actual running data and the theoretical running data of the processor by using the processor thread step action disturbance in the pre-trained action model based on the thread information and the frequency information of the processor in the actual running data, or may be based on other action disturbance, which is not limited in particular.
In some embodiments, the embodiments of the present invention may use the memory transmission surge action disturbance in the pre-trained action model to output the memory difference value between the actual operation data and the theoretical operation data of the memory based on the capacity information and the transmission rate information of the memory in the actual operation data, or may use other action disturbances; the present invention is not particularly limited.
In some embodiments, the embodiments of the present invention may output a hard disk difference value between actual operation data and theoretical operation data of a hard disk by using a hard disk read-write rate action disturbance in a pre-trained action model based on capacity information and transmission rate information of the hard disk in the actual operation data, or may be based on other action disturbances, which is not particularly limited by the present invention.
In some embodiments, the embodiments of the present invention may output the network difference value between the actual operation data and the theoretical operation data of the network by using a pre-trained motion model based on the transmission rate information of the network in the actual operation data.
In some embodiments, the embodiment of the invention can output the computing power card difference value between the actual operation data and the theoretical operation data of the computing power card by utilizing the pre-trained action model based on the computing power data and the transmission rate information of the computing power card in the actual operation data.
Further, the embodiment of the invention can obtain the first difference value based on the processor difference value, the memory difference value, the hard disk difference value, the network difference value, the computing card difference value and the like.
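As a purely illustrative sketch, the per-component difference values can be gathered into one record before being passed to the reward models; the dictionary keys and the simple subtraction below are assumptions, since in the embodiment the differences are produced by the pre-trained action model.

```python
# Illustrative: collect per-component difference values into one "first difference"
# record. The keys and the plain subtraction are assumptions for illustration only.
def first_difference(actual: dict, theoretical: dict) -> dict:
    components = ("processor", "memory", "hard_disk", "network", "computing_power_card")
    return {name: actual[name] - theoretical[name] for name in components}


if __name__ == "__main__":
    actual = {"processor": 92.0, "memory": 70.5, "hard_disk": 450.0,
              "network": 9.4, "computing_power_card": 280.0}
    theoretical = {"processor": 100.0, "memory": 76.8, "hard_disk": 500.0,
                   "network": 10.0, "computing_power_card": 312.0}
    print(first_difference(actual, theoretical))
```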
According to the embodiment of the invention, the processor difference value, the memory difference value, the hard disk difference value, the network difference value, the computing power card difference value, and the like can be analyzed, thereby realizing fine-grained performance diagnosis, improving the accuracy of resource optimization, enhancing predictive maintenance, and improving the stability of the model.
Optionally, in one embodiment of the present invention, before the first difference value is input into the pre-established reward model, the method further comprises: establishing a processor reward model in the reward model by using task information, thread information, and instruction information in the processor difference value, based on the processor difference value in the first difference value; establishing a memory reward model in the reward model by using data information and bandwidth information in the memory difference value, based on the memory difference value in the first difference value; establishing a hard disk reward model in the reward model by using bandwidth information in the hard disk difference value, based on the hard disk difference value in the first difference value; establishing a network reward model in the reward model by using transmission data information in the network difference value, based on the network difference value in the first difference value; establishing a computing power card reward model in the reward model by using computing power information in the computing power card difference value, based on the computing power card difference value in the first difference value; establishing an energy consumption reward model in the reward model based on the energy consumption information in the first difference value; and establishing the reward model based on at least one of the processor reward model, the memory reward model, the hard disk reward model, the network reward model, the computing power card reward model, and the energy consumption reward model.
The processor reward model may be, but is not limited to, a function of the number of instructions of the task, the theoretical number of threads, the theoretical thread frequency, the number of instructions the chip can actually process per cycle, the actual completion time, and the number of required threads.
The memory reward model may be, but is not limited to, a function of the total amount of data, the theoretical bandwidth, the actual bandwidth, and the theoretical capacity of the memory.
The hard disk reward model may be, but is not limited to, a function of the total amount of data, the theoretical bandwidth, the actual bandwidth, and the theoretical capacity of the hard disk.
The network reward model may be, but is not limited to, a function of the size of the data to be transmitted and the size of the data actually received.
The computing power card reward model may be, but is not limited to, a function of the amount of model computation and the actual computing power value.
The energy consumption reward model may be, but is not limited to, a function of the theoretical energy consumption and the actual energy consumption value.
In some embodiments, the reward model built by the embodiments of the present invention may include, but is not limited to, a processor reward model, a memory reward model, a hard disk reward model, a network reward model, a computing power card reward model, and an energy consumption reward model, and the present invention is not limited in particular.
The embodiment of the invention can establish the processor reward model by utilizing the task information, the thread information, and the instruction information in the processor difference value; the processor reward model may be, but is not limited to, a function of the number of instructions of the task, the theoretical/nominal number of threads, the theoretical/nominal thread frequency, the number of instructions the chip can actually process per cycle, the actual completion time, and the number of required threads.
In some embodiments, the embodiment of the present invention may use the data information and the bandwidth information in the memory difference value to establish the memory reward model, which may be, but is not limited to, a function of the total amount of data, the theoretical/nominal bandwidth, the actual bandwidth, and the theoretical/nominal capacity of the memory.
In some embodiments, the embodiment of the present invention may use the bandwidth information in the hard disk difference value to establish the hard disk reward model, which may be, but is not limited to, a function of the total amount of data, the theoretical/nominal bandwidth, the actual bandwidth (measured from the start to the end of the data transfer), and the theoretical/nominal capacity of the hard disk.
In some embodiments, the embodiment of the present invention may use the transmission data information in the network difference value to establish the network reward model, which may be, but is not limited to, a function of the size of the data to be transmitted and the size of the data actually received, mainly to detect packet loss.
In some embodiments, the embodiment of the present invention may use the computing power information in the computing power card difference value to establish the computing power card reward model, which may be, but is not limited to, a function of the amount of model computation and the actual computing power value.
In some embodiments, the embodiment of the present invention may establish the energy consumption reward model based on the energy consumption information in the first difference value, which may be, but is not limited to, a function of the theoretical/nominal energy consumption and the actual energy consumption value.
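The exact reward expressions are not reproduced above, so the following sketch should be read only as one plausible, strongly simplified instantiation: each reward is taken as a ratio of achieved to theoretical/nominal performance, which is consistent with the quantities listed but is an assumption rather than the formulas of the embodiment.

```python
# Illustrative only: simplified ratio-style rewards built from the quantities
# named above (achieved vs. theoretical/nominal). Not the embodiment's formulas.
def processor_reward(task_instructions, actual_ipc, required_threads,
                     nominal_thread_freq_hz, actual_time_s):
    theoretical_time_s = task_instructions / (actual_ipc * required_threads
                                              * nominal_thread_freq_hz)
    return theoretical_time_s / actual_time_s          # 1.0 means "as fast as theory"


def bandwidth_reward(actual_bandwidth, nominal_bandwidth):
    # Used for both the memory and the hard disk in this simplified sketch.
    return actual_bandwidth / nominal_bandwidth


def network_reward(bytes_received, bytes_sent):
    return bytes_received / bytes_sent                  # < 1.0 indicates packet loss


def card_reward(actual_tflops, nominal_tflops):
    return actual_tflops / nominal_tflops


def energy_reward(nominal_energy_j, actual_energy_j):
    return nominal_energy_j / actual_energy_j           # > 1.0 means better than nominal


if __name__ == "__main__":
    print(round(bandwidth_reward(30.1, 38.4), 3), round(network_reward(9.9e9, 1e10), 3))
```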
According to the embodiment of the invention, the processor reward model can optimize the task scheduling and thread management of the processor and improve processing efficiency; the memory reward model can optimize the use and allocation of the memory and reduce memory bottlenecks; the hard disk reward model can optimize the read-write operations of the hard disk and improve data access speed; the network reward model can optimize the allocation of network resources and reduce network latency; the computing power card reward model can optimize the use of the computing power card and improve computing efficiency; and the energy consumption reward model can reduce energy consumption while ensuring performance, realizing green computing. Integrating the advantages of the reward models of all components comprehensively optimizes the performance, resource management, and energy efficiency of the server.
In step S103, the first difference value is input into a pre-established reward model to output a performance score of the at least one component in a corresponding environment, and a performance evaluation result of the target server is determined based on the performance score.
In some embodiments, the first difference value may be input into a pre-established reward model, so that performance scores of different components under corresponding environments are determined, thereby determining a performance evaluation result of the server.
For example, the embodiment of the invention can obtain the processor performance score, the memory performance score, the hard disk performance score, the computing power card performance score, the network performance score, the energy consumption performance score, and the like; the present invention is not particularly limited.
Further, the embodiment of the invention can determine the performance evaluation result based on the processor performance score, the memory performance score, the hard disk performance score, the network performance score, the computing power card performance score, and the energy consumption performance score, and the calculation formula can be, but is not limited to, a weighted combination such as:
S = λ1·S_cpu + λ2·S_mem + λ3·S_disk + λ4·S_net + λ5·S_card + λ6·S_energy,
wherein λ1 to λ6 are hyperparameters; S is the performance evaluation result of the server; S_cpu is the processor performance score; S_mem is the memory performance score; S_disk is the hard disk performance score; S_net is the network performance score; S_card is the computing power card performance score; and S_energy is the energy consumption performance score.
The embodiment of the invention can comprehensively consider the processor performance score, the memory performance score, the hard disk performance score, the network performance score, the computing power card performance score, and the energy consumption performance score, so the evaluation result is more accurate and the applicable scenarios are wider.
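A short sketch of the weighted combination described above; the weight values are placeholders chosen for illustration only.

```python
# Illustrative: combine the six per-component scores into one evaluation result
# using hyperparameter weights (placeholder values).
WEIGHTS = {"cpu": 0.25, "mem": 0.20, "disk": 0.15,
           "net": 0.15, "card": 0.15, "energy": 0.10}


def overall_score(scores: dict, weights: dict = WEIGHTS) -> float:
    return sum(weights[k] * scores[k] for k in weights)


if __name__ == "__main__":
    scores = {"cpu": 0.92, "mem": 0.81, "disk": 0.88,
              "net": 0.95, "card": 0.86, "energy": 0.78}
    print(round(overall_score(scores), 4))
```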
Optionally, in one embodiment of the present invention, inputting the first difference value into the pre-established reward model to output the performance score of the at least one component under the corresponding environment includes: calculating a processor performance score of the processor in the server with the processor reward model in the pre-established reward model based on the processor difference value in the first difference value; calculating a memory performance score of the memory in the server with the memory reward model in the pre-established reward model based on the memory difference value in the first difference value; calculating a hard disk performance score of the hard disk in the server with the hard disk reward model in the pre-established reward model based on the hard disk difference value in the first difference value; calculating a network performance score of the network in the server with the network reward model in the pre-established reward model based on the network difference value in the first difference value; calculating a computing power card performance score of the computing power card in the server with the computing power card reward model in the pre-established reward model based on the computing power card difference value in the first difference value; and calculating an energy consumption performance score of the server with the energy consumption reward model in the pre-established reward model based on the energy consumption difference value in the first difference value.
In some embodiments, the embodiments of the present invention may calculate the processor performance score using a processor reward model of the pre-established reward models based on the processor difference value of the first difference values.
In some embodiments, the memory performance score may be calculated by using a memory reward model in a pre-established reward model based on a memory difference value in the first difference value.
In some embodiments, the hard disk performance score may be calculated by using a hard disk reward model in a pre-established reward model based on the hard disk difference value in the first difference value.
In some embodiments, the embodiments of the present invention may calculate the network performance score using the network reward model in the pre-established reward model based on the network difference value in the first difference value.
In some embodiments, the embodiments of the present invention may calculate the computing power card performance score using the computing power card reward model in the pre-established reward model based on the computing power card difference value in the first difference value.
In some embodiments, the energy consumption performance score may be calculated by using an energy consumption rewarding model in a pre-established rewarding model based on the energy consumption difference value in the first difference value.
The embodiment of the invention evaluates each component independently, which locates performance bottlenecks more accurately, facilitates maintenance and extension, and enhances the interpretability of the results; by comprehensively considering multiple components and the energy consumption, it helps to find the balance between performance and energy efficiency, adapts to different loads and environmental changes, and makes more intelligent decisions.
Alternatively, in one embodiment of the invention, determining the performance evaluation result of the target server based on the performance score includes counting a total number of performance scores based on the performance score, calculating an initial performance evaluation result of the target server based on the performance score, and determining the performance evaluation result based on the total number and the initial performance evaluation result.
It can be understood that, in order to ensure that the performance of the processor, the memory, the hard disk, and the like meets the requirements, the embodiment of the invention can perform multiple evaluations, such as two or three, to obtain different performance scores; the present invention is not particularly limited.
As a possible implementation manner, the embodiment of the invention can count the total number of the performance scores, further calculate the initial performance evaluation result of the target server, and determine the performance evaluation result based on the total number and the initial performance evaluation result.
For example, the embodiment of the invention can calculate each initial performance evaluation result by using the calculation formula of the performance evaluation result, and take the average value of the initial performance evaluation results as the performance evaluation result.
According to the embodiment of the invention, counting the total number of performance scores allows the performance of multiple components to be considered comprehensively, avoiding the one-sidedness of single-component evaluation, so that the overall performance of the server is reflected more accurately and the evaluation result better matches the requirements of actual application scenarios; performance bottlenecks can be located rapidly, targeted optimization strategies can be formulated, operation and maintenance costs are reduced, and resource utilization is improved.
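The following few lines sketch the averaging step: several initial performance evaluation results are produced, for example one per environment or per repeated test run (an assumption here), and their mean is reported as the final result.

```python
# Illustrative: average several initial performance evaluation results into the
# final performance evaluation result.
def final_evaluation(initial_results: list[float]) -> float:
    total = len(initial_results)          # total number of performance evaluations
    return sum(initial_results) / total


if __name__ == "__main__":
    print(final_evaluation([0.87, 0.90, 0.88]))   # e.g., three repeated evaluations
```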
The working principle of the server performance evaluation method according to the embodiment of the present invention is described below with reference to a specific embodiment.
Fig. 6 is a flowchart of an operation principle of a server performance evaluation method according to an embodiment of the present invention.
And step S601, collecting actual operation data of a target server in different environments.
In the embodiment of the invention, different environments may be different environmental temperatures.
Step S602, inputting actual operation data into a pre-trained action model to obtain a first difference value between the actual operation data and corresponding theoretical operation data.
The embodiment of the invention can calculate the first difference value between the actual operation data and the theoretical operation data of the processor, the memory, the hard disk, the network, the computing power card, the energy consumption, and the like by utilizing the pre-trained action model.
And step S603, inputting the first difference value into a pre-established rewarding model to obtain the performance score under the corresponding environment.
The embodiment of the invention can calculate the processor performance score, the memory performance score, the hard disk performance score, the computing power card performance score, the network performance score, the energy consumption performance score, and the like by utilizing the pre-established reward model; the present invention is not particularly limited.
Step S604, determining an initial performance evaluation result of the server.
The embodiment of the invention can substitute the processor performance score, the memory performance score, the hard disk performance score, the power card performance score, the network performance score, the energy consumption performance score and the like into a calculation formula of the performance evaluation result to calculate the initial performance evaluation result.
And step S605, determining a performance evaluation result of the server.
The embodiment of the invention can calculate the corresponding average value based on the total number of the initial performance evaluation results, so as to determine the performance evaluation result.
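Putting steps S601 to S605 together, a hypothetical evaluation loop might look like the sketch below; collect, action_model, and reward_models are stand-in callables, not components defined by the embodiment.

```python
# Illustrative end-to-end loop for steps S601-S605. All callables are stand-ins.
def evaluate_server(environments, collect, action_model, reward_models, weights):
    initial_results = []
    for env in environments:                         # S601: collect per environment
        actual = collect(env)
        diffs = action_model(actual)                 # S602: first difference values
        scores = {name: model(diffs[name])           # S603: per-component scores
                  for name, model in reward_models.items()}
        initial_results.append(sum(weights[n] * scores[n] for n in weights))  # S604
    return sum(initial_results) / len(initial_results)                        # S605


if __name__ == "__main__":
    envs = [20.0, 30.0, 40.0]                        # ambient temperatures (assumed)
    collect = lambda t: {"cpu": 0.9 - 0.002 * t}     # toy stand-ins below
    action_model = lambda a: {"cpu": 1.0 - a["cpu"]}
    reward_models = {"cpu": lambda d: 1.0 - d}
    print(round(evaluate_server(envs, collect, action_model, reward_models, {"cpu": 1.0}), 4))
```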
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment.
According to the method for evaluating server performance provided by the embodiment of the invention, the collected actual operation data of at least one component of the target server in a plurality of environments can be input into the pre-trained action model to output the first difference value between the actual operation data and the corresponding theoretical operation data, and the first difference value can be input into the pre-established reward model to output the performance score of the at least one component in the corresponding environment, so that the performance evaluation result of the target server is determined. Therefore, the method can solve the technical problems that the static threshold method has poor dynamic adaptability, a single index, and a high misjudgment rate, and that benchmark testing tools have high resource consumption, weak scene generalization capability, poor real-time performance, and long test periods and cannot support online dynamic evaluation. The overall performance of the server is judged through multi-dimensional integration, the test flow is simplified, human intervention is reduced, the stability and reliability of the server performance test are improved, the applicable scenarios are wider, and the accuracy is higher; and different models are designed for the evaluation calculation, which simplifies the model expressions, makes the models easier to train, shortens the test time, and improves the test efficiency.
The embodiment of the invention also provides a server performance evaluation device.
Fig. 7 is a block diagram of an evaluation apparatus for server performance according to an embodiment of the present invention.
As shown in fig. 7, the server performance evaluation apparatus 10 includes a first acquisition module 100, a first output module 200, and a first determination module 300.
Wherein the first acquisition module 100 is configured to acquire actual operation data of at least one component of the target server in a plurality of environments.
The first output module 200 is configured to input actual operation data into a pre-trained motion model, so as to output a first difference value between the actual operation data and corresponding theoretical operation data.
The first determining module 300 is configured to input the first difference value into a pre-established rewards model, to output a performance score of at least one component in a corresponding environment, and determine a performance evaluation result of the target server based on the performance score.
Optionally, in one embodiment of the present invention, the apparatus further comprises a second determining module, a first generating module, a second generating module, a third generating module, and a first constructing module.
The second determining module is used for determining at least one of the first structural information of the fully-connected neural network, the second structural information of batch normalization and the function information of the activation function in the action model based on the actual operation data before the actual operation data is input into the pre-trained action model.
The first generation module is used for connecting the fully-connected neural network, the batch normalization and the activation function in series based on at least one of the first structure information, the second structure information and the function information so as to obtain a first action network in the action model.
And the second generation module is used for merging the at least two first action networks connected in series with the first action network so as to obtain the first characteristic of the actual operation data.
And the third generation module is used for obtaining a second action network in the action model based on the first characteristic and the first action network.
The first construction module is used for constructing an action model based on the first action network and the second action network.
Optionally, in one embodiment of the present invention, the apparatus further comprises a second acquisition module, a second output module, a calculation module, a detection module, and a fourth generation module.
Wherein the second acquisition module is used for acquiring training operation data of at least one component of the server in a plurality of environments before inputting actual operation data into the pre-trained action model.
And the second output module is used for inputting the training operation data into a pre-established action model so as to output a second difference value of the training operation data and the corresponding theoretical operation data.
And the calculation module is used for calculating training performance scores of at least one component in corresponding environments by using a pre-established rewarding model based on the second difference value.
And the detection module is used for detecting whether at least one component meets the preset performance condition or not based on the training performance score.
And the fourth generation module is used for training the parameter information of the pre-established action model by utilizing discrete estimation under the condition that at least one component does not meet the preset performance condition so as to obtain a trained action model, and generating the component meeting the preset performance condition based on the trained action model.
Optionally, in one embodiment of the present invention, the fourth generating module includes a first training unit, a second training unit, a third training unit, and a fourth training unit.
The first training unit is used for training probability distribution of action selection in the parameter information based on gradient information of discrete estimation.
And the second training unit is used for training the parameter updating amplitude in the parameter information based on the discretely estimated learning rate information.
And a third training unit for training the parameter adjustment sensitivity in the parameter information based on the discrete estimated logarithmic information.
And the fourth training unit is used for training the action value in the parameter information based on the advantage function of the discrete estimation.
Alternatively, in one embodiment of the present invention, the first acquisition module 100 includes an acquisition unit and a first determination unit.
Wherein, the acquisition unit is used for acquiring at least one of first data of a processor, second data of a memory, third data of a hard disk, fourth data of a network, fifth data of a computing card, and sixth data of different environments in the target server.
And a first determining unit configured to determine actual operation data based on at least one of the first data, the second data, the third data, the fourth data, the fifth data, and the sixth data.
Alternatively, in one embodiment of the present invention, the first output module 200 includes a first output unit, a second output unit, a third output unit, a fourth output unit, a fifth output unit, and a generation unit.
The first output unit is used for outputting processor difference values of the actual operation data of the processor and the theoretical operation data of the processor by utilizing a pre-trained action model based on at least one of the thread information and the frequency information of the processor in the actual operation data.
And the second output unit is used for outputting the memory difference value of the actual operation data of the memory and the theoretical operation data of the memory by utilizing a pre-trained action model based on at least one of capacity information and transmission rate information of the memory in the actual operation data.
And the third output unit is used for outputting hard disk difference values of the actual operation data of the hard disk and the theoretical operation data of the hard disk by utilizing a pre-trained action model based on at least one of capacity information and transmission rate information of the hard disk in the actual operation data.
And the fourth output unit is used for outputting the network difference value of the actual operation data of the network and the theoretical operation data of the network by utilizing the pre-trained action model based on the transmission rate information of the network in the actual operation data.
And a fifth output unit, configured to output a computing power card difference value between the actual operation data of the computing power card and the theoretical operation data of the computing power card by using the pre-trained action model based on at least one of computing power data and transmission rate information of the computing power card in the actual operation data.
The generating unit is used for obtaining a first difference value based on at least one of the processor difference value, the memory difference value, the hard disk difference value, the network difference value and the power card difference value.
Optionally, in one embodiment of the present invention, the apparatus further comprises a second building module, a third building module, a fourth building module, a fifth building module, a sixth building module, a seventh building module, and an eighth building module.
The second construction module is used for constructing the processor rewarding model in the rewarding model by utilizing the task information, the thread information and the instruction information in the processor discrepancy value based on the processor discrepancy value in the first discrepancy value before the first discrepancy value is input into the pre-established rewarding model.
And the third construction module is used for establishing a memory rewarding model in the rewarding model by utilizing the data information and the bandwidth information in the memory difference value based on the memory difference value in the first difference value.
And the fourth construction module is used for constructing a hard disk rewarding model in the rewarding model by utilizing the bandwidth information in the hard disk difference value based on the hard disk difference value in the first difference value.
And a fifth construction module, configured to establish a network rewarding model in the rewarding model by using the transmission data information in the network discrepancy value based on the network discrepancy value in the first discrepancy value.
And a sixth construction module, configured to establish a computing power card reward model in the reward model by using the computing power information in the computing power card difference value, based on the computing power card difference value in the first difference value.
And a seventh construction module, configured to establish an energy consumption rewarding model in the rewarding models based on the energy consumption information in the first difference value.
And an eighth building module for building a rewards model based on at least one of a processor rewards model, a memory rewards model, a hard disk rewards model, a network rewards model, a power card rewards model and an energy consumption rewards model.
Alternatively, in one embodiment of the present invention, the first determining module 300 includes a first calculating unit, a second calculating unit, a third calculating unit, a fourth calculating unit, a fifth calculating unit, and a sixth calculating unit.
The first calculating unit is used for calculating the processor performance score of the processor in the server by using the processor rewarding model in the pre-established rewarding model based on the processor difference value in the first difference value.
And the second calculation unit is used for calculating the memory performance score of the memory in the server by using the memory reward model in the pre-established reward model based on the memory difference value in the first difference value.
And a third calculation unit for calculating the hard disk performance score of the hard disk in the server by using the hard disk rewarding model in the pre-established rewarding model based on the hard disk difference value in the first difference value.
And a fourth calculation unit for calculating a network performance score of the network in the server using a network rewards model in the pre-established rewards model based on the network difference value in the first difference value.
And a fifth calculation unit, configured to calculate a power card performance score of the power card in the server using the power card rewarding model in the pre-established rewarding model based on the power card difference value in the first difference value.
And a sixth calculation unit for calculating the energy consumption performance score of the server by using the energy consumption rewarding model in the pre-established rewarding model based on the energy consumption difference value in the first difference value.
Alternatively, in one embodiment of the present invention:
The processor reward model may be, but is not limited to, a function of the number of instructions of the task, the theoretical number of threads, the theoretical thread frequency, the number of instructions the chip can actually process per cycle, the actual completion time, and the number of required threads.
The memory reward model may be, but is not limited to, a function of the total amount of data, the theoretical bandwidth, the actual bandwidth, and the theoretical capacity of the memory.
The hard disk reward model may be, but is not limited to, a function of the total amount of data, the theoretical bandwidth, the actual bandwidth, and the theoretical capacity of the hard disk.
The network reward model may be, but is not limited to, a function of the size of the data to be transmitted and the size of the data actually received.
The computing power card reward model may be, but is not limited to, a function of the amount of model computation and the actual computing power value.
The energy consumption reward model may be, but is not limited to, a function of the theoretical energy consumption and the actual energy consumption value.
Optionally, in one embodiment of the present invention, the first determining module 300 includes a statistics unit, a seventh calculation unit, and a second determining unit.
Wherein, the statistics unit is used for counting the total number of the performance scores based on the performance scores.
And a seventh calculation unit for calculating an initial performance evaluation result of the target server based on the performance score.
And a second determining unit configured to determine a performance evaluation result based on the total number and the initial performance evaluation result.
The description of the features in the embodiment corresponding to the evaluation device of the server performance may refer to the related description of the embodiment corresponding to the evaluation method of the server performance, which is not described herein in detail.
According to the apparatus for evaluating server performance provided by the embodiment of the invention, the collected actual operation data of at least one component of the target server in a plurality of environments can be input into the pre-trained action model to output the first difference value between the actual operation data and the corresponding theoretical operation data, and the first difference value can be input into the pre-established reward model to output the performance score of the at least one component in the corresponding environment, so that the performance evaluation result of the target server is determined. Therefore, the apparatus can solve the technical problems that the static threshold method has poor dynamic adaptability, a single index, and a high misjudgment rate, and that benchmark testing tools have high resource consumption, weak scene generalization capability, poor real-time performance, and long test periods and cannot support online dynamic evaluation. The overall performance of the server is judged through multi-dimensional integration, the test flow is simplified, human intervention is reduced, the stability and reliability of the server performance test are improved, the applicable scenarios are wider, and the accuracy is higher; and different models are designed for the evaluation calculation, which simplifies the model expressions, makes the models easier to train, and improves the test efficiency.
An embodiment of the invention also provides a server comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the above described embodiments of the method of evaluating server performance.
An embodiment of the present invention also provides a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to perform, when run, the steps of any of the above embodiments of the method for evaluating server performance.
In an exemplary embodiment, the computer readable storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk.
The embodiment of the invention also provides a computer program product, which comprises a computer program, and the computer program is executed by a processor to implement the steps in the embodiment of any one of the server performance evaluation methods.
Embodiments of the present invention also provide another computer program product comprising a non-volatile computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above embodiments of the server performance assessment method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The method for evaluating the server performance provided by the invention is described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.