CN119336516B - Hyper-converged computing power scheduling method and system based on a prediction model - Google Patents
- Publication number
- CN119336516B (application CN202411884270.3A)
- Authority
- CN
- China
- Prior art keywords
- resource
- neural network
- time
- prediction
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06F9/5044—Allocation of resources to service a request, the resource being a machine, considering hardware capabilities
- G06F9/505—Allocation of resources to service a request, the resource being a machine, considering the load
- G06F18/10—Pattern recognition; pre-processing, data cleansing
- G06F18/20—Pattern recognition; analysing
- G06N3/044—Neural networks; recurrent networks, e.g. Hopfield networks
- G06N3/048—Neural network architectures; activation functions
- G06N3/084—Neural network learning methods; backpropagation, e.g. using gradient descent
- G06F2123/02—Data types in the time domain, e.g. time-series data
- G06F2209/508—Indexing scheme relating to resource allocation (G06F9/50); monitor
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to the technical field of resource scheduling, and in particular to a hyper-converged computing power scheduling method and system based on a prediction model, comprising the following steps: collecting and organizing real-time data in the hyper-converged environment, including GPU utilization and network bandwidth information; removing noise from the data with digital filtering techniques; extracting key performance indicators; and numerically standardizing those indicators to establish standardized resource feature values. In the invention, real-time data collection and digital filtering improve the quality of data processing and ensure the accuracy of resource prediction. Standardized key performance indicators and time-series analysis allow short-term resource demand to be predicted accurately, achieving precise matching and optimized allocation of resources. In addition, a neural network model built with a feedback loop and nonlinear activation functions can dynamically adjust the resource configuration, responding effectively to real-time changes in resource demand and improving the system's adaptability to the workload and its overall data-processing capacity.
Description
Technical Field
The invention relates to the technical field of resource scheduling, and in particular to a hyper-converged computing power scheduling method and system based on a prediction model.
Background
Resource scheduling is a critical area of computer science concerned with how to effectively allocate and manage computing resources, including processor time, memory, storage space, and network bandwidth. Research and applications in this area ensure that system resources are used fairly and efficiently in multi-tasking, multi-user environments. Resource scheduling techniques are widely applied in operating systems, distributed computing, cloud computing, and big-data processing, aiming to optimize resource utilization, reduce task execution time, and improve overall system performance and user satisfaction. Through sophisticated algorithms and strategies, resource scheduling can predict resource demand and avoid resource conflicts and bottlenecks, while supporting system scalability and elasticity.
Hyper-converged computing power scheduling refers to the resource scheduling strategy adopted in a hyper-converged infrastructure to optimize and manage the allocation of compute, storage, and network resources. By integrating software-defined storage, computing, and networking into a single system-management platform, this approach allows resources to be scheduled in a more centralized and automated way. Hyper-converged computing power scheduling improves data-center operating efficiency, reduces complexity and cost, and strengthens the system's responsiveness and adaptability to different workloads. By predicting resource demand with intelligent algorithms, it can dynamically adjust resource allocation to support service continuity and performance optimization.
The prior art falls short in real-time data analysis and dynamic resource scheduling, which often causes a mismatch between resource allocation and actual demand and degrades system efficiency and performance. Lacking the ability to respond quickly to immediate data changes, traditional resource scheduling may fail to accommodate workload fluctuations in time, leaving resources idle or overloaded. This static approach to resource management limits the system's flexibility and scalability, and response delays can at times harm user experience and business continuity.
Disclosure of Invention
The invention aims to remedy the above shortcomings of the prior art, and provides a hyper-converged computing power scheduling method and system based on a prediction model.
To achieve this purpose, the invention adopts the following technical scheme. The hyper-converged computing power scheduling method based on a prediction model comprises the following steps:
S1, collecting and organizing real-time data in the hyper-converged environment, including GPU utilization and network bandwidth information; removing noise from the data with digital filtering techniques; extracting key performance indicators; and numerically standardizing those indicators to establish standardized resource feature values;
S2, performing time-series analysis on the standardized resource feature values, constructing a prediction model to analyze short-term resource usage trends, and calculating the demand growth rates of GPU and storage within each time period to obtain a resource demand prediction value;
S3, taking the resource demand prediction value as input, constructing a primary neural network containing a feedback loop and nonlinear activation functions, performing a preliminary adjustment of weights and parameters to match the current resource usage pattern, and generating a preliminary neural network model;
S4, optimizing the preliminary neural network model by adjusting the depth and width of the network layers, selecting activation functions, and adjusting the weight parameters of the loss function; training cyclically with mini-batch gradient descent to match future changes in resource demand and generate an optimized neural network model;
S5, applying the optimized neural network model to predict resource usage and network load in real time, and adjusting resource allocation and data-processing paths according to the prediction results to obtain an adjusted resource configuration strategy.
The standardized resource feature values are specifically GPU utilization and network bandwidth information; the resource demand prediction values comprise the GPU demand growth rate and the storage demand growth rate; the preliminary neural network model specifically refers to a neural network structure containing a feedback loop and nonlinear activation functions; the optimized neural network model specifically comprises the adjusted network-layer depth and width, the selected activation functions, and the adjusted loss-function weight parameters; and the adjusted resource configuration strategy specifically comprises real-time resource utilization prediction and network load prediction.
As a further aspect of the present invention, the step of obtaining the standardized resource feature values specifically comprises:
S111, collecting GPU utilization and network bandwidth information from the hyper-converged environment, and organizing the data to generate a preliminary data set;
S112, applying digital filtering techniques to the preliminary data set, eliminating statistical noise and outliers to obtain a cleaned performance indicator data set;
S113, numerically standardizing the cleaned performance indicator data set, converting GPU utilization and network bandwidth to a uniform scale and dimension with the Z-score method to obtain a standardized performance indicator data set;
S114, selecting key performance indicators from the standardized performance indicator data set, calculating indicator weights, and using the formula:

$F = \dfrac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} |w_i|}$

to obtain the standardized resource feature value;

where $x_i$ denotes the normalized value of a single performance indicator, $w_i$ the business criticality of that indicator, $n$ the total number of performance indicators, and $F$ the composite standardized resource feature value.
As a further aspect of the present invention, the step of obtaining the resource demand predicted value specifically includes:
S211, using the standardized resource feature values, partitioning them into consecutive time periods, rearranging them in chronological order and marking timestamps, to establish time-series data;
S212, based on the time-series data, analyzing the change trend of the resource feature value over each consecutive time period, and calculating the growth rate of the resource feature value across multiple time periods with the formula:

$G = k \sum_{t=1}^{n} \alpha_t R_t + b + \varepsilon$

to obtain the demand growth rates of GPU and storage;

where $G$ denotes the resource growth rate, $R_t$ the resource feature value in the target time period, $\alpha_t$ the weighting parameter of the time slice, $k$ the calculation adjustment factor, $b$ the offset adjustment parameter, $\varepsilon$ the noise-correction term, and $n$ the number of time periods referenced in the calculation;
S213, based on the growth rates of GPU and storage, comparing resource usage across adjacent time periods and comprehensively calculating the resource demand growth rate within each time period;
S214, taking the resource demand growth rates and, based on the resource usage of each time slice, accumulating the growth rates of all slices to obtain the resource demand prediction value.
As a further aspect of the present invention, the step of obtaining the preliminary neural network model specifically includes:
S311, taking the resource demand prediction value as input to the primary neural network, initializing the feedback-loop structure, setting the nonlinear activation functions, and randomly assigning and initializing the connection weights to generate an initial weight matrix;
S312, executing the feedback-loop operation and adjusting the weight parameters in the initial weight matrix based on the feedback output, combined with the dynamic output of the nonlinear activation response, using the formula:

$w_{\text{new}} = w - \eta \sum_{i=1}^{n} f_i a_i + \beta + \varepsilon$

to adjust each connection weight and update the current weight matrix, obtaining the preliminarily adjusted weights and parameters;

where $w_{\text{new}}$ denotes the weight value after the next iteration, $w$ the current weight value, $\eta$ the learning rate, $f_i$ the output value of the feedback loop, $a_i$ the activation intensity of the connection's input, $\beta$ the deviation adjustment factor, $\varepsilon$ the noise-correction term, and $n$ the total number of elements involved in the summation;
S313, combining the preliminarily adjusted weights and parameters with the resource demand prediction value, iterating the computation of weights and parameters repeatedly, and matching the current resource usage pattern to generate the preliminary neural network model.
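The weight update in S312 can be sketched as follows, under a delta-rule reading of the symbols listed there (the new weight equals the current weight minus the learning rate times the summed product of feedback outputs and activation intensities, plus a deviation factor and a noise-correction term). The function name and all numeric values are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch of one feedback-loop weight adjustment (S312):
# w_new = w - eta * sum(f_i * a_i) + beta + eps

def update_weight(w, eta, feedback, activations, beta=0.0, eps=0.0):
    """Adjust a single connection weight from feedback outputs f_i
    and the activation intensities a_i of the connection's inputs."""
    grad = sum(f * a for f, a in zip(feedback, activations))
    return w - eta * grad + beta + eps

w0 = 0.50  # current weight value (assumed)
w1 = update_weight(w0, eta=0.01,
                   feedback=[0.2, -0.1],      # feedback-loop outputs
                   activations=[1.0, 0.5])    # input activation intensities
```

Iterating this update over the whole weight matrix, as S313 describes, gradually matches the network to the current resource usage pattern.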
As a further aspect of the present invention, the step of obtaining the optimized neural network model specifically includes:
S411, based on the preliminary neural network model, increasing or decreasing the number of network layers and the number of nodes per layer, and optimally configuring the hidden-layer and node counts according to the resource demand prediction, to obtain an adjusted neural network architecture;
S412, based on the adjusted neural network architecture, selecting matching activation functions for the hidden layers, running combination tests with different activation functions on each layer, and using the formula:

$y = \sum_{i=1}^{n} c_i\,\varphi(z_i + b_i)$

to calculate the response output of the nodes and obtain the optimized activation-function combination;

where $y$ denotes the output of the neural network layer, i.e. the total output of the weighted sum of the processed node activations, $\varphi$ the node activation function, $c_i$ the weighting coefficient of the activation function, used to adjust the strength of the node's input signal, $z_i$ the input signal, $b_i$ the bias parameter, used to adjust the activation threshold, and $n$ the number of nodes in the current layer;
S413, based on the optimized activation-function combination, adjusting the weight parameters of the loss function, performing mini-batch gradient-descent training with the current weight matrix, and iteratively computing the network error to generate the optimized neural network model.
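The layer-output computation in S412 can be sketched directly from the listed symbols: a weighted sum of activated node outputs, y = sum of c_i * phi(z_i + b_i). Which activation each layer finally receives is chosen by the combination testing described above; tanh and all numeric values below are illustrative assumptions.

```python
# Hedged sketch of the S412 layer output, with tanh as one trial activation.
import math

def layer_output(z, c, b, phi=math.tanh):
    """Weighted sum of activated node outputs for one layer:
    z: input signals, c: weighting coefficients, b: bias parameters."""
    return sum(ci * phi(zi + bi) for zi, ci, bi in zip(z, c, b))

# Two-node example: the second node sits exactly at its activation
# threshold (z + b = 0), so only the first node contributes.
y = layer_output(z=[0.5, -0.2], c=[1.0, 0.5], b=[0.0, 0.2])
```

In a combination test, the same inputs would be evaluated with different choices of `phi` (e.g. ReLU, sigmoid, tanh) per layer, keeping the combination with the lowest validation error.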
As a further aspect of the present invention, the step of obtaining the adjusted resource allocation policy specifically includes:
S511, processing the real-time monitoring data with the optimized neural network model to obtain the current resource usage, and performing predictive analysis on the real-time resource usage and network load to obtain a real-time resource prediction result;
S512, calculating the trend of resource demand from the real-time resource prediction result, adjusting the current resource configuration, smoothing the prediction error with weighting parameters, and using the formula:

$P = \sum_{i=1}^{n} \lambda_i \dfrac{U_i + L_i}{C_i} + \varepsilon$

to obtain the adjusted resource proportion;

where $P$ denotes the adjusted resource proportion, $\lambda_i$ the weighting coefficient, $U_i$ the real-time resource usage, $L_i$ the predicted load, $C_i$ the currently configured capacity, $\varepsilon$ the noise-correction term, and $n$ the total number of resource types or nodes involved in the calculation;
S513, based on the adjusted resource proportion and the real-time resource prediction result, reallocating the data-processing paths and resource allocation among the network nodes to obtain the adjusted resource configuration strategy.
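The adjustment in S512 can be sketched under one consistent reading of the listed symbols: for each resource type i, real-time usage U_i plus predicted load L_i is compared against configured capacity C_i, weighted by lambda_i, with a noise-correction term added. The function name and all numeric values are illustrative assumptions.

```python
# Hedged sketch of the S512 adjusted resource proportion:
# P = sum_i lambda_i * (U_i + L_i) / C_i + eps

def adjusted_ratio(U, L, C, lam, eps=0.0):
    """Adjusted resource proportion across n resource types or nodes."""
    return sum(l * (u + p) / c for l, u, p, c in zip(lam, U, L, C)) + eps

P = adjusted_ratio(U=[40.0, 10.0],    # real-time usage (e.g. GPU %, storage GB)
                   L=[20.0, 10.0],    # predicted load
                   C=[100.0, 50.0],   # currently configured capacity
                   lam=[0.7, 0.3])    # weighting coefficients
```

A value of P near 1 would indicate that predicted demand is approaching configured capacity, triggering the reallocation of data-processing paths described in S513.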
A hyper-converged computing power scheduling system based on a prediction model, used to execute the above hyper-converged computing power scheduling method, comprises the following modules:
A data preprocessing module, which collects GPU utilization and network bandwidth data in the hyper-converged environment, removes random noise from the data with digital filtering techniques, screens the key performance indicators of GPU utilization and network bandwidth, and converts the indicators into standardized resource feature values through linear transformation;
A trend prediction model module, which uses the standardized resource feature values to construct a time-series analysis model, evaluates the dynamics of recent resource usage, and calculates and predicts the short-term demand growth rates of GPU and storage, thereby generating a resource demand prediction value;
A neural network construction module, which takes the resource demand prediction value as input, sets up a feedback loop in the neural network, selects nonlinear activation functions, preliminarily adjusts the network parameters to match the current resource usage pattern and create a preliminary neural network model, then adjusts the network-layer depth, width, and loss-function weights, trains and optimizes the network with mini-batch gradient descent, and adapts to future changes in resource demand to obtain an optimized neural network model;
A resource configuration strategy module, which deploys the optimized neural network model to predict resource usage and network load in real time, dynamically adjusts resource allocation and data-processing paths according to the prediction results, and forms and implements the adjusted resource configuration strategy.
Compared with the prior art, the invention has the following advantages and positive effects:
Real-time data collection and digital filtering improve the quality of data processing and ensure the accuracy of resource prediction. Standardized key performance indicators and time-series analysis allow short-term resource demand to be predicted accurately, achieving precise matching and optimal allocation of resources. In addition, the neural network model built with a feedback loop and nonlinear activation functions can dynamically adjust the resource configuration, responding effectively to real-time changes in resource demand and improving the system's adaptability to the workload and its overall data-processing capacity.
Drawings
FIG. 1 is a schematic workflow diagram of the present invention;
FIG. 2 is a flowchart of the steps for obtaining the standardized resource feature values according to the present invention;
FIG. 3 is a flowchart of the steps for obtaining the resource demand prediction value according to the present invention;
FIG. 4 is a flowchart of the steps for obtaining the preliminary neural network model according to the present invention;
FIG. 5 is a flowchart of the steps for obtaining the optimized neural network model according to the present invention;
FIG. 6 is a flowchart of the steps for obtaining the adjusted resource configuration strategy according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Furthermore, in the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Referring to FIG. 1, the invention provides a hyper-converged computing power scheduling method based on a prediction model, comprising the following steps:
S1, collecting and organizing real-time data in the hyper-converged environment, including GPU utilization and network bandwidth information; removing noise from the data with digital filtering techniques; extracting key performance indicators; and numerically standardizing those indicators to establish standardized resource feature values;
S2, performing time-series analysis on the standardized resource feature values, constructing a prediction model to analyze short-term resource usage trends, and calculating the demand growth rates of GPU and storage within each time period to obtain a resource demand prediction value;
S3, taking the resource demand prediction value as input, constructing a primary neural network containing a feedback loop and nonlinear activation functions, performing a preliminary adjustment of weights and parameters to match the current resource usage pattern, and generating a preliminary neural network model;
S4, optimizing the preliminary neural network model by adjusting the depth and width of the network layers, selecting activation functions, and adjusting the weight parameters of the loss function; training cyclically with mini-batch gradient descent to match future changes in resource demand and generate an optimized neural network model;
S5, applying the optimized neural network model to predict resource usage and network load in real time, and adjusting resource allocation and data-processing paths according to the prediction results to obtain an adjusted resource configuration strategy.
The standardized resource feature values are GPU utilization and network bandwidth information; the resource demand prediction values comprise the GPU demand growth rate and the storage demand growth rate; the preliminary neural network model specifically refers to a neural network structure containing a feedback loop and nonlinear activation functions; the optimized neural network model specifically comprises the adjusted network-layer depth and width, the selected activation functions, and the adjusted loss-function weight parameters; and the adjusted resource configuration strategy specifically comprises real-time resource utilization prediction and network load prediction.
Referring to FIG. 2, the steps for obtaining the standardized resource feature values are specifically as follows:
S111, collecting GPU utilization and network bandwidth information from the hyper-converged environment, and organizing the data to generate a preliminary data set.
GPU and network bandwidth usage is recorded in real time, and the collected data are arranged and labeled in chronological order to ensure their integrity and traceability. Preliminary screening removes data points captured under abnormal operating conditions, such as during system maintenance, and retains the data for normal operating conditions. This guarantees the purity of the data set and the accuracy of subsequent processing, yielding the preliminary data set.
S112, applying digital filtering techniques to the preliminary data set, eliminating statistical noise and outliers to obtain a cleaned performance indicator data set.
Digital filtering is applied to the preliminary data set: a low-pass filter removes high-frequency noise, and median filtering removes sporadic extreme values. These techniques retain the most representative performance indicators in the data. This processing step is particularly critical to subsequent performance evaluation and directly affects its accuracy and reliability, yielding the cleaned performance indicator data set.
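This cleaning step can be sketched as follows, with a median filter to knock out sporadic extreme values followed by a short moving average as a simple low-pass. Both helper functions, the window sizes, and the sample trace are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch of S112: median filter, then moving-average low-pass.

def median_filter(samples, window=3):
    """Replace each sample with the median of its neighborhood,
    suppressing sporadic extreme values."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sorted(samples[lo:hi])[(hi - lo) // 2])
    return out

def moving_average(samples, window=3):
    """Simple low-pass: mean over a sliding window (shrinks at the edges)."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# GPU-utilization trace (percent) with one spurious spike at index 3.
raw = [42.0, 43.5, 41.8, 99.9, 42.2, 43.1, 42.7]
cleaned = moving_average(median_filter(raw))
```

The median stage removes the 99.9 spike before the averaging stage smooths the remaining high-frequency jitter; running the low-pass first would instead smear the spike into its neighbors.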
S113, numerically standardizing the cleaned performance indicator data set, converting GPU utilization and network bandwidth to a uniform scale and dimension with the Z-score method to obtain a standardized performance indicator data set.
Each data point's deviation from the mean is computed in units of the standard deviation using the Z-score method. In this step, every performance indicator is converted into a dimensionless form, which facilitates comparison and analysis across different types of data. The standardized data are also easier to use in multivariate analysis, ensuring fairness and consistency when comparing multiple performance indicators, and yielding the standardized performance indicator data set.
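A minimal Z-score standardization, as described in S113, looks like the following; the sample values are illustrative assumptions.

```python
# Hedged sketch of S113: Z-score standardization of two indicators
# measured on different scales and in different units.
import statistics

def z_score(values):
    """Map values to zero mean and unit standard deviation."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [(v - mu) / sigma for v in values]

gpu_util = [55.0, 60.0, 65.0]      # percent
bandwidth = [120.0, 80.0, 100.0]   # Mbit/s, a different scale and unit
gpu_z = z_score(gpu_util)
bw_z = z_score(bandwidth)
```

After standardization both indicators are dimensionless with mean 0 and standard deviation 1, so they can be weighted and combined directly in the next step.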
S114, selecting key performance indicators from the standardized performance indicator data set, calculating indicator weights, and using the formula:

$F = \dfrac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} |w_i|}$

to obtain the standardized resource feature value;

where $x_i$ denotes the normalized value of a single performance indicator, $w_i$ the business criticality of that indicator, $n$ the total number of performance indicators, and $F$ the composite standardized resource feature value.
The advantage of this formula is that weighting the performance indicators lets each indicator influence the result in proportion to its business criticality, so that overall system performance is estimated more accurately.
Formula details and derivation:
Assume three performance indicators with normalized values $x_1, x_2, x_3$ and business weights $w_1, w_2, w_3$. The denominator is the sum of the absolute weights, $\sum_{i=1}^{3} |w_i|$; the numerator is the weighted sum, $\sum_{i=1}^{3} w_i x_i$; and the weighted composite performance score is their quotient $F$. With the example values used here, $F = 0.37$.
The result shows that the overall performance indicator, weighted by business criticality, is 0.37, reflecting the system's performance level under the current evaluation. This numerical result is the direct calculation basis and value of the standardized resource feature value of this step.
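The composite score of S114, read as F = sum(w_i * x_i) / sum(|w_i|), can be sketched as follows. The numeric inputs are illustrative assumptions; only the structure (weighted sum over total absolute weight) comes from the derivation above.

```python
# Hedged sketch of the S114 composite standardized resource feature value.

def composite_score(x, w):
    """Weighted composite of normalized indicator values x,
    scaled by the total absolute business-criticality weight."""
    numerator = sum(wi * xi for wi, xi in zip(w, x))
    denominator = sum(abs(wi) for wi in w)
    return numerator / denominator

x = [0.2, 0.5, 0.4]   # normalized indicator values (assumed)
w = [0.5, 0.3, 0.2]   # business-criticality weights (assumed)
F = composite_score(x, w)
```

Dividing by the sum of absolute weights keeps F on the same scale as the normalized indicators regardless of how large the raw weights are.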
Referring to fig. 3, the steps for obtaining the resource demand predicted value specifically include:
S211, using the standardized resource feature values, partitioning them into consecutive time periods, rearranging them in chronological order and marking timestamps, to establish time-series data.
Through systematic monitoring, the collected raw time-series data capture the fluctuation of GPU usage and network bandwidth over multiple time periods, with each data point marked by a timestamp. The calculation of standardized resource feature values is based on these data, which are first segmented so that each segment includes continuous time-series information. This processing ensures the accuracy and reliability of the time-series analysis, through which dynamic changes in resource usage can be monitored effectively, providing input data for subsequent analysis. The result of this step is a complete time-series data set, used directly in the next stage of trend analysis and prediction-model building.
S212, analyzing the change trend of each continuous time period according to the resource characteristic value of each continuous time period based on the time sequence data, calculating the growth rate of the resource characteristic value in a plurality of time periods, and adopting the formula:
R = α · Σᵢ(wᵢ · xᵢ) + β + ε, i = 1, …, n;
obtaining the GPU and storage growth rates;
wherein R denotes the resource growth rate, xᵢ is the resource characteristic value within the target time period, wᵢ is the weighting parameter of the time slice, α is the calculation adjustment factor, β is the offset adjustment parameter, ε is the noise correction term, and n represents the number of time periods referenced in the calculation;
The advantage of this formula is that, by introducing the adjustment factor α and the offset parameter β, the flexibility and adaptability of the model are improved, and resource demand changes under differentiated conditions can be predicted more accurately.
Formula details and calculation derivation:
The set values are as follows: xᵢ is the average GPU usage or storage usage over the target period, wᵢ is the weighting parameter of the time slice, reflecting the criticality of the differentiated time slices, β is a fixed value set to adjust baseline wander, and α and ε are the adjustment factor and noise correction term, used to handle random fluctuations and outliers in the data. The calculation substitutes these values into the formula step by step: the weighted sum of the per-period characteristic values is formed first, the adjustment factor α is applied, and the offset β and the noise correction ε are then added.
The result shows that, with time weights and noise correction taken into account, the predicted resource growth rate is 134.49, indicating a significant upward trend in resource demand that calls for matching resource allocation and adjustment.
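The growth-rate calculation, under the formula shape reconstructed above, can be sketched in Python; all numeric values are hypothetical, chosen so that R reproduces the 134.49 of the worked example:

```python
# Reconstructed growth-rate formula (assumed shape):
# R = alpha * sum(w_i * x_i) + beta + eps

def growth_rate(x, w, alpha, beta, eps):
    """Weighted growth rate with adjustment factor, offset, and noise term."""
    return alpha * sum(wi * xi for wi, xi in zip(w, x)) + beta + eps

x = [120.0, 140.0, 160.0]  # per-period resource characteristic values (hypothetical)
w = [0.5, 0.3, 0.2]        # time-slice weights (hypothetical)
R = growth_rate(x, w, alpha=1.0, beta=0.39, eps=0.10)
print(round(R, 2))         # -> 134.49
```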
S213, based on the GPU and the stored growth rate, carrying out collective operation on the GPU and the resource use conditions of the adjacent time periods, and comprehensively calculating to obtain the resource demand growth rate in the time periods;
When determining the resource demand growth ratio for each time period, the GPU and storage growth rates are used: the resource usage of each continuous time period is compared with that of the adjacent periods through a set-operation approach, and the growth ratio of resource demand is calculated from the comparison. This makes it possible to analyze the dynamic changes in resource usage across multiple time periods and to predict the future demand trend. The calculation covers the whole process from actual resource usage to demand prediction, ensuring the accuracy and practicability of the prediction result and making resource management more efficient and precise.
S214, calling a resource demand growth rate, and carrying out accumulated calculation on the resource growth rate of all the fragments based on the resource use condition of each time fragment to obtain a resource demand predicted value.
Based on the accumulated calculation of the resource demand growth ratio, this step summarizes the resource usage of each time segment over the short term; the overall resource demand predicted value is obtained by accumulating the resource growth ratios of all segments.
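Steps S213 and S214 can be sketched as follows; the per-segment usage figures and the extrapolation rule are illustrative assumptions:

```python
# S213: compare adjacent segments to get per-segment growth ratios.
# S214: accumulate the ratios and extrapolate an overall demand forecast.

def demand_forecast(usage):
    """usage: per-segment resource usage.
    Returns (ratios, forecast): adjacent-segment growth ratios, and the
    last observation extrapolated by the accumulated growth."""
    ratios = [(b - a) / a for a, b in zip(usage, usage[1:])]
    accumulated = sum(ratios)
    return ratios, usage[-1] * (1 + accumulated)

usage = [50.0, 55.0, 60.5]            # e.g. GPU utilization per segment (%)
ratios, forecast = demand_forecast(usage)
print([round(r, 2) for r in ratios])  # -> [0.1, 0.1]
print(round(forecast, 2))             # -> 72.6
```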
Referring to fig. 4, the preliminary neural network model obtaining steps specifically include:
S311, calling a resource demand predicted value, inputting the resource demand predicted value into a primary neural network, initializing a feedback loop structure, setting a nonlinear activation function, carrying out randomized distribution and preliminary setting on a plurality of connection weights, and generating an initial weight matrix;
The resource demand predicted values are invoked and fed into the primary neural network as initial inputs. This step depends critically on the accuracy of those predicted values, which are obtained through time-series analysis of historical data and current resource usage patterns. The input process includes initializing the neural network structure: setting the activation functions, randomly assigning the initial weights, and determining the feedback loop structure. These preparations ensure that the network can begin self-tuning based on the input data. Each neuron of the neural network is initialized and made ready to accept input data during training; through this step, the neural network model becomes able to carry out basic training and learning.
S312, executing feedback loop operation, adjusting weight parameters in the initial weight matrix based on feedback output, and adopting a formula by combining dynamic output of nonlinear activation function response:
w′ = w + η · Σᵢ(δ · aᵢ) + b + ε, i = 1, …, n;
Adjusting each connection weight and updating the current weight matrix to obtain the initially adjusted weights and parameters;
wherein w′ represents the weight value after the next iteration, w is the current weight value, η is the learning rate, δ is the output value of the feedback loop, aᵢ is the input activation intensity of the connection, b is the adjustment deviation factor, ε is the noise correction term, and n represents the total number of elements involved in the summation operation;
The advantage of this formula is that, by introducing the adjustment deviation factor b and the noise correction term ε, the weights can be adjusted dynamically during actual training, which enhances the matching ability of the model and improves prediction accuracy.
Formula details and calculation derivation: three input nodes are set, with current weight w, learning rate η, feedback loop output δ, input activation intensities a₁, a₂, a₃, adjustment deviation factor b, and noise correction term ε. Substituting these values into the formula yields the adjusted weight for each of the three connections in turn.
The result shows that the weights after feedback adjustment fit the current training data more closely, improving the response sensitivity and accuracy of the model.
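A sketch of one feedback weight-update step under the formula reconstructed above; every numeric value is hypothetical:

```python
# Reconstructed feedback update (assumed shape):
# w_next = w + eta * sum(delta * a_i) + b + eps

def update_weight(w, eta, delta, activations, b, eps):
    """One feedback-loop weight adjustment step."""
    return w + eta * sum(delta * a for a in activations) + b + eps

w_next = update_weight(w=0.8, eta=0.05, delta=0.4,
                       activations=[0.5, 0.3, 0.9], b=0.01, eps=0.005)
print(round(w_next, 3))  # -> 0.849
```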
S313, combining the preliminarily adjusted weights and parameters with the predicted value of the resource demand, performing repeated iterative computation on the weights and parameters, and matching the current resource use mode to generate a preliminary neural network model.
Starting from the initially adjusted weights and parameters, the neural network is trained through multiple iterations. During training, the network repeatedly receives the resource demand predicted values and performs pattern matching and adjustment against them; each iteration adjusts the weights based on the current resource usage pattern and the predicted output. The process comprises multiple rounds of forward propagation and backward propagation in which the weights are adjusted so that the output approaches the expected result as closely as possible. The goal of each iteration is to reduce the prediction error and to improve the generalization ability of the network and its capacity to match new data. Each weight update is based on the current error rate and the previous weight values, and in this way the neural network gradually approaches the optimal parameter configuration. The generated preliminary neural network model can effectively reflect and match the actual resource usage pattern, giving the model better prediction accuracy and stability in practical application.
Referring to fig. 5, the steps for obtaining the optimized neural network model specifically include:
S411, based on a preliminary neural network model, increasing and reducing the number of layers and the number of nodes of each layer of the network, and optimizing and configuring the number of hidden layers and the number of nodes based on resource demand prediction to obtain an adjusted neural network architecture;
When adjusting the architecture of the neural network model, the influence of network depth and width on model performance must first be analyzed: increasing the number of layers improves the learning capacity of the model, while increasing the number of nodes captures more feature information. The depth and width of the network are adjusted and the optimal structure is determined through experiments. The number of hidden layers and the number of nodes in each layer are then configured based on the resource demand prediction data; the choice of each node count must be driven by actual data, and multiple experimental settings are used to verify the performance improvement, so that the neural network architecture is determined step by step.
S412, based on the adjusted neural network architecture, selecting matched activation functions for a plurality of hidden layers, performing combination test on each layer by using a differential activation function, and adopting the formula:
Y = Σᵢ λᵢ · f(xᵢ + b), i = 1, …, n;
Calculating the response output of the plurality of nodes to obtain an optimized activation function combination;
wherein Y represents the output of the neural network layer, i.e. the total output after the weighted sum of the node activation functions is processed, f is the node activation function, λᵢ is the weighting coefficient of the activation function, used to adjust the strength of the node input signal, xᵢ is the input signal, b is the bias parameter, used to adjust the activation threshold of the activation function, and n represents the total number of nodes in the current layer;
The advantage of this formula is that, by dynamically adjusting the weighting coefficients λᵢ of the activation functions, the output of each layer can be optimized according to that layer's characteristics, improving the learning and matching performance of the whole network.
Formula details and formula calculation derivation process:
The network has three layers, each using the same selected activation function; each input xᵢ is assigned a differentiated weight λᵢ, and these weights are adjusted during network training. The inputs of the first layer, x₁, x₂, x₃, are 0.5, 0.3, 0.9; the weights λ₁, λ₂, λ₃ are initially set to 1.2, 0.8, 1.0. After actual operation and adjustment, the weights are updated by gradient descent to 1.15, 0.85, 1.05. The output of the first layer is calculated as follows:
Y = 1.15 · f(0.5 + b) + 0.85 · f(0.3 + b) + 1.05 · f(0.9 + b);
wherein b is a bias term, a small constant such as 0.05. If the Sigmoid function is used as the activation function f, the output of each term is compressed to between 0 and 1.
The result shows that the output of each layer of the neural network can be controlled by adjusting the weight and the bias of the differentiation layer, and the processing capacity of the model on data is optimized.
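The first-layer output can be checked numerically with the example values from the text (sigmoid activation, weights 1.15, 0.85, 1.05, inputs 0.5, 0.3, 0.9, bias 0.05):

```python
# Y = sum(lambda_i * sigmoid(x_i + b)), with the patent's example values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_output(inputs, weights, bias):
    """Weighted sum of per-node sigmoid activations."""
    return sum(l * sigmoid(x + bias) for l, x in zip(weights, inputs))

Y = layer_output(inputs=[0.5, 0.3, 0.9],
                 weights=[1.15, 0.85, 1.05], bias=0.05)
print(round(Y, 3))  # -> 1.985
```

Each sigmoid term lies between 0 and 1, as the text states, so the layer output is bounded by the sum of the weights.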
And S413, adjusting weight parameters of the loss function based on the optimized activation function combination, performing small-batch gradient descent training by combining the current weight matrix, and iteratively calculating the neural network error to generate an optimized neural network model.
Adjusting the weight parameters of the loss function is a way of optimizing the neural network model: by tuning these weights the model matches the target data better, for example by increasing the weight of minority classes when handling imbalanced data. The adjustment of the weight parameters is determined by the distribution and characteristics of the data. Following the small-batch gradient descent method, the loss of each batch is calculated and the model parameters are updated according to the computed gradient of the loss function; through multiple iterations the loss value decreases step by step, and the model parameters gradually approach the optimal solution.
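A minimal sketch of small-batch gradient descent on a toy linear model; the data, learning rate, and batch size are hypothetical stand-ins for the network training described:

```python
# Mini-batch gradient descent: fit y ~ w*x under squared loss by
# computing the gradient on each small batch and updating w.
import random

def minibatch_gd(data, w=0.0, lr=0.01, batch=4, epochs=200, seed=0):
    data = list(data)                 # local copy; shuffled per epoch
    rng = random.Random(seed)
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch):
            chunk = data[i:i + batch]
            # gradient of mean squared error w.r.t. w on this batch
            grad = sum(2 * (w * x - y) * x for x, y in chunk) / len(chunk)
            w -= lr * grad
    return w

data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]]
w = minibatch_gd(data)
print(round(w, 2))  # -> 3.0 (the true slope)
```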
Referring to fig. 6, the steps for acquiring the adjusted resource allocation policy specifically include:
s511, processing the real-time monitoring data based on the optimized neural network model to obtain the current resource use condition, and performing predictive analysis based on the real-time resource use amount and the network load condition to obtain a real-time resource prediction result;
Based on the existing architecture of the optimized neural network model and the input of real-time monitoring data, the real-time data are first cleaned and preprocessed to remove outliers and noise and to ensure data quality. The parameter configuration of the network layers is then adjusted, including the hidden-layer parameters and the number of neurons per layer, in real time according to network traffic and resource usage, so as to match the dynamically changing network load. Prediction of resource usage and network load is then performed, and continuous data training and model optimization improve the accuracy and response speed of the prediction, yielding the real-time resource prediction result.
S512, calculating a resource demand change trend based on a real-time resource prediction result, adjusting the current resource configuration, smoothing a prediction error by adopting a weighted parameter, and adopting a formula:
P = Σᵢ(cᵢ + εᵢ) / (n · Σᵢ(wᵢ · uᵢ · (1 + lᵢ))), i = 1, …, n;
acquiring an adjusted resource proportion;
wherein P represents the adjusted resource proportion, wᵢ is the weighting coefficient, uᵢ is the real-time resource usage, lᵢ is the predicted load growth, cᵢ is the currently configured capacity, εᵢ is the noise correction term, and n represents the total number of resource types or nodes participating in the calculation;
The advantage of this formula is that it can dynamically adjust the resource proportion, match cyclically changing network conditions, and optimize system performance by integrating the weighted real-time resource usage and predicted load with reference to the currently configured capacity and noise correction.
Formula details and formula calculation derivation process:
3 resources (GPU, storage, and bandwidth) are set, with weights 0.5, 0.3, 0.2 respectively; current usage 70%, 50%, 60%; predicted increase 10%, 15%, 5%; currently configured capacity 100, 200, 300; and noise correction terms 0.1, 0.1, 0.1.
1. Calculate the weighted sum of the usage of the multiple resources:
0.5 · 70 · 1.10 + 0.3 · 50 · 1.15 + 0.2 · 60 · 1.05 = 38.5 + 17.25 + 12.6 = 68.35;
2. Calculate the adjusted sum of the configured capacity:
(100 + 0.1) + (200 + 0.1) + (300 + 0.1) = 600.3;
3. The resource proportion is:
P = 600.3 / (3 × 68.35) = 600.3 / 205.05 ≈ 2.92;
The result shows that the adjusted resource proportion is about 2.92, i.e. the amount of resource demand that each unit of configured capacity can support. This value guides the dynamic adjustment of resources, ensuring that system resources are neither overloaded nor left idle in response to changes in future resource demand.
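The worked example can be reproduced in Python; note that the formula shape used here is an assumption inferred from the worked numbers above:

```python
# Adjusted resource proportion (assumed shape, consistent with the
# worked example): P = sum(c_i + eps_i) / (n * sum(w_i * u_i * (1 + l_i)))

def adjusted_proportion(w, usage, growth, capacity, eps):
    n = len(w)
    weighted_use = sum(wi * ui * (1 + li)
                       for wi, ui, li in zip(w, usage, growth))
    capacity_sum = sum(ci + ei for ci, ei in zip(capacity, eps))
    return capacity_sum / (n * weighted_use)

P = adjusted_proportion(w=[0.5, 0.3, 0.2],
                        usage=[70.0, 50.0, 60.0],    # current usage (%)
                        growth=[0.10, 0.15, 0.05],   # predicted increase
                        capacity=[100.0, 200.0, 300.0],
                        eps=[0.1, 0.1, 0.1])
print(P)  # ~2.92, matching the example
```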
And S513, reallocating the data processing paths and the resource allocation among the plurality of network nodes based on the adjusted resource proportion and by combining the real-time resource prediction result to obtain an adjusted resource allocation strategy.
Based on the real-time prediction result and the current network load data, an advanced resource management strategy is adopted: the existing resource configurations are evaluated one by one, the peak and idle periods of resource usage are analyzed, and resources are dynamically adjusted for differentiated application and service requirements. By optimizing the data processing paths, data transmission delay and congestion are reduced and data processing efficiency is improved, yielding the adjusted resource allocation strategy. This strategy takes into account both the predicted resource demand for the future time period and the current resource usage, ensuring maximum resource utilization, avoiding resource waste, and improving the overall performance of the system and the user's service experience.
The super fusion calculation force scheduling system based on a prediction model is used for executing the above super fusion calculation force scheduling method based on a prediction model, the system comprising:
The data preprocessing module collects GPU utilization rate and network bandwidth data in a super fusion environment, removes random noise in the data by utilizing a digital filtering technology, screens key performance indexes of the GPU utilization rate and the network bandwidth, and converts the indexes into standardized resource characteristic values through linear transformation;
the trend prediction model module utilizes the standardized resource characteristic value to construct a time sequence analysis model, evaluates the dynamic change of the use of recent resources, calculates and predicts the GPU and the stored demand growth rate in a short period, and thus generates a resource demand prediction value;
The neural network construction module takes a predicted value of a resource demand as input, sets a feedback loop in the neural network, selects a nonlinear activation function, preliminarily adjusts network parameters, matches a current resource use mode, creates a preliminary neural network model, adjusts the depth, width and loss function weight of a network layer, carries out training and optimization of the network by adopting a small-batch gradient descent method, and adapts to the change of future resource demands to obtain an optimized neural network model;
The resource allocation strategy module deploys the optimized neural network model to conduct real-time resource use and network load prediction, dynamically adjusts resource allocation and data processing paths according to prediction results, and forms and implements an adjusted resource allocation strategy.
The present invention is not limited to the above embodiments; equivalent embodiments obtained by changing or modifying the technical disclosure described above may be applied to other fields, and any simple modifications, equivalent changes, and refinements made to the above embodiments in accordance with the technical substance of the present invention still fall within the scope of the technical disclosure.
Claims (5)
1. The super fusion calculation force scheduling method based on the prediction model is characterized by comprising the following steps of:
Collecting and arranging real-time data in a super-fusion environment, including GPU utilization rate and network bandwidth information, removing noise from the data by utilizing a digital filtering technology, extracting key performance indexes, carrying out numerical standardization on the key performance indexes, and establishing standardized resource characteristic values;
Carrying out time sequence analysis on the standardized resource characteristic values, constructing a prediction model to analyze the use trend of resources in a short period, and calculating the GPU and the stored demand growth rate in a time period to obtain a resource demand predicted value;
The resource demand predicted value obtaining step specifically includes:
Dividing the resource characteristic values based on continuous time periods by utilizing the standardized resource characteristic values, rearranging and marking time stamps according to a time sequence, and establishing time sequence data;
Based on the time series data, carrying out change trend analysis in time periods aiming at the resource characteristic value of each continuous time period, calculating the growth rate of the resource characteristic value in a plurality of time periods, and adopting the formula:
R = α · Σᵢ(wᵢ · xᵢ) + β + ε, i = 1, …, n;
obtaining the GPU and storage growth rates;
wherein R denotes the resource growth rate, xᵢ is the resource characteristic value within the target time period, wᵢ is the weighting parameter of the time slice, α is the calculation adjustment factor, β is the offset adjustment parameter, ε is the noise correction term, and n represents the number of time periods referenced in the calculation;
based on the GPU and the stored growth rate, carrying out collective operation on the GPU and the resource use conditions of adjacent time periods, and comprehensively calculating to obtain the resource demand growth rate in the time periods;
Invoking the resource demand increase ratio, and performing accumulated calculation on the resource increase ratio of all the fragments based on the resource use condition of each time fragment to obtain a resource demand predicted value;
inputting the resource demand predicted value, constructing a primary neural network comprising a feedback loop and a nonlinear activation function, performing preliminary adjustment of weights and parameters, matching the current resource use mode, and generating a preliminary neural network model;
The preliminary neural network model obtaining step specifically comprises the following steps:
Invoking the resource demand predicted value, inputting the resource demand predicted value into a primary neural network, initializing a feedback loop structure, setting a nonlinear activation function, carrying out randomized distribution and preliminary setting on a plurality of connection weights, and generating an initial weight matrix;
Executing feedback loop operation, adjusting weight parameters in the initial weight matrix based on feedback output, and adopting a formula by combining dynamic output of nonlinear activation function response:
w′ = w + η · Σᵢ(δ · aᵢ) + b + ε, i = 1, …, n;
Adjusting each connection weight and updating the current weight matrix to obtain the initially adjusted weights and parameters;
wherein w′ represents the weight value after the next iteration, w is the current weight value, η is the learning rate, δ is the output value of the feedback loop, aᵢ is the input activation intensity of the connection, b is the adjustment deviation factor, ε is the noise correction term, and n represents the total number of elements involved in the summation operation;
Combining the preliminarily adjusted weights and parameters with a resource demand predicted value, performing repeated iterative computation on the weights and parameters, and matching the current resource use mode to generate a preliminary neural network model;
optimizing the preliminary neural network model, selecting an activation function by adjusting the depth and the width of a network layer, adjusting the weight parameters of a loss function, performing cyclic training by a small-batch gradient descent method, matching future resource demand changes, and generating an optimized neural network model;
The obtaining step of the optimized neural network model specifically comprises the following steps:
based on the preliminary neural network model, increasing and reducing the number of layers and the number of nodes of each layer of the network, and optimizing and configuring the number of hidden layers and the number of nodes based on resource demand prediction to obtain an adjusted neural network architecture;
Based on the adjusted neural network architecture, selecting matched activation functions for a plurality of hidden layers, performing combination test on each layer by using a differential activation function, and adopting the formula:
Y = Σᵢ λᵢ · f(xᵢ + b), i = 1, …, n;
Calculating the response output of the plurality of nodes to obtain an optimized activation function combination;
wherein Y represents the output of the neural network layer, i.e. the total output after the weighted sum of the node activation functions is processed, f is the node activation function, λᵢ is the weighting coefficient of the activation function, used to adjust the strength of the node input signal, xᵢ is the input signal, b is the bias parameter, used to adjust the activation threshold of the activation function, and n represents the total number of nodes in the current layer;
based on the optimized activation function combination, weight parameters of the loss function are adjusted, a small batch of gradient descent training is implemented by combining the current weight matrix, neural network errors are calculated in an iterative mode, and an optimized neural network model is generated;
and applying the optimized neural network model, carrying out real-time prediction of resource use and network load, and adjusting resource allocation and a data processing path according to a prediction result to obtain an adjusted resource allocation strategy.
2. The method for super-fusion power scheduling based on a prediction model according to claim 1, wherein the standardized resource characteristic values are GPU usage rate and network bandwidth information, the resource demand predicted values include GPU demand growth rate and storage demand growth rate, the preliminary neural network model is a neural network structure including a feedback loop and a nonlinear activation function, the optimized neural network model is an adjusted network layer depth and width, a selected activation function and an adjusted loss function weight parameter, and the adjusted resource allocation strategy is real-time resource usage prediction and network load prediction.
3. The method for super fusion power scheduling based on a prediction model according to claim 2, wherein the step of obtaining the normalized resource feature value specifically comprises the following steps:
collecting GPU utilization rate and network bandwidth information from a super fusion environment, and meanwhile, finishing data to generate a preliminary data set;
applying a digital filtering technology to the preliminary data set, and obtaining a cleaned performance index data set by eliminating statistical noise and abnormal values;
Performing numerical standardization on the cleaned performance index data set, and converting the GPU utilization rate and the network bandwidth into uniform proportions and dimensions by adopting a Z-score method to obtain a standardized performance index data set;
based on the standardized performance index data set, selecting a key performance index, calculating index weight, and adopting a formula:
F = Σᵢ(kᵢ · xᵢ) / Σᵢ|kᵢ|, i = 1, …, n;
obtaining a standardized resource characteristic value;
wherein xᵢ represents the normalized value of a single performance index, kᵢ represents the business criticality of the performance index, n represents the total number of performance indexes, and F represents the integrated standardized resource characteristic value.
4. The method for super-fusion power scheduling based on a prediction model according to claim 1, wherein the step of obtaining the adjusted resource allocation policy specifically comprises:
processing the real-time monitoring data based on the optimized neural network model to obtain the current resource use condition, and performing predictive analysis based on the real-time resource use amount and the network load condition to obtain a real-time resource prediction result;
based on the real-time resource prediction result, calculating the resource demand change trend, adjusting the current resource configuration, smoothing the prediction error by adopting a weighted parameter, and adopting a formula:
P = Σᵢ(cᵢ + εᵢ) / (n · Σᵢ(wᵢ · uᵢ · (1 + lᵢ))), i = 1, …, n;
acquiring an adjusted resource proportion;
wherein P represents the adjusted resource proportion, wᵢ is the weighting coefficient, uᵢ is the real-time resource usage, lᵢ is the predicted load growth, cᵢ is the currently configured capacity, εᵢ is the noise correction term, and n represents the total number of resource types or nodes participating in the calculation;
And reallocating the data processing paths and the resource allocation among the plurality of network nodes based on the adjusted resource proportion and by combining a real-time resource prediction result to obtain an adjusted resource allocation strategy.
5. A super fusion calculation force scheduling system based on a prediction model, characterized in that the system is used for executing the super fusion calculation force scheduling method based on a prediction model as defined in any one of claims 1-4, the system comprising:
The data preprocessing module collects GPU utilization rate and network bandwidth data in a super fusion environment, removes random noise in the data by utilizing a digital filtering technology, screens key performance indexes of the GPU utilization rate and the network bandwidth, and converts the indexes into standardized resource characteristic values through linear transformation;
the trend prediction model module utilizes the standardized resource characteristic value to construct a time sequence analysis model, evaluates the dynamic change of the use of recent resources, calculates and predicts the GPU and the stored demand growth rate in a short period, and thus generates a resource demand prediction value;
the neural network construction module takes the predicted value of the resource demand as input, sets a feedback loop in the neural network, selects a nonlinear activation function, preliminarily adjusts network parameters, matches the current resource use mode, creates a preliminary neural network model, adjusts the depth, width and loss function weight of a network layer, adopts a small-batch gradient descent method to train and optimize the network, adapts to the change of future resource demands and obtains an optimized neural network model;
The resource allocation strategy module deploys the optimized neural network model to conduct real-time resource use and network load prediction, dynamically adjusts resource allocation and data processing paths according to prediction results, and forms and implements an adjusted resource allocation strategy.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411884270.3A CN119336516B (en) | 2024-12-20 | 2024-12-20 | Super fusion calculation force scheduling method and system based on prediction model |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119336516A CN119336516A (en) | 2025-01-21 |
| CN119336516B true CN119336516B (en) | 2025-04-01 |
Family
ID=94268075
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411884270.3A Active CN119336516B (en) | 2024-12-20 | 2024-12-20 | Super fusion calculation force scheduling method and system based on prediction model |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119336516B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119621520B (en) * | 2025-02-14 | 2025-07-22 | 广州七喜电脑有限公司 | Intelligent optimization method and system for server hardware resources based on adaptive neural network |
| CN120162161B (en) * | 2025-05-19 | 2025-07-15 | 广东联想懂的通信有限公司 | AI-based internet of things back-end calculation power demand prediction and intelligent response method |
| CN120371544B (en) * | 2025-06-27 | 2025-09-26 | 天津津能电力科学研究有限公司 | Dynamic allocation method for data processing resources of data center station |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117768469A (en) * | 2024-02-22 | 2024-03-26 | 广州宇中网络科技有限公司 | Cloud service management method and system based on big data |
| CN118590430A (en) * | 2024-06-29 | 2024-09-03 | 深圳市飞铃智能系统集成有限公司 | A network integrated dynamic resource routing management system and method |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11341396B2 (en) * | 2015-12-31 | 2022-05-24 | Vito Nv | Methods, controllers and systems for the control of distribution systems using a neural network architecture |
| WO2022161599A1 (en) * | 2021-01-26 | 2022-08-04 | Telefonaktiebolaget Lm Ericsson (Publ) | Training and using a neural network for managing an environment in a communication network |
Application History
- 2024-12-20: Application CN202411884270.3A filed in China (CN); granted as CN119336516B, legal status Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN119336516A (en) | 2025-01-21 |
Similar Documents
| Publication | Title |
|---|---|
| CN119336516B (en) | Super fusion calculation force scheduling method and system based on prediction model |
| CN110389820B (en) | A private cloud task scheduling method based on v-TGRU model for resource prediction |
| CN119597493A (en) | Distributed computing resource intelligent evolution method and system based on digital twin |
| CN118939438A (en) | Intelligent scheduling method and system for heterogeneous equipment system |
| CN118521139B (en) | System resource demand planning method and system based on artificial intelligence |
| CN119149209B (en) | GPU cluster data sharing method for AI model training |
| CN119578835B (en) | Emergency material reserve dynamic optimization method and system based on reinforcement learning |
| CN109558248A (en) | Method and system for determining resource allocation parameters for ocean model computation |
| CN119149231A (en) | Cloud computing resource scheduling method and system based on big data |
| CN117827617A (en) | Container cloud resource prediction method based on ARIMA-LSTM |
| CN120066790A (en) | Intelligent computing center resource allocation method and system based on dynamic load adjustment |
| CN119003138A (en) | Heterogeneous multi-core collaborative management and automatic operation and maintenance system |
| CN119759592A (en) | Dynamic load balancing method and system for multi-core heterogeneous ASIC computing main board |
| CN119473631A (en) | Computer and cloud computing power optimization method based on multi-task collaboration |
| Chen et al. | A combined trend virtual machine consolidation strategy for cloud data centers |
| CN119557088B (en) | Edge computing resource allocation method and system |
| CN120218539A (en) | Intelligent collaborative management method and system for network security operation and maintenance |
| CN120104308A (en) | Cloud platform resource scheduling optimization method and system based on multi-chip architecture |
| Cheng et al. | Optimizing load scheduling and data distribution in heterogeneous cloud environments using fuzzy-logic based two-level framework |
| Bi et al. | Adaptive prediction of resources and workloads for cloud computing systems with attention-based and hybrid LSTM |
| CN117632517A (en) | Dynamic scheduling management system and scheduling optimization method of IMA system based on performance evaluation |
| CN120634205B (en) | A dynamic control system for engineering cost based on cloud computing |
| CN120994570B (en) | Automatic test method and device based on product value distribution in system gray release |
| CN119473611B (en) | A high-efficiency business collaboration platform system |
| CN119561829B (en) | Multi-agent-based deep reinforcement learning method and system |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |