
CN116401056A - Method, device and equipment for determining nodes to be scheduled applied to server cluster

Info

Publication number
CN116401056A
Authority
CN
China
Prior art keywords
node
nodes
determining
scheduled
information
Prior art date
Legal status
Pending
Application number
CN202310372191.3A
Other languages
Chinese (zh)
Inventor
蔡中原
孙政清
沈震宇
白佳乐
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202310372191.3A priority Critical patent/CN116401056A/en
Publication of CN116401056A publication Critical patent/CN116401056A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a method, a device and equipment for determining nodes to be scheduled, which are applied to a server cluster, and can be applied to the technical field of cloud computing. The method comprises the following steps: analyzing a building request from a calling party to obtain static demand information; determining a node list according to the static demand information, wherein the node list comprises M nodes to be selected, and M is more than or equal to 2; determining dynamic resource use information corresponding to the node to be selected according to the node list, wherein the dynamic resource use information characterizes the real-time use condition of the resource of the node to be selected; determining nodes to be scheduled from M nodes to be selected according to the dynamic resource use information; and determining a scheduling result according to the node to be scheduled so as to return the scheduling result to the calling party.

Description

Method, device and equipment for determining nodes to be scheduled applied to server cluster
Technical Field
The disclosure relates to the technical field of cloud computing, and in particular relates to a method, a device and equipment for determining nodes to be scheduled, which are applied to a server cluster.
Background
The container is a standard unit of software that packages information such as code so that application software can run across different computing environments. Containers for MySQL databases are managed uniformly by Kubernetes clusters.
In a container building scenario, operation and maintenance personnel select a Kubernetes cluster, and a node to be scheduled is then selected for container building according to the native scheduling policy of that Kubernetes cluster. The native scheduling policy determines nodes to be scheduled according to static information such as the memory space of the container.
However, in actual use, the resources of the server cluster change dynamically, and determining the node to be scheduled only according to static information leads to low utilization of resources in the server cluster.
Disclosure of Invention
In view of the above problems, the present disclosure provides a method, an apparatus, and a device for determining a node to be scheduled, which are applied to a server cluster.
According to a first aspect of the present disclosure, there is provided a method for determining nodes to be scheduled applied to a server cluster, including:
analyzing a building request from a calling party to obtain static demand information;
determining a node list according to the static demand information, wherein the node list comprises M nodes to be selected, and M is more than or equal to 2;
determining dynamic resource use information corresponding to the node to be selected according to the node list, wherein the dynamic resource use information characterizes the real-time use condition of the resource of the node to be selected;
determining nodes to be scheduled from M nodes to be selected according to the dynamic resource use information;
And determining a scheduling result according to the node to be scheduled so as to return the scheduling result to the calling party.
According to an embodiment of the present disclosure, wherein the dynamic resource usage information includes central processor usage information;
according to the node list, determining dynamic resource usage information corresponding to the node to be selected, including:
based on the node list, acquiring the total resource quantity and the real-time use quantity of the central processing units of M nodes to be selected; and
and determining the CPU usage information corresponding to the M nodes to be selected according to the total resource amount and the real-time usage amount.
According to an embodiment of the present disclosure, determining a node to be scheduled from M nodes to be selected according to dynamic resource usage information includes:
according to the CPU usage information and the resource threshold value corresponding to the M nodes to be selected, R nodes to be selected are obtained from the M nodes to be selected in a screening mode, wherein R is more than or equal to 2, and R is less than or equal to M;
determining evaluation values of R nodes to be selected based on the CPU usage information of the R nodes to be selected; and
and determining the node to be scheduled from the R nodes to be selected according to the evaluation values of the R nodes to be selected.
According to an embodiment of the present disclosure, determining, based on central processor usage information of R nodes to be selected, evaluation values of the R nodes to be selected includes:
Based on the CPU usage information, sequencing R nodes to be selected to obtain a node sequence; and
based on the node sequence, evaluation values of the R candidate nodes are determined.
According to the embodiment of the disclosure, a node to be scheduled comprises N containers, the containers are used for packaging execution codes, the dynamic resource use information comprises N container resource use information, and N is greater than or equal to 1;
determining a scheduling result according to the node to be scheduled, including:
determining a container to be scheduled from N containers of the node to be scheduled according to the container resource use information; and
and generating a scheduling result according to the container to be scheduled.
According to an embodiment of the present disclosure, determining, according to a node list, dynamic resource usage information corresponding to a node to be selected includes:
determining a scheduling strategy of a target server cluster, wherein the target server cluster comprises M nodes to be selected;
and under the condition that the scheduling strategy is determined to be the dynamic scheduling strategy, determining the dynamic resource use information corresponding to the node to be selected according to the node list.
According to an embodiment of the disclosure, the static demand information includes first hardware specification information including hardware specification information of a central processor of the required node;
Determining a list of available nodes based on static demand information includes:
acquiring second hardware specification information, wherein the second hardware specification information comprises hardware specification information of central processing units of S nodes, the S nodes belong to a target server cluster, and S is greater than or equal to M; and
and screening M candidate nodes from the S nodes according to the first hardware specification information and the second hardware specification information to obtain a node list.
A second aspect of the present disclosure provides a node to be scheduled determining apparatus applied to a server cluster, including:
the analysis module is used for analyzing the construction request from the calling party to obtain static demand information;
the candidate node determining module is used for determining a usable node list according to the static demand information, wherein the node list comprises M nodes to be selected, and M is more than or equal to 2;
the dynamic resource determining module is used for determining dynamic resource use information corresponding to the node to be selected according to the node list, and the dynamic resource use information characterizes the real-time use condition of the node to be selected;
the node to be scheduled determining module is used for determining the node to be scheduled from M nodes to be selected according to the dynamic resource use information;
and the scheduling result determining module is used for determining a scheduling result according to the node to be scheduled so as to return the scheduling result to the calling party.
A third aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method for determining nodes to be scheduled for a server cluster.
A fourth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described method of determining nodes to be scheduled for a server cluster.
The fifth aspect of the present disclosure also provides a computer program product, comprising a computer program which, when executed by a processor, implements the above-mentioned method for determining nodes to be scheduled applied to a server cluster.
According to the embodiment of the disclosure, working nodes in a server cluster are subjected to primary screening according to static demand information, and a node list comprising a plurality of nodes to be selected is obtained; determining the most suitable current node to be scheduled from a plurality of nodes to be selected according to the dynamic resource use information of the nodes to be selected; and then, generating a scheduling result according to the node to be scheduled, and returning the scheduling result to the calling party, so that the scheduling node can be determined according to the actual resource use condition under the current condition, and the optimal scheduling of the container and the working node is realized. The embodiment of the disclosure can improve the resource utilization rate of the working node and the service operation efficiency while ensuring that the working node normally executes the construction request.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a method for determining nodes to be scheduled applied to a server cluster according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of determining nodes to be scheduled for application to a server cluster in accordance with an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method of CPU usage information determination according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a scheduling result generating method according to an embodiment of the disclosure;
fig. 5 schematically illustrates an application scenario of a method for determining nodes to be scheduled applied to a server cluster according to a specific embodiment of the present disclosure;
FIG. 6 schematically illustrates a scheduling scenario for a server cluster according to an embodiment of the disclosure;
fig. 7 schematically illustrates a block diagram of a node to be scheduled determination apparatus applied to a server cluster according to an embodiment of the present disclosure; and
fig. 8 schematically illustrates a block diagram of an electronic device adapted to be applied to a method of determining nodes to be scheduled of a server cluster according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression such as "at least one of A, B and C" is used, it should generally be interpreted according to the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and application of the data involved (including but not limited to users' personal information) all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
For a MySQL database, when a new container building request arrives, operation and maintenance personnel determine one Kubernetes cluster from a plurality of server clusters according to selection rules such as the deployment park and the high-availability architecture, and then determine the node to be scheduled from that Kubernetes cluster using its native scheduling policy. The native scheduling policy determines the working node according to static information such as the memory space of the container. For example, if the remaining capacity of a working node can satisfy the container building request, that working node is determined as the node to be scheduled; otherwise, if the remaining capacity of the working node cannot satisfy the container building request, no scheduling is performed.
However, the computing resources of the server cluster change dynamically: the usage of computing resources in a server cluster differs between peak and off-peak periods of traffic. The native scheduling policy determines the node to be scheduled only according to static information and cannot determine it according to the actual usage of computing resources during the current period, which causes the technical problem of a low utilization rate of server cluster resources.
The embodiment of the disclosure provides a method for determining nodes to be scheduled, which is applied to a server cluster, and comprises the following steps: analyzing a building request from a calling party to obtain static demand information; determining a node list according to the static demand information, wherein the node list comprises M nodes to be selected, and M is more than or equal to 2; determining dynamic resource use information corresponding to the node to be selected according to the node list, wherein the dynamic resource use information characterizes the real-time use condition of the resource of the node to be selected; determining nodes to be scheduled from M nodes to be selected according to the dynamic resource use information; and determining a scheduling result according to the node to be scheduled so as to return the scheduling result to the calling party.
Fig. 1 schematically illustrates an application scenario of a method for determining nodes to be scheduled applied to a server cluster according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server cluster 105. The network 104 is a medium used to provide a communication link between the first terminal device 101, the second terminal device 102, the third terminal device 103 and the server cluster 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with server cluster 105 via network 104 using at least one of first terminal device 101, second terminal device 102, third terminal device 103, to receive or send messages, etc. Various communication client applications, such as a shopping class application, a web browser application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only) may be installed on the first terminal device 101, the second terminal device 102, and the third terminal device 103.
The first terminal device 101, the second terminal device 102, the third terminal device 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
Server cluster 105 includes a plurality of servers that may be used to provide various services. For example, a user initiates a building request by using application software in the first terminal device 101, the second terminal device 102, and the third terminal device 103. Server cluster 105 receives the construction request through one or more servers, determines nodes to be scheduled, executes the construction request by using the nodes to be scheduled, and feeds back the processing result to the terminal device.
It should be noted that, the method for determining a node to be scheduled applied to a server cluster provided in the embodiments of the present disclosure may be generally performed by the server cluster 105. Accordingly, the node determining apparatus to be scheduled applied to a server cluster provided in the embodiments of the present disclosure may be generally disposed in the server cluster 105. The method for determining a node to be scheduled, which is applied to a server cluster and provided by the embodiments of the present disclosure, may also be performed by a server or a server cluster that is different from the server cluster 105 and is capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server cluster 105. Accordingly, the node determining apparatus to be scheduled, which is applied to the server cluster and provided in the embodiments of the present disclosure, may also be provided in a server or a server cluster that is different from the server cluster 105 and is capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server cluster 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The method for determining nodes to be scheduled, which is applied to a server cluster according to the disclosed embodiments, will be described in detail below with reference to fig. 2 to 6 based on the scenario described in fig. 1.
Fig. 2 schematically illustrates a flowchart of a method for determining nodes to be scheduled applied to a server cluster according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 includes operations S210-S250.
In operation S210, the build request from the caller is parsed to obtain static demand information.
In operation S220, a node list is determined according to the static demand information, where the node list includes M candidate nodes, and M is greater than or equal to 2.
In operation S230, dynamic resource usage information corresponding to the node to be selected is determined according to the node list.
In operation S240, a node to be scheduled is determined from the M nodes to be selected according to the dynamic resource usage information.
In operation S250, a scheduling result is determined according to the node to be scheduled, so as to return the scheduling result to the caller.
According to an embodiment of the disclosure, the caller includes application software on which the user can perform an operation, through which the build request is initiated.
A build request, according to embodiments of the present disclosure, may be understood as a request to build a container in a server cluster to perform any one of the functions within the application software. The container is a standard unit of software for packaging information such as code for the application software to run between different computing environments. For example, the container may encapsulate a dependent service or operating system for the application software.
According to embodiments of the present disclosure, the build request may be sent to multiple server clusters, such as multiple Kubernetes clusters, also referred to as K8s clusters. A single K8s cluster includes multiple working nodes, each of which may perform all or part of the application functions.
According to an embodiment of the present disclosure, the build request includes static demand information characterizing hardware device information, e.g., storage space, available space, etc., required by the node to be scheduled.
According to the embodiment of the disclosure, after the construction request is analyzed and static demand information is acquired, M nodes to be selected are acquired from one or more K8s clusters according to the static demand information, and a node list is generated. The hardware device information of the M nodes to be selected all meets the requirement of static demand information.
For example, the static demand information includes a required total amount of central processing unit (Central Processing Unit, CPU) resources, such as 30C (30 cores) or more. Working nodes whose total CPU amount is not less than 30C can then be screened from one or more K8s clusters according to the static demand information.
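To make operation S210 and the static demand information concrete, the following sketch assumes the build request arrives as a JSON document; the field names and layout are illustrative assumptions only, since the disclosure does not define a wire format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StaticDemand holds the static demand information parsed from a build
// request. The JSON layout and field names are illustrative assumptions;
// the disclosure does not define a wire format.
type StaticDemand struct {
	CPUTotal  int64 `json:"cpuTotal"`  // required total CPU, e.g. 30 (cores)
	MemoryGiB int64 `json:"memoryGiB"` // required memory
	DiskGiB   int64 `json:"diskGiB"`   // required disk space
}

// parseBuildRequest corresponds to operation S210: analyse the caller's
// build request and extract the static demand information.
func parseBuildRequest(raw []byte) (StaticDemand, error) {
	var d StaticDemand
	err := json.Unmarshal(raw, &d)
	return d, err
}

func main() {
	req := []byte(`{"cpuTotal": 30, "memoryGiB": 64, "diskGiB": 500}`)
	demand, err := parseBuildRequest(req)
	if err != nil {
		panic(err)
	}
	fmt.Printf("static demand: %+v\n", demand)
}
```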
According to the embodiment of the present disclosure, the dynamic resource usage information characterizes the real-time usage of the resources of the node to be selected. After the M nodes to be selected are determined according to the static demand information, the dynamic resource usage information corresponding to each node to be selected can be collected in real time, or retrieved from previously collected data, through a collection module in the K8s cluster.
For example, after determining the node list, the dynamic resource usage information of the M candidate nodes is collected in real time by the collection module. Or before determining the node list, collecting dynamic resource usage information of all nodes in one or more K8s clusters in real time. And then, acquiring dynamic resource use information of M nodes to be selected according to the node list.
According to the embodiment of the disclosure, the dynamic resource usage information may reflect current resource usage conditions of the M candidate nodes, so that according to the current resource usage conditions of the M candidate nodes, a current most suitable node to be scheduled may be determined.
According to the embodiment of the disclosure, after determining the node to be scheduled, the K8s cluster may generate a scheduling result according to the node to be scheduled, and persistently store the generated scheduling result in the K8s cluster. Meanwhile, the K8s cluster can also return a scheduling result to the calling party.
In the related art, in order to ensure that a working node can execute a container building request, when a node to be scheduled is selected, a native scheduling policy needs to ensure that the amount of remaining resources of the working node is greater than a minimum configuration required for the container to run.
For example, during a traffic peak, working node A has high computing resource usage and a low amount of remaining resources, so it cannot execute the container building request. During a traffic trough, working node A has low computing resource usage and a high amount of remaining resources, so it can execute container building requests. In the above case, the prior art determines the remaining resource amount statically rather than from real-time usage; therefore, in either case working node A cannot be used as the node to be scheduled, with the result that the server cluster cannot utilize the resources of working node A.
Therefore, in actual use, the resources in the server cluster change dynamically, and determining the node to be scheduled only according to static information results in a low utilization rate of the resources in the server cluster.
In the embodiment of the present disclosure, during a traffic trough, the remaining resource amount of working node A can be determined according to the dynamic resource usage; when it is determined that the remaining resource amount of working node A can execute the container building request, the embodiment of the present disclosure can use working node A as the node to be scheduled and make full use of the resources of working node A.
In the embodiment of the present disclosure, after the M nodes to be selected are obtained by screening based on the static demand information, the node to be scheduled is determined according to the dynamic resource usage information of the M nodes to be selected. Scheduling can therefore target the actual resource usage under the current conditions, ensuring that the working node executes the building request normally while improving the utilization rate of the working nodes.
According to the embodiment of the disclosure, working nodes in a server cluster are subjected to primary screening according to static demand information, and a node list comprising a plurality of nodes to be selected is obtained; determining the most suitable current node to be scheduled from a plurality of nodes to be selected according to the dynamic resource use information of the nodes to be selected; and then, generating a scheduling result according to the node to be scheduled, and returning the scheduling result to the calling party, so that the scheduling node can be determined according to the actual resource use condition under the current condition, and the optimal scheduling of the container and the working node is realized. The embodiment of the disclosure can improve the resource utilization rate of the working node and the service operation efficiency while ensuring that the working node normally executes the construction request.
Fig. 3 schematically illustrates a flowchart of a central processor usage information determination method according to an embodiment of the present disclosure.
As shown in fig. 3, the CPU usage information determination method 300 of this embodiment includes operations S331 to S332, which may be a specific embodiment of operation S230.
In operation S331, based on the node list, the total resource amount and the real-time usage amount of the central processing units of the M candidate nodes are obtained.
In operation S332, CPU usage information corresponding to the M candidate nodes is determined according to the total resource amount and the real-time usage amount.
According to the embodiment of the present disclosure, when the working node runs the code packaged in the container, it depends on the underlying resources of the computer device, such as CPU resources, memory resources, disk resources and network resources. In order to ensure that the working node can be scheduled and run normally, the remaining amount of underlying resources under the current conditions needs to meet the amount of resources required by the building request.
For example, the remaining amount of resources of the CPU resources in the working node satisfies the amount of CPU resources required to execute the construction request; the resource remaining amount of the memory resources in the working node meets the memory resource amount required by executing the building request; the resource remaining amount of the disk resources in the working node meets the disk resource amount required by executing the construction request; the amount of resources remaining in the network resources in the working node meets the amount of network resources required to execute the set-up request.
According to an embodiment of the present disclosure, the dynamic resource usage information includes central processor usage information, that is, a resource remaining amount of the CPU resource. The central processing unit use information determining method comprises the following steps of acquiring total resource quantity and real-time use quantity of central processing units of M nodes to be selected based on a node list; and determining the CPU usage information corresponding to the M nodes to be selected according to the total resource amount and the real-time usage amount.
According to an embodiment of the present disclosure, based on the node list, acquiring the total resource amount and the real-time usage amount of the central processing units of the M nodes to be selected includes: and when the node list is determined, the total resource quantity and the real-time use quantity of the M nodes to be selected are acquired in real time through the acquisition module.
According to an embodiment of the disclosure, the node list includes node identifiers of the nodes to be selected. Based on the node list, the obtaining the total resource quantity and the real-time use quantity of the central processing units of the M nodes to be selected comprises the following steps: the total resource amount and the real-time usage amount of all nodes in one or more K8s clusters are collected in real time before determining the node list. And acquiring the total resource quantity and the real-time use quantity of the M nodes to be selected according to the node identifiers of the M nodes to be selected.
According to the total resource amount and the real-time usage amount, determining CPU usage information corresponding to the M nodes to be selected includes: and determining the CPU use information according to the difference value of the total resource amount and the real-time use amount.
According to an embodiment of the present disclosure, the dynamic resource usage information further includes a resource remaining amount of the memory resource. The method for determining the resource remaining amount of the memory resource comprises the following steps: based on the node list, acquiring the total resource amount and the real-time use amount of the memories of M nodes to be selected; and determining the resource remaining quantity of the memory resources corresponding to the M nodes to be selected according to the total resource quantity and the real-time use quantity.
According to an embodiment of the present disclosure, the dynamic resource usage information further includes a resource remaining amount of the disk resource. Determining the resource remaining amount of the disk resource comprises the following steps: based on the node list, acquiring the total resource quantity and the real-time use quantity of the magnetic disks of M nodes to be selected; and determining the resource remaining quantity of the disk resources corresponding to the M nodes to be selected according to the total resource quantity and the real-time use quantity.
According to an embodiment of the present disclosure, the dynamic resource usage information further includes a resource remaining amount of the network resource. Determining the resource remaining of the network resource comprises the steps of: based on the node list, acquiring the total resource quantity and the real-time use quantity of the network of M nodes to be selected; and determining the resource remaining quantity of the network resources corresponding to the M nodes to be selected according to the total resource quantity and the real-time usage quantity. Where the network resources include bandwidth, traffic, etc. for transmission.
According to the embodiments of the present disclosure, a method for determining a remaining amount of resources of a memory resource, a remaining amount of resources of a disk resource, or a remaining amount of resources of a network resource is similar to a method for determining usage information of a central processing unit, and will not be described herein.
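A minimal sketch of the shared "total resource amount minus real-time usage amount" computation, applied uniformly to the four resource dimensions mentioned above; the type and field names are assumptions for illustration.

```go
package main

import "fmt"

// resourceSample is one collected measurement for a candidate node:
// the total amount and the real-time usage of a given resource.
type resourceSample struct {
	Total, Used float64
}

// remaining returns the resource remaining amount as the difference between
// the total resource amount and the real-time usage amount.
func (s resourceSample) remaining() float64 {
	return s.Total - s.Used
}

func main() {
	// Hypothetical measurements for a single candidate node.
	node := map[string]resourceSample{
		"cpu":     {Total: 32, Used: 17},    // cores
		"memory":  {Total: 128, Used: 96},   // GiB
		"disk":    {Total: 1000, Used: 400}, // GiB
		"network": {Total: 10, Used: 3},     // Gbit/s of bandwidth
	}
	for name, sample := range node {
		fmt.Printf("%s remaining: %.1f\n", name, sample.remaining())
	}
}
```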
In the related art, the remaining resource amount may be determined according to the requested resource amount of the building request and the total resource amount. However, in actual use, the requested resource amount is greater than the real-time usage amount. On the one hand, in order to ensure that enough resources are reserved to execute the building request, operation and maintenance personnel determine the floating range of the requested resource amount based on practical experience. On the other hand, in the case where the response to the request is erroneous, the real-time usage amount is far smaller than the requested resource amount, so the remaining amount of underlying resources cannot be determined accurately; as a result, the determined node to be scheduled is not the optimal working node, and the resource utilization rate in the server cluster is low.
According to the embodiment of the disclosure, by utilizing the difference value between the real-time usage amount and the total resource amount, inaccuracy of dynamic resource usage information caused by the difference between the actual usage amount and the theoretical request resource amount can be avoided, the scheduling accuracy can be improved, the resource utilization rate can be improved, and the efficient utilization of cloud server cluster computing resources can be realized.
According to an embodiment of the present disclosure, determining a node to be scheduled from M nodes to be selected according to dynamic resource usage information includes the steps of:
and screening R to obtain the R to-be-selected nodes from the M to-be-selected nodes according to the CPU use information and the resource threshold value corresponding to the M to-be-selected nodes, wherein R is more than or equal to 2, and R is less than or equal to M.
And determining evaluation values of the R nodes to be selected based on the CPU usage information of the R nodes to be selected.
And determining the node to be scheduled from the R nodes to be selected according to the evaluation values of the R nodes to be selected.
According to an embodiment of the present disclosure, the resource threshold characterizes the resource limit defined for containers within the node to be selected. The CPU usage information of the M nodes to be selected is compared with the resource threshold respectively, and R nodes to be selected are screened out from the M nodes to be selected, which ensures that the R nodes to be selected can execute the scheduling task smoothly.
According to an embodiment of the present disclosure, after R candidate nodes are determined, evaluation values of R candidate nodes are determined based on central processor usage information of the R candidate nodes.
For example, the evaluation values of the R candidate nodes are determined based on the magnitude of the CPU usage information.
Alternatively, the value of the CPU usage information is compared with scoring intervals to determine the evaluation value of the node to be selected; for example, the evaluation value of a node to be selected falling in the first scoring interval is 1, and the evaluation value of a node to be selected falling in the second scoring interval is 2. The scoring intervals can be determined according to historical operating conditions.
According to the embodiment of the disclosure, according to the evaluation values of the R nodes to be selected, determining the node to be selected with the highest evaluation value from the R nodes to be selected, and determining the node to be selected with the highest evaluation value as the node to be scheduled. Wherein the evaluation value characterizes the resource utilization degree of the node to be selected. The higher the evaluation value is, the higher the resource utilization degree is, and the lower the evaluation value is, the lower the resource utilization degree is.
According to the embodiment of the present disclosure, when a plurality of nodes to be selected share the highest evaluation value, one node to be selected is randomly chosen from them as the node to be scheduled.
In the embodiment of the present disclosure, since the CPU usage information characterizes the real-time usage of the CPU, the evaluation value determined from the CPU usage information can reflect the working condition of the working node at the current moment. Determining the node to be scheduled from the M nodes to be selected according to the CPU usage information can therefore improve the resource utilization rate.
According to an embodiment of the present disclosure, based on central processor usage information of R candidate nodes, evaluation values of R candidate nodes are determined, including the steps of:
and sequencing the R nodes to be selected based on the CPU usage information to obtain a node sequence.
Based on the node sequence, evaluation values of the R candidate nodes are determined.
According to an embodiment of the present disclosure, sorting the R candidate nodes based on the CPU usage information to obtain a node sequence includes: sorting the R pieces of CPU usage information, together with the corresponding R candidate nodes, in ascending order to obtain a node sequence containing the R candidate nodes. The evaluation values of the R candidate nodes are then determined according to their positions in the sequence.
For example, 3 candidate nodes, Node1, Node2 and Node3, are determined from the M candidate nodes according to the resource threshold and the CPU usage information. The CPU usage information of Node1 is 15C, that is, its remaining CPU resource amount is 15C; the remaining CPU resource amount of Node2 is 16C; and the remaining CPU resource amount of Node3 is 18C.
Sorting Node1, Node2 and Node3 by remaining CPU resource amount gives the node sequence Node1 → Node2 → Node3; the evaluation value of Node1 is 3, the evaluation value of Node2 is 2, and the evaluation value of Node3 is 1.
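The following sketch reproduces this example: it filters the candidates against the resource threshold, sorts the survivors by remaining CPU in ascending order, and assigns the evaluation values R, R-1, ..., 1 along the sequence, so that, as in the example above, the candidate with the smallest remaining CPU (Node1) receives the highest evaluation value and is selected. All identifiers are illustrative.

```go
package main

import (
	"fmt"
	"sort"
)

type candidate struct {
	Name         string
	CPURemaining int64 // CPU usage information expressed as remaining cores
}

// pickNodeToSchedule filters the candidates against the resource threshold,
// sorts the survivors by remaining CPU in ascending order, assigns the
// evaluation values R, R-1, ..., 1 along that sequence, and returns the
// candidate with the highest evaluation value.
func pickNodeToSchedule(nodes []candidate, threshold int64) (string, []int) {
	var r []candidate
	for _, n := range nodes {
		if n.CPURemaining >= threshold { // screening against the resource threshold
			r = append(r, n)
		}
	}
	sort.Slice(r, func(i, j int) bool { return r[i].CPURemaining < r[j].CPURemaining })
	scores := make([]int, len(r))
	best := ""
	for i := range r {
		scores[i] = len(r) - i // first in the ascending sequence scores highest
		if i == 0 {
			best = r[i].Name
		}
	}
	return best, scores
}

func main() {
	nodes := []candidate{{"Node1", 15}, {"Node2", 16}, {"Node3", 18}}
	best, scores := pickNodeToSchedule(nodes, 10)
	fmt.Println(best, scores) // Node1 [3 2 1]
}
```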
Fig. 4 schematically illustrates a flowchart of a scheduling result generating method according to an embodiment of the present disclosure.
As shown in fig. 4, the scheduling result generating method 400 of this embodiment includes operations S451 to S452, which may be a specific embodiment of operation S250.
In operation S451, a container to be scheduled is determined from N containers of nodes to be scheduled according to the container resource usage information.
In operation S452, a scheduling result is generated according to the container to be scheduled.
According to embodiments of the present disclosure, container resource usage information characterizes the resource real-time usage of the container. A single candidate node may comprise a plurality of containers and the dynamic resource usage information comprises a sum of container resource usage information for the plurality of containers.
According to an embodiment of the present disclosure, after determining a node to be scheduled, a container to be scheduled is determined from N containers of the node to be scheduled according to container resource usage information in the node to be scheduled.
Specifically, similarly to determining a node to be scheduled from M nodes to be selected according to dynamic resource usage information, determining a container to be scheduled according to container resource usage information includes:
according to the CPU usage information corresponding to the N containers and the container threshold, N' containers to be selected are obtained by screening from the N containers, wherein N' is greater than or equal to 2 and N' is less than or equal to N;
determining evaluation values of the N' containers to be selected based on the CPU usage information of the N' containers to be selected; and
and determining the container to be scheduled from the N' containers to be selected according to the evaluation values of the N' containers to be selected.
According to an embodiment of the present disclosure, determining the evaluation values of the N' containers to be selected based on their CPU usage information includes: sorting the N' containers to be selected based on the CPU usage information to obtain a container sequence; and determining the evaluation values of the N' containers to be selected based on the container sequence.
According to an embodiment of the present disclosure, after determining a container to be scheduled, a scheduling result is generated. The scheduling result comprises the identification information of the node to be scheduled and the identification information of the container to be scheduled.
According to the embodiment of the present disclosure, after the scheduling result is determined, the K8s cluster can persistently store the scheduling result in Etcd and control the node to be scheduled through the Master node, so that the node to be scheduled pulls the container to be scheduled onto the host machine and runs it. Each node to be selected is a Node, and the node to be scheduled is the Node screened out from the plurality of Node nodes to execute the building request.
Master node: is a K8s cluster control node for managing and controlling the entire cluster.
Node: also called hosts, each Node is assigned some workload by the Master Node. When a Node is down, the workload (container) on the Node is automatically transferred to other nodes by the Master Node.
Etcd node: a high availability distributed key value database.
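For illustration only, the scheduling result described above (identification information of the node to be scheduled and of the container to be scheduled) might be serialized as follows before being persisted to Etcd and returned to the caller; the field names are assumptions, not defined by the disclosure.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SchedulingResult carries the identification information of the node to be
// scheduled and of the container to be scheduled.
type SchedulingResult struct {
	NodeID      string `json:"nodeId"`
	ContainerID string `json:"containerId"`
}

func main() {
	result := SchedulingResult{NodeID: "Node1", ContainerID: "mysql-container-1"}
	payload, err := json.Marshal(result)
	if err != nil {
		panic(err)
	}
	// The Master node would persist this payload to Etcd and return it to
	// the caller; here it is only printed.
	fmt.Println(string(payload))
}
```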
The embodiment of the present disclosure can further determine the container to be scheduled within the node to be scheduled according to the container resource usage information, determining an execution unit of finer granularity and enriching the scheduling policy.
According to an embodiment of the present disclosure, determining dynamic resource usage information corresponding to a node to be selected according to a node list includes the steps of:
and determining a scheduling strategy of a target server cluster, wherein the target server cluster comprises M nodes to be selected.
And under the condition that the scheduling strategy is determined to be the dynamic scheduling strategy, determining the dynamic resource use information corresponding to the node to be selected according to the node list.
According to an embodiment of the present disclosure, the target server cluster is a server cluster for processing requests from the caller; in an embodiment of the present disclosure, the target server cluster is a K8s cluster. The M nodes to be selected can come from the same K8s cluster or from a plurality of K8s clusters, and each node to be selected is a Node.
According to an embodiment of the present disclosure, the scheduling policies of the target server cluster include a dynamic scheduling policy and a static scheduling policy; the static scheduling policy includes the native scheduling policy of the target server cluster, and the dynamic scheduling policy includes a scheduling policy obtained by modifying the native scheduling policy.
For example, the static scheduling policy includes a policy of determining a node to be scheduled according to static information such as static demand information or hardware device information. The dynamic scheduling policy comprises a policy for determining nodes to be scheduled according to dynamic information such as dynamic resource usage information.
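A minimal sketch of this policy branch, with the policy names, types and placeholder selection functions all being illustrative assumptions:

```go
package main

import "fmt"

type schedulingPolicy int

const (
	staticPolicy  schedulingPolicy = iota // native policy: static information only
	dynamicPolicy                         // extended policy: real-time resource usage
)

// selectNode dispatches to the native or the extended strategy depending on
// the scheduling policy configured for the target server cluster.
func selectNode(policy schedulingPolicy, candidates []string) string {
	switch policy {
	case dynamicPolicy:
		// Collect dynamic resource usage for the candidates and score them
		// (see the evaluation-value sketch earlier in this description).
		return pickByDynamicUsage(candidates)
	default:
		// Fall back to the cluster's native, static scheduling behaviour.
		return pickByStaticInformation(candidates)
	}
}

// Placeholder selectors so the sketch compiles; real implementations would
// query the collection module and the cluster state respectively.
func pickByDynamicUsage(c []string) string      { return c[0] }
func pickByStaticInformation(c []string) string { return c[len(c)-1] }

func main() {
	nodes := []string{"Node1", "Node2", "Node3"}
	fmt.Println(selectNode(dynamicPolicy, nodes))
}
```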
According to the embodiment of the present disclosure, in the case where the scheduling policy is determined to be the dynamic scheduling policy, determining the dynamic resource usage information corresponding to the node to be selected according to the node list is similar to the foregoing operations S331 to S332 and will not be described herein again.
According to the embodiment of the present disclosure, on the basis of the target server cluster, the dynamic scheduling policy for the target server cluster is implemented by configuring an application program interface. A scheme for determining the node to be scheduled in multiple ways can thus be realized without completely modifying the scheduling mode of the entire target server cluster, which enriches the native scheduling policy and improves scheduling accuracy and flexibility.
According to an embodiment of the present disclosure, the static demand information includes first hardware specification information including hardware specification information of a central processor of the required node.
According to an embodiment of the present disclosure, determining a list of available nodes based on static demand information includes the following steps.
And acquiring second hardware specification information, wherein the second hardware specification information comprises hardware specification information of central processing units of S nodes, the S nodes belong to a target server cluster, and S is greater than or equal to M.
And screening M candidate nodes from the S nodes according to the first hardware specification information and the second hardware specification information to obtain a node list.
According to embodiments of the present disclosure, the hardware specification information of the central processing unit may be the total amount of resources of the CPU.
According to an embodiment of the present disclosure, a purchase list may be acquired from a database, and second hardware specification information of S nodes is determined according to the purchase list. For example, when purchasing the server device, a purchase list is generated based on the amount of CPU total resources noted by the vendor.
According to an embodiment of the present disclosure, filtering M candidate nodes from S nodes according to the first hardware specification information and the second hardware specification information to obtain a node list includes: and screening M nodes with second hardware specification information larger than the first hardware specification information from the S nodes, taking the M nodes as M nodes to be selected, and generating a node list.
According to an embodiment of the present disclosure, the first hardware specification information includes hardware specification information of the central processing unit, that is, a total amount of resources of the CPU. The screening of M nodes with second hardware specification information greater than the first hardware specification information from the S nodes comprises: and screening M nodes with the total CPU resource amount larger than the required CPU resource amount from the S nodes.
According to an embodiment of the present disclosure, the first hardware specification information further includes at least one of: the total amount of resources required for the memory, the total amount of resources required for the disk, and the total amount of resources required for the network. The second hardware specification information further includes at least one of: the total resource amount of the node memory, the total resource amount of the node disk and the total resource amount of the node network.
According to an embodiment of the present disclosure, screening M nodes having second hardware specification information greater than first hardware specification information from S nodes includes: and screening M nodes with the total CPU resource amount larger than the required CPU resource amount from the S nodes. And/or, M nodes with total resources of the memory larger than the total resources of the required memory are selected from the S nodes. And/or M nodes with total resource quantity of the disk larger than the total resource quantity of the required disk are selected from the S nodes. And/or, M nodes with total resource quantity of the network larger than the total resource quantity of the required network are selected from the S nodes.
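A sketch of this preliminary screening across the CPU, memory, disk and network dimensions; the struct layout and the use of "meets or exceeds" comparisons are assumptions for illustration (the disclosure describes comparing the first and second hardware specification information per dimension).

```go
package main

import "fmt"

// hardwareSpec models the first/second hardware specification information:
// total amounts of CPU, memory, disk and network resources.
type hardwareSpec struct {
	CPU, Memory, Disk, Network int64
}

// covers reports whether a node's specification (second information) meets
// or exceeds the required specification (first information).
func (s hardwareSpec) covers(req hardwareSpec) bool {
	return s.CPU >= req.CPU && s.Memory >= req.Memory &&
		s.Disk >= req.Disk && s.Network >= req.Network
}

// preFilter screens the M candidate nodes out of the S nodes of the target
// server cluster, producing the node list used for dynamic collection.
// The order of the returned list is not significant here.
func preFilter(required hardwareSpec, nodes map[string]hardwareSpec) []string {
	var list []string
	for name, spec := range nodes {
		if spec.covers(required) {
			list = append(list, name)
		}
	}
	return list
}

func main() {
	required := hardwareSpec{CPU: 30, Memory: 64, Disk: 500, Network: 10}
	nodes := map[string]hardwareSpec{
		"Node1": {32, 128, 1000, 10},
		"Node2": {16, 64, 500, 10}, // filtered out: CPU below the requirement
		"Node3": {40, 256, 2000, 25},
	}
	fmt.Println(preFilter(required, nodes))
}
```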
According to the embodiment of the disclosure, the preliminary screening is performed according to static resources such as hardware specifications, and the like, the nodes to be selected can be screened from a plurality of nodes in one or more server clusters, so that the number of nodes for collecting the use information of dynamic resources is reduced, the detection workload is reduced, and the scheduling efficiency is improved.
Fig. 5 schematically illustrates an application scenario of a method for determining nodes to be scheduled applied to a server cluster according to a specific embodiment of the present disclosure.
As shown in fig. 5, the application 500 includes operations S501 to S509.
In operation S501, the caller initiates a build request. Specifically, the user may initiate a MySQL container setup request through a caller such as application software.
In operation S502, a build request is received. Specifically, the target server cluster receives the build request, such as by a Master node of K8s from the caller.
In operation S503, the node to be selected is screened based on the static demand information. Specifically, the hardware specification information included in the static demand information is compared with the hardware specification information of a plurality of nodes included in the target server cluster, and a plurality of nodes to be selected are screened from the plurality of nodes.
In operation S504, a scheduling policy is selected. Specifically, the dynamic scheduling policy may be determined as a preferred policy by priority setting.
In operation S505, dynamic resource usage information is collected. Specifically, the dynamic resource usage information of all Node nodes in the target server cluster, such as the real-time CPU usage amount and the total CPU resource amount of each Node, is collected through a collection module such as Prometheus.
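As an illustration of operation S505, a collection module such as Prometheus can be queried over its HTTP API. The PromQL expressions below are a common way to derive per-node real-time CPU usage and total core counts from node_exporter metrics; they are an assumption about a typical deployment, not queries specified by the disclosure.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// queryPrometheus issues an instant query against the Prometheus HTTP API
// (GET /api/v1/query) and returns the raw JSON response body.
func queryPrometheus(baseURL, promql string) (string, error) {
	resp, err := http.Get(baseURL + "/api/v1/query?query=" + url.QueryEscape(promql))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Hypothetical in-cluster Prometheus address.
	base := "http://prometheus.monitoring.svc:9090"

	// Real-time CPU usage (cores in use) per node, from node_exporter metrics.
	used := `sum(rate(node_cpu_seconds_total{mode!="idle"}[5m])) by (instance)`
	// Total CPU resource amount (number of cores) per node.
	total := `count(node_cpu_seconds_total{mode="idle"}) by (instance)`

	for _, q := range []string{used, total} {
		out, err := queryPrometheus(base, q)
		if err != nil {
			panic(err)
		}
		fmt.Println(out)
	}
}
```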
In operation S506, an evaluation value of the node to be selected is determined according to the dynamic resource usage information.
In operation S507, a node to be scheduled is determined. Specifically, the node to be selected with the highest evaluation value is determined as the node to be scheduled.
In operation S508, the scheduling result is persisted and returned. Specifically, a scheduling result is generated according to the node to be scheduled, and the scheduling result is stored in the Etcd node in a lasting manner. And simultaneously, returning the dispatching node to the calling party.
In operation S509, the container is operated. Specifically, the Master Node controls the Node to pull the container to be scheduled to the host machine for operation.
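For completeness, a hedged sketch of how operations S508 and S509 are conventionally realized with client-go: the scheduler submits a Binding for the container's pod, and the API server persists the decision to Etcd and lets the kubelet on the chosen Node pull and run the container. The disclosure does not prescribe this API; the namespace, pod and node names below are hypothetical.

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// bindPodToNode asks the API server to bind the container's pod to the node
// chosen by the scheduling result; persistence in Etcd and the subsequent
// image pull and start on that Node are then handled by Kubernetes itself.
func bindPodToNode(ctx context.Context, cs kubernetes.Interface, namespace, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: namespace},
		Target:     v1.ObjectReference{Kind: "Node", APIVersion: "v1", Name: node},
	}
	return cs.CoreV1().Pods(namespace).Bind(ctx, binding, metav1.CreateOptions{})
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes the custom scheduler runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Hypothetical names; in practice they come from the scheduling result.
	if err := bindPodToNode(context.Background(), cs, "default", "mysql-0", "node3"); err != nil {
		panic(err)
	}
}
```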
Fig. 6 schematically illustrates a scheduling scenario of a server cluster according to an embodiment of the present disclosure.
As shown in fig. 6, the caller generates a build request 601 and sends the build request 601 to a Master node 602 of the Kubernetes cluster.
The Master node 602 parses the building request 601 to obtain static demand information, and screens the Kubernetes cluster according to the static demand information to obtain the nodes to be selected Node1 6051, Node2 6052 and Node3 6053.
The Etcd node 603 is configured to store data or execution results generated by Kubernetes clusters.
The collection module 604 collects the actual usage of the underlying resources in the nodes to be selected Node1 6051, Node2 6052 and Node3 6053 to obtain the dynamic resource usage information. The collection module 604 may collect the actual usage of the nodes to be selected in the form of a plug-in.
According to an embodiment of the present disclosure, the collection module 604 returns the dynamic resource usage information of the nodes to be selected Node1 6051, Node2 6052 and Node3 6053 to the Master node 602, and the Master node 602 selects the node to be scheduled from the nodes to be selected.
After the Master node 602 determines a node to be scheduled, a scheduling result is generated according to the node to be scheduled, and the scheduling result is stored to the Etcd node 603.
In the related art, the selection of the working node relies on the scheduler of the K8s cluster. The scheduler schedules based on its native scheduling policy, which can only statically analyze the soft limits of the container, screen and score the working nodes according to their allocatable resources, and finally select a suitable working node. However, a working node determined by such static analysis is not necessarily the optimal working node at the current time: the resources consumed by a container are dynamic during actual operation, so the resource margins on the working nodes are also dynamic.
The embodiment of the disclosure provides a K8s extension scheduling policy based on the dynamic resource usage of MySQL containers. By adding a dynamic-resource-usage scheduling policy on top of the native scheduling policy, the scheduling of MySQL containers in a server cluster can be optimized and enriched, the operation and maintenance workload is simplified, the resource utilization rate can be improved, and the cost of cloud server resources is further reduced.
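One existing mechanism for hooking such an extension policy into K8s is the scheduler extender webhook, so a sketch of a "prioritize" endpoint is given below under that assumption. The JSON field names follow the extender v1 API, while the port, the route, and the CPU lookup are placeholders; the disclosure does not mandate this particular integration mechanism.

```python
# Hedged sketch: exposing the dynamic-resource policy as a scheduler extender
# "prioritize" webhook that scores nodes by free CPU.
from flask import Flask, request, jsonify

app = Flask(__name__)

def free_cpu_ratio(node_name):
    # Placeholder: in practice this would read the dynamic resource usage
    # collected for the node (e.g. from the Prometheus query sketched earlier).
    return 0.5

@app.route("/prioritize", methods=["POST"])
def prioritize():
    args = request.get_json()  # ExtenderArgs: contains the pod and the filtered node list
    node_names = [n["metadata"]["name"] for n in args["nodes"]["items"]]
    # Scheduler extenders expect a HostPriorityList; scores are mapped onto 0-10.
    priorities = [{"host": name, "score": int(free_cpu_ratio(name) * 10)} for name in node_names]
    return jsonify(priorities)

if __name__ == "__main__":
    app.run(port=8880)  # hypothetical port, referenced from the scheduler's extender config
```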
Fig. 7 schematically illustrates a block diagram of a node to be scheduled determination apparatus applied to a server cluster according to an embodiment of the present disclosure.
As shown in fig. 7, the node to be scheduled determining apparatus 700 applied to a server cluster of this embodiment includes a parsing module 710, a candidate node determining module 720, a dynamic resource determining module 730, a node to be scheduled determining module 740, and a scheduling result determining module 750.
The parsing module 710 is configured to parse the build request from the caller to obtain static requirement information. In an embodiment, the parsing module 710 may be configured to perform the operation S210 described above, which is not described herein.
The candidate node determining module 720 is configured to determine, according to the static requirement information, a usable node list, where the node list includes M candidate nodes, and M is greater than or equal to 2. In an embodiment, the candidate node determining module 720 may be configured to perform the operation S220 described above, which is not described herein.
The dynamic resource determining module 730 is configured to determine, according to the node list, dynamic resource usage information corresponding to the node to be selected, where the dynamic resource usage information characterizes the real-time resource usage of the node to be selected. In an embodiment, the dynamic resource determining module 730 may be configured to perform the operation S230 described above, which is not described herein.
The node to be scheduled determining module 740 is configured to determine a node to be scheduled from the M nodes to be selected according to the dynamic resource usage information. In an embodiment, the node to be scheduled determination module 740 may be configured to perform the operation S240 described above, which is not described herein.
The scheduling result determining module 750 is configured to determine a scheduling result according to the node to be scheduled, so as to return the scheduling result to the caller. In an embodiment, the scheduling result determining module 750 may be configured to perform the operation S250 described above, which is not described herein.
According to an embodiment of the present disclosure, the dynamic resource determination module 730 includes a first resource determination unit and a second resource determination unit.
The first resource determining unit is used for acquiring the total resource quantity and the real-time use quantity of the central processing units of the M nodes to be selected based on the node list. In an embodiment, the first resource determining unit may be configured to perform the operation S331 described above, which is not described herein.
The second resource determining unit is used for determining CPU usage information corresponding to the M nodes to be selected according to the total resource amount and the real-time usage amount. In an embodiment, the second resource determining unit may be configured to perform the operation S332 described above, which is not described herein.
According to an embodiment of the present disclosure, the node to be scheduled determination module 740 includes a first node determination unit, an evaluation value determination unit, and a second node determination unit.
The first node determining unit is used for screening R to-be-selected nodes from the M to-be-selected nodes according to the CPU usage information and the resource threshold value corresponding to the M to-be-selected nodes, wherein R is more than or equal to 2, and R is less than or equal to M.
The evaluation value determining unit is used for determining evaluation values of R nodes to be selected based on the CPU usage information of the R nodes to be selected.
The second node determining unit is used for determining the node to be scheduled from the R nodes to be selected according to the evaluation values of the R nodes to be selected.
According to an embodiment of the present disclosure, the evaluation value determination unit comprises a ranking subunit and an evaluation value determination subunit.
The sequencing subunit is used for sequencing the R nodes to be selected based on the use information of the central processing unit to obtain a node sequence.
The evaluation value determination subunit is configured to determine evaluation values of the R candidate nodes based on the node sequence.
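A compact sketch of this threshold-filter-then-rank logic is given below; the threshold value and the rank-to-score mapping are illustrative assumptions rather than the disclosed parameters.

```python
# Hedged sketch: drop overloaded candidates, then derive evaluation values from
# the position of each remaining candidate in the sorted node sequence.
def rank_based_evaluation(cpu_usage_info, usage_threshold=0.8):
    # Filter out nodes whose CPU usage ratio exceeds the resource threshold.
    eligible = {
        node: used / total
        for node, (used, total) in cpu_usage_info.items()
        if used / total <= usage_threshold
    }
    # Order the remaining R candidates from least to most loaded.
    sequence = sorted(eligible, key=eligible.get)
    # Earlier position in the sequence (less loaded) yields a higher evaluation value.
    return {node: len(sequence) - rank for rank, node in enumerate(sequence)}

usage = {"Node1": (3.2, 8.0), "Node2": (1.9, 2.0), "Node3": (6.0, 16.0)}
print(rank_based_evaluation(usage))  # Node2 is filtered out; Node3 gets the top score
```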
According to an embodiment of the present disclosure, the scheduling result determining module 750 includes a container determining unit and a result generating unit.
The container determining unit is used for determining the container to be scheduled from N containers of the node to be scheduled according to the container resource use information. In an embodiment, the container determining unit may be used to perform operation S451 described above, which is not described herein.
The result generating unit is used for generating a dispatching result according to the container to be dispatched. In an embodiment, the result generating unit may be configured to perform the operation S452 described above, which is not described herein.
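As a small illustration of this container-level step, the sketch below picks the container with the lowest current resource usage as the container to be scheduled; the selection rule and the data shape are assumptions made for the example.

```python
# Hedged sketch: choose the container to be scheduled from the N containers
# of the node to be scheduled, based on per-container resource usage.
def pick_container_to_schedule(container_usage):
    """container_usage maps container name -> current resource usage ratio."""
    return min(container_usage, key=container_usage.get)

scheduling_result = {"container": pick_container_to_schedule({"mysql-a": 0.7, "mysql-b": 0.2})}
print(scheduling_result)  # {'container': 'mysql-b'}
```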
According to an embodiment of the present disclosure, the dynamic resource determination module 730 further includes a policy determination unit and a policy execution unit.
The strategy determining unit is used for determining a scheduling strategy of a target server cluster, and the target server cluster comprises M nodes to be selected.
The policy execution unit is used for determining the dynamic resource usage information corresponding to the node to be selected according to the node list under the condition that the scheduling policy is determined to be the dynamic scheduling policy.
According to an embodiment of the present disclosure, the candidate node determination module 720 includes a hardware specification information acquisition unit and a node list determination unit.
The hardware specification information acquisition unit is used for acquiring second hardware specification information, wherein the second hardware specification information comprises hardware specification information of central processing units of S nodes, the S nodes belong to a target server cluster, and S is greater than or equal to M.
The node list determining unit is used for screening M candidate nodes from S nodes according to the first hardware specification information and the second hardware specification information to obtain a node list.
According to an embodiment of the present disclosure, any number of the parsing module 710, the candidate node determining module 720, the dynamic resource determining module 730, the node to be scheduled determining module 740, and the scheduling result determining module 750 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module.
According to embodiments of the present disclosure, at least one of the parsing module 710, the candidate node determination module 720, the dynamic resource determination module 730, the node to be scheduled determination module 740, and the scheduling result determination module 750 may be implemented, at least in part, as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or as hardware or firmware in any other reasonable manner of integrating or packaging the circuit, or as any one of or a suitable combination of any of the three. Alternatively, at least one of the parsing module 710, the candidate node determination module 720, the dynamic resource determination module 730, the node to be scheduled determination module 740, and the scheduling result determination module 750 may be at least partially implemented as a computer program module, which may perform corresponding functions when being executed.
Fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement the method for determining nodes to be scheduled applied to a server cluster according to an embodiment of the present disclosure.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 801 may also include on-board memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
In the RAM 803, various programs and data required for the operation of the electronic device 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or the RAM 803. Note that the program may be stored in one or more memories other than the ROM 802 and the RAM 803. The processor 801 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 800 may also include an input/output (I/O) interface 805, the input/output (I/O) interface 805 also being connected to the bus 804. The electronic device 800 may also include one or more of the following components connected to the input/output I/O interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 802 and/or RAM 803 and/or one or more memories other than ROM 802 and RAM 803 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to perform the methods provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 801. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed through the communication section 809, and/or installed from the removable medium 811. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing. When such a computer program is executed by the processor 801, the above-described functions defined in the system of the embodiments of the present disclosure are performed.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Such programming languages include, but are not limited to, Java, C++, Python, "C", or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The foregoing embodiments are merely illustrative of the present disclosure and are not intended to limit its scope. It is to be understood that modifications, equivalents, and improvements made without departing from the spirit and principles of the present disclosure shall fall within the scope of the present disclosure.

Claims (11)

1. A method for determining nodes to be scheduled applied to a server cluster comprises the following steps:
analyzing a building request from a calling party to obtain static demand information;
determining a node list according to the static demand information, wherein the node list comprises M nodes to be selected, and M is more than or equal to 2;
determining dynamic resource use information corresponding to the node to be selected according to the node list, wherein the dynamic resource use information represents the real-time use condition of the resource of the node to be selected;
determining nodes to be scheduled from the M nodes to be selected according to the dynamic resource use information;
and determining a scheduling result according to the node to be scheduled so as to return the scheduling result to the calling party.
2. The method of claim 1, wherein the dynamic resource usage information comprises central processor usage information;
and determining the dynamic resource usage information corresponding to the node to be selected according to the node list, including:
based on the node list, acquiring the total resource quantity and the real-time use quantity of the central processing units of M nodes to be selected; and
and determining CPU usage information corresponding to the M nodes to be selected according to the total resource amount and the real-time usage amount.
3. The method of claim 2, wherein the determining a node to be scheduled from the M nodes to be selected according to the dynamic resource usage information comprises:
according to the CPU usage information and the resource threshold value corresponding to the M nodes to be selected, R nodes to be selected are obtained from the M nodes to be selected in a screening mode, wherein R is more than or equal to 2, and R is less than or equal to M;
determining evaluation values of the R nodes to be selected based on the CPU usage information of the R nodes to be selected; and
and determining the node to be scheduled from the R nodes to be selected according to the evaluation values of the R nodes to be selected.
4. The method of claim 3, wherein the determining the evaluation values of the R candidate nodes based on the central processor usage information of the R candidate nodes comprises:
based on the CPU usage information, ordering the R nodes to be selected to obtain a node sequence; and
and determining evaluation values of the R nodes to be selected based on the node sequence.
5. The method of claim 1, wherein the node to be scheduled comprises N containers for encapsulating execution code, the dynamic resource usage information comprising N container resource usage information, N being equal to or greater than 1;
the step of determining the scheduling result according to the node to be scheduled comprises the following steps:
determining a container to be scheduled from N containers of the node to be scheduled according to the container resource use information; and
and generating a dispatching result according to the container to be dispatched.
6. The method of claim 1, wherein the determining, according to the node list, dynamic resource usage information corresponding to the node to be selected includes:
determining a scheduling policy of a target server cluster, wherein the target server cluster comprises the M nodes to be selected;
and under the condition that the scheduling strategy is determined to be a dynamic scheduling strategy, determining dynamic resource use information corresponding to the node to be selected according to the node list.
7. The method of claim 1, wherein the static demand information comprises first hardware specification information comprising hardware specification information of a central processor of a desired node;
determining a list of available nodes according to the static demand information comprises:
acquiring second hardware specification information, wherein the second hardware specification information comprises hardware specification information of central processing units of S nodes, the S nodes belong to a target server cluster, and S is greater than or equal to M; and
and screening the M candidate nodes from the S nodes according to the first hardware specification information and the second hardware specification information to obtain the node list.
8. A node to be scheduled determining apparatus applied to a server cluster, comprising:
the analysis module is used for analyzing the construction request from the calling party to obtain static demand information;
the candidate node determining module is used for determining a usable node list according to the static demand information, wherein the node list comprises M nodes to be selected, and M is more than or equal to 2;
the dynamic resource determining module is used for determining dynamic resource use information corresponding to the node to be selected according to the node list, and the dynamic resource use information characterizes the real-time use condition of the node to be selected;
the node to be scheduled determining module is used for determining nodes to be scheduled from the M nodes to be selected according to the dynamic resource use information;
and the scheduling result determining module is used for determining a scheduling result according to the node to be scheduled so as to return the scheduling result to the calling party.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1-7.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202310372191.3A 2023-04-10 2023-04-10 Method, device and equipment for determining nodes to be scheduled applied to server cluster Pending CN116401056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310372191.3A CN116401056A (en) 2023-04-10 2023-04-10 Method, device and equipment for determining nodes to be scheduled applied to server cluster

Publications (1)

Publication Number Publication Date
CN116401056A true CN116401056A (en) 2023-07-07

Family

ID=87017530



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination