CN111984510A - Performance test method and device of scheduling system - Google Patents
- Publication number
- CN111984510A (application number CN201910423634.0A)
- Authority
- CN
- China
- Prior art keywords
- node
- target container
- performance test
- slave
- container
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3457—Performance evaluation by simulation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45591—Monitoring or debugging support
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- Debugging And Monitoring (AREA)
- Test And Diagnosis Of Digital Computers (AREA)
Abstract
One or more embodiments of the present specification provide a method and an apparatus for testing the performance of a scheduling system. The method may include: determining a target container for performance testing; starting a plurality of processes in the target container to respectively simulate a plurality of slave nodes of the scheduling system; and performing a performance test on the master node of the scheduling system through the simulated slave nodes.
Description
Technical Field
One or more embodiments of the present disclosure relate to the field of system testing technologies, and in particular, to a method and an apparatus for testing performance of a scheduling system.
Background
A scheduling system schedules containers onto suitable hosts (physical or virtual machines) for running, while ensuring the stability of the running containers and the utilization of host resources. The scheduling system comprises a master node (master) and slave nodes (slaves). The master node makes global decisions for the cluster (such as scheduling) and detects and responds to cluster events (such as starting new replicas when the replica count is insufficient); a slave node runs on each host to maintain the running Pods (container groups) and provide their runtime environment.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a method and an apparatus for testing performance of a scheduling system.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided a performance testing method for a scheduling system, including:
determining a target container for performance testing;
starting a plurality of processes in the target container for respectively simulating a plurality of slave nodes of a scheduling system;
and carrying out performance test on the master node of the scheduling system through the simulated slave nodes.
According to a second aspect of one or more embodiments of the present specification, there is provided a resource scheduling method based on performance test data, including:
the method comprises the steps that a master node of a scheduling system obtains performance test data, wherein the performance test data are obtained by performing performance test on the master node through a slave node simulated by a target container, and a plurality of processes started in the target container are respectively used for simulating a plurality of slave nodes;
and the master node carries out resource scheduling on the slave nodes in the scheduling system within the performance range represented by the performance test data.
According to a third aspect of one or more embodiments of the present specification, there is provided a performance testing apparatus for a scheduling system, including:
a determination unit that determines a target container for performance testing;
the starting unit is used for starting a plurality of processes in the target container so as to respectively simulate a plurality of slave nodes of a scheduling system;
and the test unit is used for carrying out performance test on the master node of the scheduling system through the simulated slave node.
According to a fourth aspect of one or more embodiments of the present specification, there is provided a resource scheduling apparatus based on performance test data, including:
an obtaining unit, configured to enable a master node of a scheduling system to obtain performance test data, where the performance test data is obtained by performing a performance test on the master node by using a slave node simulated by a target container, where a plurality of processes started in the target container are respectively used to simulate a plurality of slave nodes;
and the scheduling unit enables the master node to schedule resources of the slave nodes in the scheduling system within the performance range represented by the performance test data.
According to a fifth aspect of one or more embodiments herein, there is provided an electronic device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the first aspect or the second aspect by executing the executable instructions.
According to a sixth aspect of one or more embodiments of the present description, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first or second aspect.
Drawings
Fig. 1 is a schematic architecture diagram of a performance testing system of a scheduling system according to an exemplary embodiment.
Fig. 2A is a flowchart of a performance testing method of a scheduling system according to an exemplary embodiment.
FIG. 2B is a flowchart of a method for scheduling resources based on performance test data according to an exemplary embodiment.
FIG. 3 is a schematic diagram of a production cluster provided by an exemplary embodiment.
Fig. 4 is a flowchart illustrating a performance test of a host node according to an example embodiment.
FIG. 5 is a schematic diagram of a performance test cluster provided by an exemplary embodiment.
Fig. 6 is a schematic structural diagram of an apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram of a performance testing apparatus of a scheduling system according to an exemplary embodiment.
Fig. 8 is a schematic structural diagram of another apparatus provided in an exemplary embodiment.
Fig. 9 is a block diagram of a resource scheduling apparatus based on performance test data according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Fig. 1 is a schematic architecture diagram of a performance testing system of a scheduling system according to an exemplary embodiment. As shown in fig. 1, the system may include a server 11, a network 12, a master node 13, and a set of slave nodes 14.
The server 11 may be a physical server comprising a separate host, or the server 11 may be a virtual server carried by a cluster of hosts. During operation, the server 11 may run a related program to implement the creation of the set of slave nodes 14 and the performance testing of the master node 13.
The master node 13 may be a physical server comprising an independent host, or the master node 13 may be a virtual server carried by a cluster of hosts. In operation, the master node 13 schedules each slave node included in the slave node set 14, and the server 11 can thereby determine the scheduling capability of the master node 13.
The slave node set 14 includes slave nodes 141, 142, etc., and the number of slave nodes is not limited in this specification. The server 11 may perform a performance test for the scheduling capability of the master node 13 by allocating corresponding test resources to simulate a large number of slave nodes and form the slave node set 14.
Fig. 2A is a flowchart of a performance testing method of a scheduling system according to an exemplary embodiment. As shown in fig. 2A, the method applied to a server (e.g., the server 11 shown in fig. 1) may include the following steps:
In step 202A, a target container for performance testing is determined.
In one embodiment, the target container refers to the container used to simulate the slave nodes required for the performance test. For example, when implemented with a Kubernetes system, the container may be a container managed by Kubernetes.
In an embodiment, the target container belongs to a corresponding container group; each container group comprises one or more target containers, and target containers of the same container group share external resources. For example, when implemented with a Kubernetes system, a container group may be a pod, and each pod may contain one or more containers. Different pods need to use different external resources, such as IP addresses and ports, while the containers contained in the same pod share the same external resources.
In one embodiment, the target container may comprise some or all of the containers contained in the same pod, and the description is not intended to be limiting. Thus, the number of target containers may be one or more; generally, since the specification (such as the number of CPU cores, the memory size, the disk size, etc.) of each container is limited, in order to simulate the formation of a large-scale slave node cluster, a larger number of target containers are often required to meet the scale requirement of the slave node cluster.
In an embodiment, the target container may be created based on specifically allocated resources; alternatively, the target container may be created based on free resources of the production environment or other environment, which may help to improve resource utilization in the production environment or other environment.
In step 204A, a plurality of processes are started in the target container for respectively simulating a plurality of slave nodes of the scheduling system.

In one embodiment, one target container can simulate a plurality of slave nodes of the scheduling system by starting a plurality of processes in it; that is, the target container corresponds to the slave nodes one-to-many. Accordingly, all slave nodes simulated by the same target container can share the same external resources, so there is no need to apply for independent external resources for each slave node, which greatly saves external resources.
In an embodiment, a node image may be constructed and its parameters configured to set the number of processes the target container starts, so that after the target container is created from the node image, the corresponding number of processes are started. The number of processes started in each target container may be determined by its actual specification: a target container with a relatively higher specification can simulate relatively more slave nodes, and one with a relatively lower specification relatively fewer.
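The process-count parameter described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; the environment variable name `SIMULATED_NODE_COUNT` and the `hollow_node` placeholder are assumptions for illustration only.

```python
import multiprocessing
import os


def hollow_node(node_id: str) -> None:
    """Stand-in for one simulated slave node's main loop.

    A real simulated node would register with the master and answer
    heartbeats; this placeholder exits immediately.
    """
    return


def start_simulated_nodes(prefix: str, count: int) -> list:
    """Start one process per simulated slave node and return the handles."""
    procs = []
    for i in range(count):
        p = multiprocessing.Process(target=hollow_node, args=(f"{prefix}-{i}",))
        p.start()
        procs.append(p)
    return procs


# The image parameter (here a hypothetical environment variable) decides
# how many processes, i.e. simulated slave nodes, each container starts.
NODE_COUNT = int(os.environ.get("SIMULATED_NODE_COUNT", "40"))
```

One process per simulated node keeps the nodes isolated from each other while all of them still share the container's network identity.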
In an embodiment, configuration information related to the performance test may be set in the node image to configure the slave nodes simulated by the target container, so that these slave nodes are used to perform the performance test on the master node. In other words, by setting the configuration information, the simulated slave nodes can be pointed at a performance test cluster comprising the simulated slave nodes and the master node to be tested. In particular, when multiple scenarios exist simultaneously, this accurately fixes the purpose of the simulated slave nodes and prevents them from being applied to other scenarios.
In step 206A, a performance test is performed on the master node of the scheduling system through the simulated slave nodes.
In one embodiment, a performance test on the master node often requires a large number of slave nodes. Starting multiple processes in each target container to simulate multiple slave nodes greatly reduces the number of target containers needed for a slave-node cluster of a given scale, compared with each target container simulating only one slave node. Moreover, even when multiple target containers belong to the same container group, all slave nodes they simulate share the same external resources, which greatly reduces the dependence on and occupation of external resources.
In one embodiment, at least one of the following operations may be controlled through an operation and maintenance system: creating the target container, simulating the slave nodes, performing the performance test on the master node, deleting the target container, and the like, which this specification does not limit. Because the operation and maintenance system can complete these operations automatically, accurately, and efficiently based on configured processing strategies, it effectively improves the processing efficiency of the related operations and reduces the error probability compared with performing them manually.
FIG. 2B is a flowchart of a method for scheduling resources based on performance test data according to an exemplary embodiment. As shown in fig. 2B, the method is applied to the master node of the scheduling system, and may include the following steps:
step 202B, the master node of the scheduling system obtains performance test data, where the performance test data is obtained by performing a performance test on the master node by a slave node simulated by a target container, where a plurality of processes started in the target container are respectively used to simulate a plurality of slave nodes.
In an embodiment, the performance test data obtained by the master node may be produced by the embodiment shown in fig. 2A and described above. In that process, a plurality of slave nodes are simulated by a plurality of processes on the same target container, which reduces the number of target containers used and the resources required for the performance test; details are not repeated here.
In step 204B, the master node performs resource scheduling on the slave nodes in the scheduling system within the performance range represented by the performance test data.
In an embodiment, the performance range of the master node during resource scheduling, such as the maximum number of slave nodes it supports scheduling and the recommended number of slave nodes, may be obtained from the performance test data. In the actual resource scheduling process, the master node can refer to this performance range to keep resource scheduling controllable, avoiding situations such as master-node abnormality caused by exceeding, or greatly exceeding, the performance range, and ensuring normal operation of the scheduling system.
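One way a master node might enforce the tested performance range is sketched below, assuming the test yields a maximum supported slave-node count; the function name and signature are illustrative, not from the patent.

```python
def admissible_new_nodes(current_nodes: int, requested: int, max_supported: int) -> int:
    """How many of the requested new slave nodes the master may accept
    without leaving the performance range established by the test."""
    headroom = max(0, max_supported - current_nodes)
    return min(requested, headroom)
```

For example, with 9800 nodes already scheduled against a tested maximum of 10000, only 200 of a 500-node request would be admitted.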
FIG. 3 is a schematic diagram of a production cluster provided by an exemplary embodiment. As shown in fig. 3, a production cluster comprising a production master node and a number of production slave nodes is formed in a production environment, with the production master node scheduling the number of production slave nodes. Similarly, a similar master-slave form of scheduling system or cluster may exist in other environments.
Assume that a Kubernetes system is used in this specification to implement application deployment and container management. The production slave nodes are deployed as nodes, each represented by a rounded rectangle in fig. 3; a node is a worker machine (physical or virtual) in Kubernetes. The production cluster may include one or more nodes, corresponding to the one or more production slave nodes described above. Each node may run one or more pods; for example, each node in fig. 3 contains 4 pods, each represented by a circle inside the node. A pod is an atomic unit in Kubernetes: each pod is a container group containing one or more containers (containers), each represented by a square inside the pod in fig. 3. For example, in the node on the left side of fig. 3, the pod in the upper-left corner contains 4 containers.
During operation, a certain amount of free resources may exist in the production environment; these can be applied to form a performance test cluster for testing the master node to be tested, improving the utilization of the free resources. Similarly, free resources in other environments may also be used to form the performance test cluster. Of course, the resources used by the performance test cluster may have other sources, such as specially allocated resources, which this specification does not limit.
The creation and operation of a performance test cluster is described below in conjunction with fig. 4. Fig. 4 is a flowchart illustrating a performance test performed on a host node according to an exemplary embodiment. As shown in fig. 4, the process may include the following steps:
In step 402, the required test resources are determined.

In one embodiment, it may be determined whether the free resources the production environment can provide are sufficient for the required number of simulated slave nodes. If the required number is large and exceeds the free resources the production environment provides, additional resources can be allocated to cover the excess; otherwise, the free resources provided by the production environment can be used directly.
In one embodiment, assuming the specification of a single container is 4 CPU cores + 8 GB memory + 60 GB disk, and each container can form 40 simulated slave nodes, then when 20000 simulated slave nodes are required, 500 containers need to be created, occupying 2000 CPU cores, 4000 GB of memory, and 30000 GB of disk.
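The arithmetic in the example above can be checked with a short helper; the default container specification mirrors the figures in the text.

```python
import math


def containers_needed(total_sim_nodes: int, nodes_per_container: int) -> int:
    """Number of target containers needed for the requested cluster size."""
    return math.ceil(total_sim_nodes / nodes_per_container)


def resources_needed(containers: int, cpu_cores: int = 4,
                     mem_gb: int = 8, disk_gb: int = 60) -> dict:
    """Total resources at the per-container specification from the text."""
    return {
        "cpu_cores": containers * cpu_cores,
        "mem_gb": containers * mem_gb,
        "disk_gb": containers * disk_gb,
    }
```

With 20000 simulated nodes at 40 per container, this reproduces the 500-container, 2000-core figure above.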
In step 404, a hollow-node image is constructed.

In one embodiment, a hollow-node image may be used to create the containers (more precisely, to create the pods containing them) that form the simulated slave nodes. Certain parameters can be configured on the hollow-node image, for example:

Setting the number of hollow-kubelet processes to start. By starting n hollow-kubelet processes within a container, n simulated slave nodes are formed accordingly. The number of slave nodes to form in each container can be determined in advance from the container's specification, the total number of simulated slave nodes required, and so on, and the number of hollow-kubelet processes set accordingly.

Specifying the configuration information of the performance test cluster. By specifying the configuration information of the performance test cluster in the hollow-node image, the subsequently formed simulated slave nodes are guaranteed to point at the performance test cluster; that is, they are added to the performance test cluster to perform the performance test on the master node to be tested, rather than being added to other clusters or applied to other purposes.
At step 406, containers are created from the image.

At step 408, simulated slave nodes are formed in the containers.
In one embodiment, when an operation and maintenance system, such as a PAAS (Platform-as-a-Service) platform or another type, has been pre-configured in an enterprise, it may be reused for the performance test cluster, for example to create containers from the hollow-node image mentioned above. By inputting the number of containers to create (determined in step 402) into the operation and maintenance system and invoking the hollow-node image, the corresponding number of containers can be created.
In one embodiment, multiple simulated slave nodes may be formed in each container by creating multiple hollow-kubelet processes in the container, each process forming one simulated slave node. Since the hollow-kubelet code in the related art supports creating only one process per container, this specification adjusts the code so that multiple processes can be created in each container. For example, in the embodiment shown in fig. 5, free resources in the production environment are created as a corresponding number of test nodes; each test node contains one or more pods, each pod contains one or more containers, and multiple simulated slave nodes are formed in each container.
Since different pods require different external resources, such as IP addresses and ports, forming a plurality of simulated slave nodes in each container yields far more simulated slave nodes than forming only one per container, while the number of pods and the external resources they occupy remain unchanged. Especially when forming a very large performance test cluster (say, ten thousand nodes or more), this greatly saves external resources.
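A rough sketch of the external-resource saving, assuming for illustration 4 containers per pod (so each pod needs one IP address):

```python
import math


def pods_needed(total_sim_nodes: int, nodes_per_container: int,
                containers_per_pod: int) -> int:
    """Pods (hence IP addresses) needed for a given simulated-node count."""
    containers = math.ceil(total_sim_nodes / nodes_per_container)
    return math.ceil(containers / containers_per_pod)
```

Under this assumption, 20000 simulated nodes need 5000 pod IPs at one node per container, but only 125 at 40 nodes per container, a 40-fold saving in external resources.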
In an embodiment, each simulated slave node can be configured with a different slave-node ID, so that during a subsequent performance test a faulty simulated slave node can be accurately located by its ID, facilitating quick resolution of the corresponding error.
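A per-node ID scheme might look like the following sketch; the ID format, which combines the container name with the process index, is an assumption for illustration, not the patent's scheme.

```python
def assign_node_ids(container_name: str, process_count: int) -> list:
    """Derive a distinct slave-node ID from the container name and the
    process index, so a faulty node can be traced to its process."""
    return [f"{container_name}-hollow-{i:04d}" for i in range(process_count)]
```

Embedding the container name in each ID lets an error report identify both the simulated node and the container hosting it.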
At step 410, a performance test operation is performed.
In an embodiment, the operation and maintenance system may automatically control the performance test cluster according to pre-configured performance test logic, so as to test the scheduling performance of the master node to be tested over the simulated slave nodes and obtain the corresponding performance test data.
At step 412, test resources are returned.
In an embodiment, after the performance test is completed, the operation and maintenance system can automatically scale down the containers in the performance test cluster, returning the corresponding resources to the production environment. This avoids occupying resources of, and affecting, the production cluster for a long time, and achieves elastic scaling management of the performance test environment.
Reusing the operation and maintenance system during the performance test improves operating efficiency and reduces the error probability compared with performing the related operations manually, thereby greatly reducing the operation and maintenance risks of the performance test. This is especially suitable for rapid elastic scaling in large-scale or ultra-large-scale performance test scenarios.
FIG. 6 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 6, at the hardware level, the apparatus includes a processor 602, an internal bus 604, a network interface 606, a memory 608 and a non-volatile memory 610, but may also include hardware required for other services. The processor 602 reads the corresponding computer program from the non-volatile memory 610 into the memory 608 and runs the computer program, thereby forming a performance testing device of the dispatching system on a logic level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 7, in a software implementation, the performance testing apparatus of the scheduling system may include:
A determination unit 71 that determines a target container for performance testing;
a starting unit 72 that starts a plurality of processes in the target container for respectively simulating a plurality of slave nodes of a scheduling system;
and the test unit 73 is used for carrying out performance test on the master node of the scheduling system through the simulated slave nodes.
Optionally, the target container belongs to a corresponding container group; each container group comprises one or more target containers, and external resources are shared among the target containers of the same container group.
Optionally, the external resource includes at least one of: IP address and port.
Optionally, the starting unit 72 is specifically configured to:
constructing a node image;
configuring parameters of the node image to set the number of processes started by the target container;
after the target container is created from the node image, starting a plurality of processes corresponding to the configured number.
Optionally, the method further includes:
a setting unit 74, configured to set configuration information related to performance testing in the node image, so as to configure the slave node of the target container simulation, so that the slave node is applied to perform performance testing on the master node.
Optionally, the method further includes:
a control unit 75 for controlling at least one of the following operations through the operation and maintenance system: creating the target container, simulating the slave node, performing performance test on the master node, and deleting the target container.
Optionally, the target container is created based on free resources of the production environment.
FIG. 8 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 8, at the hardware level, the apparatus includes a processor 802, an internal bus 804, a network interface 806, a memory 808, and a non-volatile memory 810, but may also include hardware required for other services. The processor 802 reads a corresponding computer program from the non-volatile memory 810 into the memory 808 and then runs the computer program, thereby forming a resource scheduling device based on the performance test data on a logic level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 9, in a software implementation, the resource scheduling apparatus based on performance test data may include:
An obtaining unit 91, configured to enable a master node of a scheduling system to obtain performance test data, where the performance test data is obtained by performing a performance test on the master node by using a slave node simulated by a target container, where a plurality of processes started in the target container are respectively used to simulate a plurality of slave nodes;
the scheduling unit 92 is configured to enable the master node to perform resource scheduling on the slave nodes in the scheduling system within the performance range represented by the performance test data.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal or a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. Depending on the context, the word "if" as used herein may be interpreted as "when," "upon," or "in response to determining."
The above description merely illustrates preferred embodiments of this specification and is not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of this specification shall fall within its scope of protection.
Claims (18)
1. A performance test method of a scheduling system is characterized by comprising the following steps:
determining a target container for performance testing;
starting a plurality of processes in the target container for respectively simulating a plurality of slave nodes of a scheduling system;
and carrying out performance test on the master node of the scheduling system through the simulated slave nodes.
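A minimal sketch of the claimed steps, assuming a container whose entrypoint forks worker processes so that each process stands in for a distinct slave node. All names here are invented for illustration and are not part of the claims:

```python
# Sketch, not the patented implementation: several processes started
# inside one container, each simulating an independent slave node.
import multiprocessing


def run_slave(node_id: int, reports: multiprocessing.Queue) -> None:
    # A real simulated slave would register with the master node and
    # exchange heartbeats; here it only reports that it came up.
    reports.put(f"slave-{node_id} ready")


def simulate_slaves(num_slaves: int) -> list:
    reports: multiprocessing.Queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=run_slave, args=(i, reports))
               for i in range(num_slaves)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Collect one report per worker; sort for a deterministic view.
    return sorted(reports.get() for _ in workers)


if __name__ == "__main__":
    print(simulate_slaves(3))
```

Because each process carries its own identity, a single container can present the master node with many apparent slave nodes, which is what makes a large-scale performance test cheap to stage.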
2. The method of claim 1, wherein the target container belongs to a respective group of containers; each container group comprises one or more target containers, and external resources are shared among the target containers of the same container group.
3. The method of claim 2, wherein the external resource comprises at least one of: IP address and port.
4. The method of claim 1, wherein the initiating a plurality of processes in the target container comprises:
constructing a node image;
performing parameter configuration on the node image to set the number of processes to be started by the target container;
and after the target container is created from the node image, starting a plurality of processes corresponding to the configured number of processes.
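Under the assumption that the parameter configuration reaches the container as an environment variable (the variable name `SLAVE_PROCESS_COUNT` is hypothetical, chosen only for this sketch), the entrypoint started from the node image might look like:

```python
# Hypothetical container entrypoint: reads an assumed environment
# variable set when the container is created from the node image,
# then starts that many slave processes.
import multiprocessing
import os


def slave_main(node_id: int) -> None:
    # Placeholder for real slave behavior (registration, heartbeats,
    # accepting work from the master's scheduler).
    pass


def start_configured_slaves() -> int:
    count = int(os.environ.get("SLAVE_PROCESS_COUNT", "1"))
    procs = [multiprocessing.Process(target=slave_main, args=(i,))
             for i in range(count)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return len(procs)


if __name__ == "__main__":
    # The value would normally be injected at container creation,
    # e.g. `docker run -e SLAVE_PROCESS_COUNT=4 <node-image>`.
    os.environ["SLAVE_PROCESS_COUNT"] = "4"
    print(start_configured_slaves())  # → 4
```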
5. The method of claim 4, further comprising:
setting, in the node image, configuration information related to the performance test, for configuring the slave nodes simulated by the target container, so that the slave nodes are used to perform the performance test on the master node.
6. The method of claim 1, further comprising:
controlling, by an operation and maintenance system, at least one of the following operations: creating the target container, simulating the slave nodes, performing the performance test on the master node, and deleting the target container.
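The create/simulate/test/delete cycle that the operation and maintenance system drives can be sketched as follows; the `Ops` interface and every method name on it are hypothetical stand-ins, not part of the claims:

```python
# Hypothetical orchestration of the lifecycle in this claim; the
# in-memory FakeOps exists only so the cycle can be exercised here.
def run_test_cycle(ops, num_slaves: int):
    container = ops.create_target_container()
    try:
        ops.simulate_slaves(container, num_slaves)
        return ops.test_master(container)
    finally:
        # Always delete the container so borrowed resources are freed,
        # even if the test raised an exception.
        ops.delete_target_container(container)


class FakeOps:
    """Minimal in-memory stand-in used only to exercise the cycle."""

    def __init__(self):
        self.log = []

    def create_target_container(self):
        self.log.append("create")
        return "container-1"

    def simulate_slaves(self, container, n):
        self.log.append(f"simulate:{n}")

    def test_master(self, container):
        self.log.append("test")
        return {"container": container, "ok": True}

    def delete_target_container(self, container):
        self.log.append("delete")


ops = FakeOps()
result = run_test_cycle(ops, num_slaves=8)
print(ops.log)  # → ['create', 'simulate:8', 'test', 'delete']
```

The `try/finally` shape matters: teardown of the target container is unconditional, which keeps repeated test runs from leaking containers.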
7. The method of claim 1, wherein the target container is created based on free resources of a production environment.
8. A resource scheduling method based on performance test data is characterized by comprising the following steps:
a master node of a scheduling system obtains performance test data, the performance test data being obtained by performing a performance test on the master node through slave nodes simulated by a target container, wherein a plurality of processes started in the target container respectively simulate a plurality of slave nodes;
and the master node performs resource scheduling on the slave nodes in the scheduling system within the performance range represented by the performance test data.
9. A performance testing apparatus for a scheduling system, comprising:
a determination unit that determines a target container for performance testing;
the starting unit is used for starting a plurality of processes in the target container so as to respectively simulate a plurality of slave nodes of a scheduling system;
and the test unit is used for carrying out performance test on the master node of the scheduling system through the simulated slave node.
10. The apparatus of claim 9, wherein the target container belongs to a respective group of containers; each container group comprises one or more target containers, and external resources are shared among the target containers of the same container group.
11. The apparatus of claim 10, wherein the external resource comprises at least one of: IP address and port.
12. The apparatus according to claim 9, wherein the starting unit is specifically configured to:
constructing a node image;
performing parameter configuration on the node image to set the number of processes to be started by the target container;
and after the target container is created from the node image, starting a plurality of processes corresponding to the configured number of processes.
13. The apparatus of claim 12, further comprising:
and a setting unit, configured to set, in the node image, configuration information related to the performance test, for configuring the slave nodes simulated by the target container, so that the slave nodes are used to perform the performance test on the master node.
14. The apparatus of claim 9, further comprising:
a control unit, which controls, through an operation and maintenance system, at least one of the following operations: creating the target container, simulating the slave nodes, performing the performance test on the master node, and deleting the target container.
15. The apparatus of claim 9, wherein the target container is created based on free resources of a production environment.
16. A resource scheduling apparatus based on performance test data, comprising:
an obtaining unit, configured to enable a master node of a scheduling system to obtain performance test data, the performance test data being obtained by performing a performance test on the master node through slave nodes simulated by a target container, wherein a plurality of processes started in the target container respectively simulate a plurality of slave nodes;
and a scheduling unit, configured to enable the master node to perform resource scheduling on the slave nodes in the scheduling system within the performance range represented by the performance test data.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-8 by executing the executable instructions.
18. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910423634.0A CN111984510B (en) | 2019-05-21 | 2019-05-21 | Performance test method and device for dispatching system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111984510A | 2020-11-24 |
| CN111984510B | 2024-05-17 |
Family
ID=73436192
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910423634.0A Active CN111984510B (en) | 2019-05-21 | 2019-05-21 | Performance test method and device for dispatching system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111984510B (en) |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101585160B1 (en) * | 2015-03-12 | 2016-01-13 | 주식회사 모비젠 | Distributed Computing System providing stand-alone environment and controll method therefor |
| EP3561672B1 (en) * | 2015-04-07 | 2022-06-01 | Huawei Technologies Co., Ltd. | Method and apparatus for a mobile device based cluster computing infrastructure |
| CN106209741B (en) * | 2015-05-06 | 2020-01-03 | 阿里巴巴集团控股有限公司 | Virtual host, isolation method, resource access request processing method and device |
| CN104899077A (en) * | 2015-06-30 | 2015-09-09 | 北京奇虎科技有限公司 | Process information acquiring method and device based on container technology |
| US9621643B1 (en) * | 2015-07-31 | 2017-04-11 | Parallels IP Holdings GmbH | System and method for joining containers running on multiple nodes of a cluster |
| CN106685949A (en) * | 2016-12-24 | 2017-05-17 | 上海七牛信息技术有限公司 | Container access method, container access device and container access system |
| CN107688526A (en) * | 2017-08-25 | 2018-02-13 | 上海壹账通金融科技有限公司 | Performance test methods, device, computer equipment and the storage medium of application program |
| CN107678836B (en) * | 2017-10-12 | 2021-09-03 | 新华三大数据技术有限公司 | Cluster test data acquisition method and device |
| CN108345497A (en) * | 2018-01-17 | 2018-07-31 | 千寻位置网络有限公司 | GNSS positions execution method and system, the positioning device of simulation offline |
| CN109302314B (en) * | 2018-09-28 | 2022-04-29 | 深信服科技股份有限公司 | Controlled node simulation method and related device |
- 2019-05-21: CN CN201910423634.0A, patent CN111984510B, status Active
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113934579A (en) * | 2021-10-11 | 2022-01-14 | 中国科学院地质与地球物理研究所 | A hardware detection method and device for a short-distance single-signal transmission network |
| CN113934579B (en) * | 2021-10-11 | 2022-05-17 | 中国科学院地质与地球物理研究所 | Hardware detection method and equipment for short-distance single-signal transmission network |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111984510B (en) | 2024-05-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114327861B (en) | Method, device, system and storage medium for executing EDA task | |
| CN112559163B (en) | Method and device for optimizing tensor calculation performance | |
| US9576019B2 (en) | Increasing distributed database capacity | |
| US20250123891A1 (en) | Resource processing method and resource scheduling method | |
| CN111143039A (en) | Virtual machine scheduling method and device and computer storage medium | |
| US11954419B2 (en) | Dynamic allocation of computing resources for electronic design automation operations | |
| CN113204353B (en) | Big data platform assembly deployment method and device | |
| CN115129460B (en) | Method, device, computer equipment and storage medium for acquiring operator hardware time | |
| CN113986846B (en) | Data processing method, system, device and storage medium | |
| CN108037977B (en) | Virtual computer resource management method, device, computer medium, and system | |
| CN113986539A (en) | Method, device, electronic equipment and readable storage medium for realizing pod fixed IP | |
| CN112600931A (en) | API gateway deployment method and device | |
| CN113760446A (en) | Resource scheduling method, apparatus, device and medium | |
| JP2019106031A (en) | Data processing system and data analysis/processing method | |
| CN111984510B (en) | Performance test method and device for dispatching system | |
| CN114153732A (en) | Failure scenario test method, device, electronic device and storage medium | |
| CN107102898B (en) | Memory management and data structure construction method and device based on NUMA (non Uniform memory Access) architecture | |
| CN113535087A (en) | Data processing method, server and storage system in data migration process | |
| CN119182773A (en) | Method, device and storage medium for executing computing tasks on cloud system | |
| CN107688634A (en) | Method for writing data and device, electronic equipment | |
| CN112596669A (en) | Data processing method and device based on distributed storage | |
| US20230121052A1 (en) | Resource resettable deep neural network accelerator, system, and method | |
| US9372816B2 (en) | Advanced programmable interrupt controller identifier (APIC ID) assignment for a multi-core processing unit | |
| CN107015883A (en) | A kind of dynamic data backup method and device | |
| CN115543560A (en) | Container group scheduling method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||