
CN111984397A - Computing resource allocation system and method - Google Patents

Computing resource allocation system and method

Info

Publication number
CN111984397A
Authority
CN
China
Prior art keywords
service
fpga
server
serverless
computing resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910439929.7A
Other languages
Chinese (zh)
Other versions
CN111984397B (en)
Inventor
李峰
龙欣
张振祥
张军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910439929.7A priority Critical patent/CN111984397B/en
Priority to PCT/CN2020/091231 priority patent/WO2020238720A1/en
Publication of CN111984397A publication Critical patent/CN111984397A/en
Application granted granted Critical
Publication of CN111984397B publication Critical patent/CN111984397B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

A computing resource allocation system and method are disclosed. The system includes: a management server configured to respond to a serverless service request initiated by a client, determine the field programmable gate array (FPGA) computing resources to be used, and instruct a service server to provide the serverless service to the client based on those FPGA computing resources; and the service server, configured to provide the FPGA computing resources to be used for the serverless service through a built-in FPGA board. This solves the technical problem in the prior art that a serverless service cannot be provided to a client through an FPGA board.

Description

Computing resource allocation system and method
Technical Field
The present application relates to the field of computers, and in particular, to a computing resource allocation system and method.
Background
A serverless architecture is one in which developers consume computing resources as services without having to manage servers; server management and resource allocation are invisible to users. A serverless architecture can offer SaaS (Software-as-a-Service), which gives customers a better experience and increases the platform's customer stickiness. An FPGA (Field Programmable Gate Array) on the cloud can provide FPGA computing capability to customers and is well suited to compute-intensive business scenarios, but the FPGA development threshold is high.
However, in the prior art, a serverless service cannot be provided to a client through an FPGA board, and schemes such as FPGA-based serverless service and FPGA pooling do not exist, which limits both the client's user experience and the range of applications of FPGA boards.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present application provide a computing resource allocation system and method, so as to at least solve the technical problem in the prior art that a serverless service cannot be provided to a client through a field programmable gate array (FPGA) board.
According to an aspect of an embodiment of the present application, there is provided a computing resource allocation system, including: a management server configured to respond to a serverless service request initiated by a client, determine the field programmable gate array (FPGA) computing resources to be used, and instruct a service server to provide the serverless service to the client based on those FPGA computing resources; and the service server, configured to provide the FPGA computing resources to be used for the serverless service through a built-in FPGA board.
According to another aspect of the embodiments of the present application, there is also provided a computing resource allocation method, including: receiving a serverless service request initiated by a client; responding to the serverless service request and determining the field programmable gate array (FPGA) computing resources to be used; and instructing a service server to provide the serverless service to the client based on the FPGA computing resources.
According to another aspect of the embodiments of the present application, there is also provided a computing resource allocation method, including: receiving a command, issued by a management server, to provide a serverless service to a client, where identification information of the client is pre-stored in an information storage server; and creating an application running environment according to the command of the management server and providing the serverless service to the client through the application running environment.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the computing resource allocation method.
According to another aspect of the embodiments of the present application, there is also provided a processor configured to execute a program, where the program executes a computing resource allocation method.
In the embodiments of the present application, FPGA computing resources are provided on the basis of serverless service: the management server responds to a serverless service request initiated by the client, determines the FPGA computing resources to be used, and instructs the service server to create an application running environment, so that the service server provides the serverless service to the client through the application running environment based on the FPGA computing resources.
Through this handling of the serverless service request by the management server and the service server, the FPGA board can supply the FPGA computing resources needed by the serverless service, achieving the goal of providing a serverless service to the client through an FPGA board. This expands the range of applications and the technical benefits of the FPGA board, improves the user experience, and solves the technical problem in the prior art that a serverless service cannot be provided to a client through an FPGA board.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a computing resource allocation system according to an embodiment of the present application;
fig. 2 is a schematic diagram of an optional service server and FPGA board according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an alternative dynamic region according to an embodiment of the present application;
FIG. 4 is a block diagram of an alternative hardware configuration of a computer terminal according to an embodiment of the present application;
FIG. 5 is a flow chart of a method of computing resource allocation according to an embodiment of the present application;
FIG. 6 is a flow chart of a method of computing resource allocation according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an apparatus for allocating computing resources according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an apparatus for allocating computing resources according to an embodiment of the present application; and
fig. 9 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present application, a computing resource allocation system is provided. It should be noted that the prior art can realize on-cloud FPGAs and serverless services separately, but FPGA-based serverless service and FPGA pooling cannot yet be realized, which limits the range of applications of FPGA boards.
To solve the above problem, an embodiment of the present application provides a computing resource allocation system, as shown in fig. 1, the system including: the system comprises an information storage server 101, a management server 103, a business server 105 and a Field Programmable Gate Array (FPGA) board card 107.
The FPGA board 107 is installed in the service server 105, and the service server provides the FPGA computing resources to be used for the serverless service through this built-in FPGA board. The information storage server 101 is configured to store function identification information corresponding to serverless service requests sent by clients. The function identification information may be IP (Intellectual Property) core identification information, and different service functions correspond to different function identifiers; for example, the function identification information for accessing a DDR memory differs from that for picture encoding and decoding. IP1, IP2, IPN-1, and IPN in fig. 1 represent different function identification information.
The management server 103 is configured to respond to a serverless service request initiated by the client, determine the FPGA computing resources to be used, and instruct the service server to provide the serverless service to the client based on those FPGA computing resources.
Optionally, the management server communicates with the client: the client sends a serverless service request to the management server over the network, and the management server responds after receiving it. The serverless service request at least includes information about the FPGA computing resources requested by the client, including but not limited to identification information (e.g., a name) and a type of the FPGA computing resources. The management server then parses the request, determines through optimal scheduling which FPGA board will provide the serverless service, and sends the function identification information to the corresponding service server so that the service server supplies the computing resources of that FPGA board.
In addition, the service server 105 is further configured to create an application running environment according to the command of the management server and to provide the serverless service to the client through that application running environment. The application running environment may be, but is not limited to, a Docker running environment; a developer may use it to package an application into a portable container and then publish it, or to perform virtualization.
In another optional scheme, the FPGA board serves as the carrier that provides the computing resources, and the FPGA computing resources to be used can be provided in a time division multiplexing (TDM) manner or a space division multiplexing (SDM) manner.
As can be seen from the above, the FPGA computing resource is provided based on the serverless service, the FPGA computing resource to be used is determined by the management server responding to the serverless service request initiated by the client, and the service server is instructed to create the application running environment, so that the service server provides the serverless service to the client through the application running environment based on the FPGA computing resource.
It is easy to see that, through this handling of the serverless service request by the management server and the service server, the FPGA board can supply the FPGA computing resources needed by the serverless service, achieving the goal of providing a serverless service to the client through an FPGA board. This expands the range of applications and the technical benefits of the FPGA board, improves the user experience, and solves the technical problem in the prior art that a serverless service cannot be provided to a client through an FPGA board.
In an optional embodiment, the management server is further configured to respond to the serverless service request and determine the FPGA computing resources to be used according to a best-fit algorithm and the usage of the dynamic regions on all the FPGA boards, where the computing resources on each FPGA board are divided into a plurality of dynamic regions based on the minimum computing resource usage requirement of the function identification information corresponding to the serverless service request.
Optionally, as shown in fig. 2, the schematic diagram of the service server and the FPGA board includes a CPU acting as the processor of the service server, connected to the board over a PCIe bus (the PCIe BUS in fig. 2). In fig. 2, the parsing unit (SHELL) of the FPGA board corresponds to a plurality of dynamic regions and is configured to parse the commands sent by the service server, as shown in fig. 3. Each dynamic region includes an identification storage region for storing function identification information and a content storage region; the content storage region may be a DDR (Double Data Rate) memory unit. As shown in fig. 2, IP1 and DDR1 form one dynamic region, and IP2 and DDR2 form another dynamic region.
It should be noted that the dynamic regions on the FPGA board are divided according to the minimum computing resource usage requirement: the number of dynamic regions N is a power of 2 and must satisfy that requirement. For example, if there are 5 clients, the corresponding minimum value of N is 8.
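For illustration only, the choice of N can be sketched as the smallest power of two that covers the required number of regions (a minimal sketch in Python; the function name and interface are assumptions, not part of the disclosed system):

```python
def dynamic_region_count(min_regions_required: int) -> int:
    """Smallest power of two that can hold the required number of
    dynamic regions, e.g. 5 clients -> 8 regions (illustrative)."""
    n = 1
    while n < min_regions_required:
        n *= 2
    return n

assert dynamic_region_count(5) == 8
```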
In an optional embodiment, when it is detected that a client has sent a serverless service request, the management server analyzes the request to determine the FPGA computing resources usable by the client, then uses the best-fit algorithm to select, according to the usage of the dynamic regions of the FPGA boards corresponding to those usable resources, the FPGA board that will provide the serverless service, and sends the function identification information of the request (or its index) to the service server hosting that board, which refreshes the dynamic region of the FPGA board.
It should be noted that, when allocating resources, the best-fit algorithm assigns the smallest free region that still satisfies the requirement of the task to be executed. In addition, the management server manages the dynamic resources of the FPGA boards, minimizing fragmentation of the computing resources in the FPGA boards' dynamic regions.
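A minimal best-fit sketch over the boards' free dynamic regions is shown below; the data model (free-region counts per board) and the names are illustrative assumptions rather than the patent's implementation:

```python
from typing import Optional

def best_fit_board(free_regions_per_board: dict[str, int],
                   regions_needed: int) -> Optional[str]:
    """Return the board whose free dynamic-region count is the smallest
    one that still satisfies the request, which keeps fragmentation low."""
    candidates = {board: free
                  for board, free in free_regions_per_board.items()
                  if free >= regions_needed}
    if not candidates:
        return None  # no board can serve this request
    return min(candidates, key=candidates.get)

# Example: board_b (4 free regions) is a tighter fit than board_a (8 free).
print(best_fit_board({"board_a": 8, "board_b": 4, "board_c": 2}, 3))  # board_b
```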
In an optional embodiment, the management server may further send the index of the function identification information to the service server when the backup of the function identification information corresponding to the serverless service request is stored in the service server; or, when the backup of the function identification information is not stored in the service server, the function identification information is sent to the service server.
Specifically, the function identification information is stored in the information storage server, and the service server backs it up locally. When the function identification information is needed, the management server can issue the corresponding index to the service server, and the service server looks up the matching function identification information locally through that index. If the service server does not have the function identification information stored locally, the management server obtains it from the information storage server and transmits it to the service server over the network.
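The index-versus-full-information decision can be pictured as below; the dictionary-based stand-ins for the service server's local backup and for the information storage server are hypothetical:

```python
def dispatch_function_identification(local_backup: dict, info_storage: dict, key: str):
    """Decide what the management server sends for a given request key:
    only the index when the service server already holds a local backup,
    otherwise the full function identification information fetched from
    the information storage server (illustrative sketch)."""
    if key in local_backup:
        return ("index", key)              # service server resolves it locally
    return ("full", info_storage[key])     # transmit the information over the network

print(dispatch_function_identification({"ip1": "..."}, {"ip1": "...", "ip2": "..."}, "ip2"))
```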
In an optional scheme, the service server may, through partial reconfiguration (PR), configure the dynamic region corresponding to the function identification information (or to its index) as the computing resource required to provide the serverless service. Partial reconfiguration downloads a partial configuration file without blocking the operation of the other logic. Optionally, after receiving the command sent by the management server, the service server performs the partial reconfiguration operation on the FPGA board and provides the serverless service through the proxy function component.
Specifically, by means of a proxy function component, the service server carries out the data interaction associated with the serverless service between the application running environment and the configured dynamic region: the application running environment transmits operation instructions through a command queue set up between the proxy function component and the FPGA board, and performs the corresponding data transceiving operations through a data queue set up between the proxy function component and the FPGA board.
Optionally, taking fig. 2 as an example, Docker is the application running environment and Agency is the proxy function component. After receiving the command sent by the management server, the service server creates a plurality of application running environments (e.g., docker1, docker2, and dockerN in fig. 2); these environments provide serverless services externally and achieve many-to-one data transceiving through the proxy function component (Agency) in the software layer.
In addition, as can be seen from fig. 2, two queues exist between the proxy function component and the FPGA board card, namely CMD queue and data queue, where CMD queue is a command queue and data queue is a data queue.
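A simplified software-side model of the many-to-one Agency component with its CMD and data queues might look like the following; the class and queue interface are assumed for illustration and are not taken from the disclosure:

```python
import queue

class Agency:
    """Many-to-one proxy between several application running environments
    and one FPGA board, using a command (CMD) queue and a data queue."""
    def __init__(self) -> None:
        self.cmd_queue = queue.Queue()    # operation instructions toward the board
        self.data_queue = queue.Queue()   # payloads exchanged with the board

    def submit(self, docker_id: str, command: str, payload: bytes) -> None:
        # Tag traffic with the originating environment so replies can be routed back.
        self.cmd_queue.put((docker_id, command))
        self.data_queue.put((docker_id, payload))

agency = Agency()
agency.submit("docker1", "picture_encode", b"\x00\x01")
agency.submit("docker2", "ddr_read", b"")
```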
Optionally, when the client initiates a serverless service request of the FPGA, the management server determines available resources in the computing resource allocation system according to the configuration information of the client, and then selects a suitable FPGA from all the FPGAs to provide computing resources in a best-fit manner based on the use conditions of the dynamic regions of all the FPGAs. Then, the management server issues function identification information or function identification information index to the service server, and the agent functional component in the service server refreshes the designated dynamic region of the FPGA in a partial reconfiguration mode, so that the FPGA provides a serverless service.
It should be noted that, when the serverless service ends, the management server notifies the service server to stop the application running environment, reclaims the FPGA computing resources that were in use, and merges the reclaimed FPGA computing resources. Specifically, after the serverless service ends, the management server notifies the corresponding service server to stop the corresponding Docker, reclaims the relevant FPGA dynamic regions, and merges them to meet possible subsequent work requirements.
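The merge step can be pictured as coalescing a freed dynamic-region span with adjacent free spans; the interval representation below is an illustrative choice, not the disclosed data structure:

```python
def release_and_merge(free_regions: list[tuple[int, int]],
                      freed: tuple[int, int]) -> list[tuple[int, int]]:
    """Add a freed (start, length) span to the free list and coalesce it
    with adjacent free spans so larger requests can be served later."""
    spans = sorted(free_regions + [freed])
    merged = [spans[0]]
    for start, length in spans[1:]:
        last_start, last_len = merged[-1]
        if last_start + last_len == start:      # adjacent spans -> coalesce
            merged[-1] = (last_start, last_len + length)
        else:
            merged.append((start, length))
    return merged

print(release_and_merge([(0, 2), (6, 2)], (2, 4)))  # [(0, 8)]
```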
According to this scheme, through the handling of the serverless service request by the management server and the service server, the FPGA board can supply the FPGA computing resources needed by the serverless service, achieving the goal of providing a serverless service to the client through an FPGA board, expanding the range of applications and the technical benefits of the FPGA board, and improving the user experience.
Example 2
There is also provided, in accordance with an embodiment of the present application, an embodiment of a computing resource allocation method. The steps illustrated in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is illustrated in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one shown.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 4 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the computing resource allocation method. As shown in fig. 4, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, the computer terminal may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 4 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 4, or have a different configuration from that shown in fig. 4.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the computing resource allocation method in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the computing resource allocation method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that, in some alternative embodiments, the computer device (or mobile device) shown in fig. 4 above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should be noted that fig. 4 is only one specific example and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
Under the operating environment, the application provides a computing resource allocation method as shown in fig. 5. The management server in embodiment 1 may execute the computing resource allocation method in this embodiment. FIG. 5 is a flowchart of a computing resource allocation method according to an embodiment of the present application, and as shown in FIG. 5, the method includes the following steps:
step S502, receiving a serverless service request initiated by a client;
step S504, responding to the serverless service request and determining the field programmable gate array (FPGA) computing resources to be used;
step S506, instructing the service server to provide a serverless service to the client based on the FPGA computing resources.
In an alternative, the management server communicates with a client, the client sends a serverless service request to the management server via the network, and the management server responds to the serverless service request after receiving the request. The serverless service request at least includes relevant information of the FPGA computing resource requested by the client, including but not limited to identification information (e.g., name), type, and the like of the FPGA computing resource. Then, the management server analyzes the request, determines the FPGA board card providing the serverless service through an optimal scheduling mode, and sends the function identification information corresponding to the serverless service request to the service server so that the service server provides the computing resource corresponding to the FPGA board card, wherein the function identification information is stored in the information storage server.
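Putting steps S502 to S506 together, the management-server flow might be sketched as follows; the request fields, the board map, and the `provide_serverless_service` call are hypothetical placeholders for the scheduling and dispatch described above:

```python
def handle_serverless_request(request: dict,
                              free_regions_per_board: dict[str, int],
                              board_to_server: dict[str, object]) -> None:
    """Illustrative end-to-end flow: S502 receive the request, S504 pick the
    FPGA computing resources by best fit over the boards' dynamic regions,
    S506 instruct the service server that hosts the chosen board."""
    needed = request["regions_needed"]                       # S502 (assumed field)
    candidates = {b: f for b, f in free_regions_per_board.items() if f >= needed}
    if not candidates:
        raise RuntimeError("no FPGA board can satisfy the request")
    board = min(candidates, key=candidates.get)              # S504: best fit
    board_to_server[board].provide_serverless_service(       # S506 (assumed method)
        request["function_id"], board)
```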
Based on the scheme defined in steps S502 to S506 above, FPGA computing resources are provided on the basis of serverless service: after receiving a serverless service request initiated by a client, the management server responds to the request, determines the FPGA computing resources to be used, and then instructs the service server to provide the serverless service to the client based on those FPGA computing resources.
It is easy to see that, through this handling of the serverless service request by the management server and the service server, the FPGA board can supply the FPGA computing resources needed by the serverless service, achieving the goal of providing a serverless service to the client through an FPGA board. This expands the range of applications and the technical benefits of the FPGA board, improves the user experience, and solves the technical problem in the prior art that a serverless service cannot be provided to a client through an FPGA board.
In an optional embodiment, the management server is further configured to respond to the serverless service request and determine the FPGA computing resources to be used according to a best-fit algorithm and the usage of the dynamic regions on all the FPGA boards, where the computing resources on each FPGA board are divided into a plurality of dynamic regions based on the minimum computing resource usage requirement of the function identification information corresponding to the serverless service request.
It should be noted that the dynamic regions on the FPGA board are divided according to the minimum computing resource usage requirement: the number of dynamic regions N is a power of 2 and must satisfy that requirement. For example, if there are 5 clients, the corresponding minimum value of N is 8.
In an optional embodiment, when it is detected that a client has sent a serverless service request, the management server analyzes the request to determine the FPGA computing resources usable by the client, then uses the best-fit algorithm to select, according to the usage of the dynamic regions of the FPGA boards corresponding to those usable resources, the FPGA board that will provide the serverless service, and sends the function identification information of the request (or its index) to the service server hosting that board, which refreshes the dynamic region of the FPGA board.
It should be noted that, when allocating resources, the best-fit algorithm assigns the smallest free region that still satisfies the requirement of the task to be executed. In addition, the management server manages the dynamic resources of the FPGA boards, minimizing fragmentation of the computing resources in the FPGA boards' dynamic regions.
In an optional embodiment, before instructing the service server to provide the serverless service to the client based on the FPGA computing resource, the management server further sends the index of the function identification information to the service server when the backup of the function identification information corresponding to the serverless service request is stored in the service server; or, when the backup of the function identification information is not stored in the service server, the function identification information is sent to the service server.
Specifically, the function identification information is stored in the information storage server, and the service server backs it up locally. When the function identification information is needed, the management server can issue the corresponding index to the service server, and the service server looks up the matching function identification information locally through that index. If the service server does not have the function identification information stored locally, the management server obtains it from the information storage server and transmits it to the service server over the network.
In an optional embodiment, after instructing the service server to provide the serverless service to the client based on the FPGA computing resources, the management server, when the serverless service ends, notifies the service server to stop the application running environment, reclaims the FPGA computing resources that were in use, and merges the reclaimed FPGA computing resources. Specifically, after the serverless service ends, the management server notifies the corresponding service server to stop the corresponding Docker, reclaims the relevant FPGA dynamic regions, and merges them to meet possible subsequent work requirements.
It should be noted that the application running environment may be, but is not limited to, a Docker running environment; a developer may use it to package an application into a portable container and then publish it, or to perform virtualization.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the computing resource allocation method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
Example 3
According to the embodiment of the present application, there is also provided a computing resource allocation method, which is also executable in the computing resource allocation system in embodiment 1, wherein a service server of the computing resource allocation system may serve as an execution subject of this embodiment. Specifically, fig. 6 shows a flowchart of a computing resource allocation method, and as shown in fig. 6, the method includes the following steps:
step S602, receiving a command issued by the management server to provide a serverless service to the client.
Step S604, creating an application running environment according to the command of the management server, and providing a serverless service to the client through the application running environment.
In step S604, the application running environment may be, but is not limited to, a Docker running environment; a developer may use it to package an application into a portable container and then publish it, or to perform virtualization.
Optionally, the management server may respond to a serverless service request initiated by the client, determine the field programmable gate array (FPGA) computing resources to be used, and instruct the service server associated with those FPGA computing resources to provide a serverless service to the client based on them. After receiving the command, the service server creates an application running environment and provides the serverless service to the client through it. The application running environment may be, but is not limited to, a Docker running environment; a developer may use it to package an application into a portable container and then publish it, or to perform virtualization.
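On the service-server side, creating the application running environment could look like the following when Docker is used; the image name, environment variable, and reliance on a local Docker daemon are assumptions made for illustration:

```python
import subprocess

def create_runtime_and_serve(function_id: str,
                             image: str = "serverless-fpga-runtime") -> str:
    """Start a container as the application running environment and pass it
    the function identifier so it can serve the client (illustrative only)."""
    result = subprocess.run(
        ["docker", "run", "-d", "--env", f"FUNCTION_ID={function_id}", image],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()   # container id of the new running environment
```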
In an alternative embodiment, the management server communicates with a client, the client sends a serverless service request to the management server via the network, and the management server responds to the serverless service request after receiving the request. The serverless service request at least includes relevant information of the FPGA computing resource requested by the client, including but not limited to identification information (e.g., name), type, and the like of the FPGA computing resource. Then, the management server analyzes the request, determines the FPGA board card providing the serverless service through an optimal scheduling mode, and sends the function identification information corresponding to the serverless service request to the service server so that the service server provides the computing resource corresponding to the FPGA board card, wherein the function identification information is stored in the information storage server.
Based on the schemes defined in steps S602 to S604, it can be known that, in a manner of providing FPGA computing resources based on serverless service, after receiving a command issued by a management server to provide serverless service to a client, an application operating environment is created according to the command of the management server, and the serverless service is provided to the client through the application operating environment.
It is easy to see that, through this handling of the serverless service request by the management server and the service server, the FPGA board can supply the FPGA computing resources needed by the serverless service, achieving the goal of providing a serverless service to the client through an FPGA board. This expands the range of applications and the technical benefits of the FPGA board, improves the user experience, and solves the technical problem in the prior art that a serverless service cannot be provided to a client through an FPGA board.
In an optional embodiment, the service server, by means of partial reconfiguration, configures the dynamic region corresponding to the function identification information (or to its index) as the computing resource required to provide the serverless service, and then, through a proxy function component, carries out the data interaction associated with the serverless service between the application running environment and the configured dynamic region: the application running environment transmits operation instructions through a command queue set up between the proxy function component and the FPGA board, and performs the corresponding data transceiving operations through a data queue set up between the proxy function component and the FPGA board.
Optionally, after receiving the command sent by the management server, the service server performs the partial reconfiguration operation on the FPGA board and provides the serverless service through the proxy function component. By means of the proxy function component, the service server carries out the data interaction associated with the serverless service between the application running environment and the configured dynamic region: the application running environment transmits operation instructions through a command queue set up between the proxy function component and the FPGA board, and performs the corresponding data transceiving operations through a data queue set up between the proxy function component and the FPGA board.
Optionally, when the client initiates a serverless service request of the FPGA, the management server determines available resources in the computing resource allocation system according to the configuration information of the client, and then selects a suitable FPGA from all the FPGAs to provide computing resources in a best-fit manner based on the use conditions of the dynamic regions of all the FPGAs. Then, the management server issues function identification information or function identification information index to the service server, and the agent functional component in the service server configures the designated dynamic area of the FPGA into the computing resource needed by providing the serverless service in a partial reconfiguration mode, so that the FPGA provides the serverless service.
It should be noted that, when the serverless service ends, the management server notifies the service server to stop the application running environment, reclaims the FPGA computing resources that were in use, and merges the reclaimed FPGA computing resources. Specifically, after the serverless service ends, the management server notifies the corresponding service server to stop the corresponding Docker, reclaims the relevant FPGA dynamic regions, and merges them to meet possible subsequent work requirements.
Example 4
According to an embodiment of the present invention, there is also provided a computing resource allocation apparatus for implementing the above computing resource allocation method, as shown in fig. 7, the apparatus 70 includes: a first receiving module 701, a determining module 703 and a first processing module 705.
The first receiving module 701 is configured to receive a serverless service request initiated by a client; the determining module 703 is configured to determine, in response to the serverless service request, a field programmable gate array FPGA computing resource to be used; the first processing module 705 is configured to instruct the service server to provide a serverless service to the client based on the FPGA computing resource.
It should be noted here that the first receiving module 701, the determining module 703 and the first processing module 705 correspond to steps S502 to S506 in embodiment 2, and the three modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 2.
Example 5
According to an embodiment of the present invention, there is also provided a computing resource allocation apparatus for implementing the above computing resource allocation method, as shown in fig. 8, the apparatus 80 includes: a second receiving module 801 and a second processing module 803.
The second receiving module 801 is configured to receive a command, which is issued by the management server and provides a serverless service to the client; the second processing module 803 is configured to create an application running environment according to the command of the management server, and provide a serverless service to the client through the application running environment.
It should be noted that the second receiving module 801 and the second processing module 803 correspond to steps S602 to S604 in embodiment 3, and the two modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 3.
Example 6
The embodiment of the application can provide a computer terminal, and the computer terminal can be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps of the computing resource allocation method: receiving a serverless service request initiated by a client; responding to the serverless service request and determining the field programmable gate array (FPGA) computing resources to be used; and instructing the service server to provide a serverless service to the client based on the FPGA computing resources.
Optionally, fig. 9 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 9, the computer terminal 10 may include: one or more processors 902 (only one shown), a memory 904, and a transmitting device 906.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the computing resource allocation method and apparatus in the embodiments of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the computing resource allocation method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, and these remote memories may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application programs stored in the memory through the transmission device to execute the following steps: receiving a serverless service request initiated by a client; responding to the serverless service request and determining the field programmable gate array (FPGA) computing resources to be used; and instructing the service server to provide a serverless service to the client based on the FPGA computing resources.
Optionally, the processor may further execute the program code of the following step: responding to the serverless service request and determining the FPGA computing resources to be used according to the best-fit algorithm and the usage of the dynamic regions on all the FPGA boards, where the computing resources on each FPGA board are divided into a plurality of dynamic regions based on the minimum computing resource usage requirement of the function identification information corresponding to the serverless service request.
Optionally, the processor may further execute the program code of the following steps: when the backup of the function identification information corresponding to the serverless service request is stored in the service server, the index of the function identification information is sent to the service server; or, when the backup of the function identification information is not stored in the service server, the function identification information is sent to the service server.
Optionally, the processor may further execute the program code of the following steps: and when the serverless service is finished, informing the service server to stop the work of the application running environment, recovering the FPGA computing resources to be used, and merging the recovered FPGA computing resources to be used.
Optionally, the processor may further execute the program code of the following steps: receiving a command which is issued by a management server and provides a serverless service for a client, wherein identification information of the client is pre-stored in an information storage server; and creating an application running environment according to the command of the management server, and providing a serverless service for the client through the application running environment.
Optionally, the processor may further execute the program code of the following steps: configuring the index of the function identification information or the dynamic area corresponding to the function identification information into computing resources required by providing a serverless service in a partial reconfiguration mode; and performing data interaction associated with the serverless service in an application running environment and a configured dynamic area by setting the agent function component, wherein the application running environment transmits an operation instruction through a command queue arranged between the agent function component and the FPGA board card, and executes data transceiving operation corresponding to the operation instruction through a data queue arranged between the agent function component and the FPGA board card.
It can be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration, and the computer terminal may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 9 does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 9, or have a different configuration from that shown in fig. 9.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 7
Embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be configured to store a program code executed by the computing resource allocation method provided in the first embodiment.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: receiving a serverless service request initiated by a client; responding to the serverless service request and determining the field programmable gate array (FPGA) computing resources to be used; and instructing the service server to provide a serverless service to the client based on the FPGA computing resources.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following step: responding to the serverless service request and determining the FPGA computing resources to be used according to the best-fit algorithm and the usage of the dynamic regions on all the FPGA boards, where the computing resources on each FPGA board are divided into a plurality of dynamic regions based on the minimum computing resource usage requirement of the function identification information corresponding to the serverless service request.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: when the backup of the function identification information corresponding to the serverless service request is stored in the service server, the index of the function identification information is sent to the service server; or, when the backup of the function identification information is not stored in the service server, the function identification information is sent to the service server.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: and when the serverless service is finished, informing the service server to stop the work of the application running environment, recovering the FPGA computing resources to be used, and merging the recovered FPGA computing resources to be used.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: receiving a command, issued by a management server, to provide a serverless service to a client, wherein identification information of the client is pre-stored in an information storage server; and creating an application running environment according to the command of the management server, and providing the serverless service to the client through the application running environment.
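The service-server side of this step might look like the sketch below, where a container object stands in for the application running environment; Container and ServiceServer are illustrative names only, and the isolation mechanism is left unspecified by the embodiments.

# Minimal sketch, assuming the application running environment is some container-like
# object that can be started and stopped. All class names are hypothetical.

class Container:
    """Stand-in for the application running environment bound to one client."""

    def __init__(self, client_id, region_id):
        self.client_id = client_id
        self.region_id = region_id
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


class ServiceServer:
    def __init__(self):
        self.environments = {}

    def on_command(self, client_id, region_id):
        # Create the application running environment according to the management
        # server's command; the serverless service is then provided through it.
        env = Container(client_id, region_id)
        env.start()
        self.environments[client_id] = env
        return env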
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: configuring, through partial reconfiguration, the index of the function identification information or the dynamic region corresponding to the function identification information as the computing resources required to provide the serverless service; and performing, via a proxy function component, the data interaction associated with the serverless service between the application running environment and the configured dynamic region, wherein the application running environment transmits operation instructions through a command queue arranged between the proxy function component and the FPGA board, and performs the data transmission and reception corresponding to those instructions through a data queue arranged between the proxy function component and the FPGA board.
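The command-queue/data-queue arrangement can be illustrated with the sketch below, in which in-process queues stand in for the real queues between the proxy function component and the FPGA board; FpgaProxy and the board callable are hypothetical names used only for this example.

# Minimal sketch, assuming in-process queues in place of the real command and data
# queues between the proxy function component and the FPGA board.
import queue


class FpgaProxy:
    def __init__(self):
        self.command_queue = queue.Queue()  # operation instructions toward the board
        self.data_queue = queue.Queue()     # payloads exchanged with the board

    def submit(self, opcode, payload):
        """Called from the application running environment."""
        self.command_queue.put(opcode)
        self.data_queue.put(payload)

    def serve_once(self, board):
        """Forward one queued instruction to the (simulated) board and return its result.

        `board(opcode, payload)` stands in for the FPGA driver; the result travels
        back to the application running environment over the data queue.
        """
        opcode = self.command_queue.get()
        payload = self.data_queue.get()
        result = board(opcode, payload)
        self.data_queue.put(result)
        return result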
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division of logical functions, and other divisions are possible in an actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (17)

1. A computing resource allocation system, comprising:
a service server, configured to provide, through a built-in field programmable gate array (FPGA) board, FPGA computing resources to be used for a serverless service; and
a management server, configured to determine, in response to a serverless service request initiated by a client, the FPGA computing resources to be used, and to command the service server to provide the serverless service to the client based on the FPGA computing resources.
2. The system of claim 1, wherein the service server is further configured to create an application running environment according to the command of the management server, and to provide the serverless service to the client through the application running environment.
3. The system according to claim 1, wherein the management server is further configured to determine, in response to the serverless service request, the FPGA computing resources to be used according to a best-fit algorithm and the usage of the dynamic regions on all the FPGA boards, wherein the computing resources on each FPGA board are divided into a plurality of dynamic regions based on a minimum computing resource usage requirement of the function identification information corresponding to the serverless service request.
4. The system according to claim 2, wherein the management server is further configured to send an index of the function identification information to the service server when the service server stores a backup of the function identification information corresponding to the serverless service request; or, when the backup of the function identification information is not stored in the service server, the function identification information is sent to the service server.
5. The system of claim 1, wherein the FPGA board is further configured to provide the FPGA computing resource to be used in a time division multiplexing and space division multiplexing manner.
6. The system according to claim 4, wherein the service server is further configured to configure, in a partial reconfiguration manner, the index of the function identification information or the dynamic region corresponding to the function identification information as the computing resources required to provide the serverless service.
7. The system of claim 6, wherein the service server is further configured to perform, by setting a proxy function component, data interaction associated with the serverless service between the application running environment and the configured dynamic region, wherein the application running environment transmits an operation instruction through a command queue provided between the proxy function component and the FPGA board, and performs a data transmission and reception operation corresponding to the operation instruction through a data queue provided between the proxy function component and the FPGA board.
8. The system according to claim 2, wherein the management server is further configured to notify the service server to stop the operation of the application running environment when the serverless service ends, recover the FPGA computing resources to be used, and merge the recovered FPGA computing resources.
9. The system of claim 1, further comprising:
and the information storage server is used for storing the function identification information.
10. A method for allocating computing resources, comprising:
receiving a serverless service request initiated by a client;
responding to the serverless service request, and determining field programmable gate array FPGA computing resources to be used;
and commanding a service server to provide a serverless service to the client based on the FPGA computing resources.
11. The method of claim 10, wherein determining the FPGA computing resource to be used in response to the serverless service request comprises:
responding to the serverless service request, and determining the FPGA computing resources to be used according to a best-fit algorithm and the usage of the dynamic regions on all FPGA boards, wherein the computing resources on each FPGA board are divided into a plurality of dynamic regions based on the minimum computing resource usage requirement of the function identification information corresponding to the serverless service request.
12. The method of claim 10, wherein before commanding the service server to provide the serverless service to the client based on the FPGA computing resources, the method further comprises:
when the backup of the function identification information corresponding to the serverless service request is stored in the service server, sending the index of the function identification information to the service server; or, when the backup of the function identification information is not stored in the service server, the function identification information is sent to the service server.
13. The method of claim 10, wherein after commanding the service server to provide the serverless service to the client based on the FPGA computing resources, the method further comprises:
when the serverless service ends, notifying the service server to stop the application running environment, recovering the FPGA computing resources to be used, and merging the recovered FPGA computing resources.
14. A method for allocating computing resources, comprising:
receiving a command, issued by a management server, to provide a serverless service to a client; and
creating an application running environment according to the command of the management server, and providing the serverless service to the client through the application running environment.
15. The method of claim 14, wherein providing the serverless service to the client through the application execution environment comprises:
configuring, in a partial reconfiguration manner, the index of the function identification information or the dynamic region corresponding to the function identification information as the computing resources required by the serverless service; and
performing, via a proxy function component, data interaction associated with the serverless service between the application running environment and the configured dynamic region, wherein the application running environment transmits operation instructions through a command queue arranged between the proxy function component and the FPGA board, and performs the data transmission and reception corresponding to those instructions through a data queue arranged between the proxy function component and the FPGA board.
16. A storage medium comprising a stored program, wherein the program, when executed, controls an apparatus in which the storage medium is located to perform the method of allocating computing resources of any one of claims 10 to 15.
17. A processor configured to run a program, wherein the program when running performs the method of allocating computing resources of any one of claims 10 to 15.
CN201910439929.7A 2019-05-24 2019-05-24 Computing resource allocation system and method Active CN111984397B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910439929.7A CN111984397B (en) 2019-05-24 2019-05-24 Computing resource allocation system and method
PCT/CN2020/091231 WO2020238720A1 (en) 2019-05-24 2020-05-20 Computing resource allocation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910439929.7A CN111984397B (en) 2019-05-24 2019-05-24 Computing resource allocation system and method

Publications (2)

Publication Number Publication Date
CN111984397A true CN111984397A (en) 2020-11-24
CN111984397B CN111984397B (en) 2024-06-21

Family

ID=73436676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910439929.7A Active CN111984397B (en) 2019-05-24 2019-05-24 Computing resource allocation system and method

Country Status (2)

Country Link
CN (1) CN111984397B (en)
WO (1) WO2020238720A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115271279B (en) * 2021-04-29 2025-10-28 中国移动通信集团江苏有限公司 Service request processing method, device and storage medium
CN115437710A (en) * 2022-08-31 2022-12-06 金蝶软件(中国)有限公司 WebIDE container management method, webIDE container management apparatus, and computer storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107479964A (en) * 2016-06-08 2017-12-15 成都赫尔墨斯科技股份有限公司 A kind of cloud rendering system
CN106776002B (en) * 2016-11-15 2020-09-25 华为技术有限公司 Communication method and device for virtualized hardware architecture of FPGA
TW201926951A (en) * 2017-11-23 2019-07-01 財團法人資訊工業策進會 Platform as a service cloud server and multi-tenant operating method thereof
CN108829512B (en) * 2018-05-09 2021-08-24 山东浪潮科学研究院有限公司 A method, system and cloud center for allocating computing power for cloud center hardware acceleration

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106776998A (en) * 2016-12-06 2017-05-31 华为技术有限公司 A kind of database service provides method and server
US20190026150A1 (en) * 2017-07-20 2019-01-24 Cisco Technology, Inc. Fpga acceleration for serverless computing
US20190132314A1 (en) * 2017-10-30 2019-05-02 EMC IP Holding Company LLC Systems and methods of serverless management of data mobility domains
CN108984125A (en) * 2018-07-17 2018-12-11 郑州云海信息技术有限公司 A kind of method, apparatus and storage medium of resource allocation

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596113A (en) * 2022-03-07 2022-06-07 阿里巴巴(中国)有限公司 Method and device for generating commodity push strategy
CN119172225A (en) * 2024-11-20 2024-12-20 杭州菲田云计算有限公司 Data processing method and device
CN119172225B (en) * 2024-11-20 2025-03-21 杭州菲田云计算有限公司 Data processing method and device

Also Published As

Publication number Publication date
WO2020238720A1 (en) 2020-12-03
CN111984397B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
EP3385835B1 (en) Method and apparatus for configuring accelerator
EP3702915B1 (en) Data processing method and device, storage medium, processor, and system
CN111984397A (en) Computing resource allocation system and method
CN110442450B (en) Image processing device, method and device and material calculating and rendering system
CN110688146A (en) Method, device and storage medium for dynamically configuring monitoring system
CN106354559A (en) Method and device for processing cloud desktop resources
CN107360633B (en) Pairing connection method and device of virtual reality system and virtual reality system
CN110928637A (en) Load balancing method and system
CN113296871A (en) Method, equipment and system for processing container group instance
CN110879741A (en) Virtual machine live migration method and device, storage medium and processor
CN110750206B (en) Data processing method, device and system
CN106156044B (en) Database switching method and device
CN111130820B (en) Cluster management method and device and computer system
CN109361693B (en) Virtual device communication method and device
CN116243853A (en) Data transmission method and device, electronic equipment and nonvolatile storage medium
CN112395040A (en) Memory data transmission method, system and server
CN114079909B (en) A network slice information processing method, device and network equipment
CN113377490B (en) Memory allocation method, device and system of virtual machine
HK40039489A (en) Computing resource allocation system and method
CN115469961A (en) Method and device for creating container group, electronic equipment and storage medium
JP2021511688A (en) Network slice configuration method and equipment, computer storage medium
CN110874264B (en) Instance thermomigration method and device, storage medium and processor
CN115243394B (en) Communication method, device, equipment and medium based on network slice
CN117880217A (en) Network resource allocation method and device and nonvolatile storage medium
CN114356493B (en) Communication method, device and processor between virtual machine instances of cross-cloud server

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40039489

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant