
WO2020185132A1 - Method and current edge cloud manager for controlling resources - Google Patents

Method and current edge cloud manager for controlling resources

Info

Publication number
WO2020185132A1
WO2020185132A1 (PCT/SE2019/050221)
Authority
WO
WIPO (PCT)
Prior art keywords
edge cloud
cloud manager
service
resources
current edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SE2019/050221
Other languages
English (en)
Inventor
Aleksandra OBESO DUQUE
Jinhua Feng
Remi ROBERT
Morgan Lindqvist
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Priority to PCT/SE2019/050221 priority Critical patent/WO2020185132A1/fr
Publication of WO2020185132A1 publication Critical patent/WO2020185132A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H04L 67/1031 Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements

Definitions

  • Embodiments herein relate to a method and a current edge cloud manager for controlling resources.
  • In particular, embodiments herein relate to controlling resources in a cloud environment.
  • Resources for computing, processing and storing of data, herein referred to as “resources” for short, may be hired and used.
  • Such cloud resources may be deployed in large data centers, commonly known as “the cloud”, which are typically attached to various communications networks.
  • The communications networks mentioned herein may be any type of networks that may be used by clients for accessing the cloud, e.g. including wireless, fixed, public and private networks, using any suitable protocols and standards for communication.
  • Machine learning (ML) operations are usually executed on huge amounts of data provided by various communication devices and nodes, such as network nodes, servers, wireless devices, Machine-to-Machine devices, Internet-of-Things (IoT) devices, and so forth.
  • An “edge cloud” is generally closer to the clients and end users than the more traditional cloud core or “central cloud”, as schematically illustrated in Fig. 1.
  • This figure depicts a cloud environment comprising a central cloud to which a set of edge clouds are attached.
  • The central cloud is sometimes also referred to as the “back-end cloud”, which in practice may be comprised of several individually controlled and administrated sets of resources or central clouds.
  • Each edge cloud is further connected to one or more access networks by which various clients may communicate with a respective edge cloud.
  • An edge cloud generally refers to a set of resources located relatively close to clients so that they may communicate directly with the edge cloud through a suitable access network, as opposed to the central cloud, which has resources located relatively far away from the clients, as can be seen in Fig. 1.
  • Resources of an edge cloud can be implemented in one or more edge nodes.
  • The amount of resources available in the central cloud may be considered infinite, and resource congestion mitigation can usually be handled by a centralized cloud manager.
  • This centralized cloud manager spawns a new instance of the service in a scenario with fewer resource constraints.
  • In an edge cloud, resources are scarcer, and the resource allocation strategy applied for the central cloud may not be suitable, or even possible, to apply.
  • The edge cloud provides an advantage when compared to the central cloud by its ability to offer low delay for latency-critical applications, since the edge cloud is closer to where the client, e.g. operating a user equipment (UE), is located.
  • An edge cloud is sometimes also referred to as an “edge node” even though it could be comprised of several physical nodes.
  • A more efficient strategy to avoid resource congestion at edge nodes is therefore desirable, since congestion would not allow serving high priority services at the most optimal location. This makes it harder to ensure a high Quality of Experience (QoE), especially during a service migration process.
  • A latency-critical service or application is typically associated with a deadline based on the highest acceptable response delay, which may be defined as a Key Performance Indicator (KPI) of the service quality.
  • The edge nodes can handle admission, scheduling and (re)placement of services, which includes deciding which service to move when there is a need to deploy a new one.
  • The service requests should preferably be moved along the shortest path to the central cloud.
  • The uncoordinated strategies found so far for edge clouds are mainly developed in a proactive way, addressing admission, scheduling and placement of the resources based on strategies such as Least Recently Used (LRU), Least Frequently Used (LFU) or service deadline.
  • This proactive strategy makes it harder to ensure the placement of latency-critical applications at the most optimal edge node location, which is especially important for live-service migration processes supporting mobility scenarios.
  • Resources are typically released either by completely removing the less frequent or less recent services or by moving them all the way to the central cloud, thereby heavily impacting the service, e.g. in terms of Quality of Service (QoS) / Quality of Experience (QoE).

Summary
  • The object is achieved by providing a method performed by a current edge cloud manager for controlling resources in a cloud environment, to be used for handling a requested service s_h.
  • The current edge cloud manager detects that there are not enough available resources managed by the current edge cloud manager for handling the requested service s_h.
  • The current edge cloud manager then evaluates ongoing services using resources managed by the current edge cloud manager, with respect to service priority.
  • When there is an ongoing service s_l using resources managed by the current edge cloud manager with lower service priority than the requested service s_h, the current edge cloud manager transfers the ongoing service s_l to a first neighbour edge cloud manager, thereby releasing resources managed by the current edge cloud manager for handling the requested service s_h.
  • When there is no such ongoing service s_l, the current edge cloud manager transfers the requested service s_h to a second neighbour edge cloud manager.
  • The object is further achieved by providing a current edge cloud manager for controlling resources in a cloud environment, to be used for handling a requested service s_h.
  • The current edge cloud manager is configured to detect that there are not enough available resources managed by the current edge cloud manager for handling the requested service s_h.
  • The current edge cloud manager is further configured to evaluate ongoing services using resources managed by the current edge cloud manager with respect to service priority.
  • When there is an ongoing service s_l using resources managed by the current edge cloud manager with lower service priority than the requested service s_h, the current edge cloud manager is configured to transfer the ongoing service s_l to a first neighbour edge cloud manager, thereby releasing resources managed by the current edge cloud manager to handle the requested service s_h.
  • When there is no such ongoing service s_l, the current edge cloud manager is configured to transfer the requested service s_h to a second neighbour edge cloud manager.
  • It is furthermore provided herein a computer program product comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method above, as performed by the current edge cloud manager. It is additionally provided herein a computer-readable storage medium, having stored thereon a computer program product comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method above, as performed by the current edge cloud manager.
  • A requested service, i.e. a new service to be handled, is herein denoted s_h, and an ongoing service is herein denoted s_l.
  • The advantage of freeing up resources managed by the current edge cloud manager upon congestion may be achieved by detecting that there are not enough available resources managed by the current edge cloud manager for handling a high priority requested service s_h. When there is an ongoing service s_l using resources managed by the current edge cloud manager with lower service priority than the high priority requested service, the ongoing service s_l is transferred to a first neighbour edge cloud manager. Thereby, resources managed by the current edge cloud manager are released to handle the requested service s_h.
  • The embodiments herein allow adaptive handling of resources for high priority services based on load prediction models. Upon congestion, the temporary reallocation of a service with lower priority provides a way to free up resources in a decentralized way.
  • Compared to conventional solutions, the approach according to embodiments herein represents a better strategy.
  • The approach according to embodiments herein does not rely on how frequently or how recently a service has been used, which may not be relevant for mission-critical applications and when non-delayed delivery is desired. Instead, it is more appropriate to rely on the priority of the applications that are requesting to be served.
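As a purely illustrative sketch of the decision logic summarized above, the following Python fragment shows how a manager might detect congestion, compare priorities, and either preempt a lower-priority ongoing service or hand the request over. All names (Service, EdgeCloudManager, handle_request) and the single-scalar resource model are assumptions made for this sketch, not part of the claimed method.

```python
# Minimal sketch of the priority-based preemption logic, under the assumption
# that resources can be modelled as a single scalar capacity per manager.
from dataclasses import dataclass, field


@dataclass
class Service:
    name: str
    priority: int  # higher value = higher service priority
    demand: int    # abstract resource units required


@dataclass
class EdgeCloudManager:
    capacity: int
    ongoing: list = field(default_factory=list)

    def available(self) -> int:
        return self.capacity - sum(s.demand for s in self.ongoing)

    def handle_request(self, s_h: Service,
                       first_neighbour: "EdgeCloudManager",
                       second_neighbour: "EdgeCloudManager") -> str:
        if self.available() >= s_h.demand:
            self.ongoing.append(s_h)  # enough resources: schedule locally
            return "scheduled locally"
        # Evaluate ongoing services with respect to service priority.
        lower = [s for s in self.ongoing if s.priority < s_h.priority]
        if lower:
            # For brevity this sketch assumes that moving one lower-priority
            # service frees enough resources for s_h.
            s_l = min(lower, key=lambda s: s.priority)
            self.ongoing.remove(s_l)             # temporary reallocation of s_l
            first_neighbour.ongoing.append(s_l)  # to the first neighbour
            self.ongoing.append(s_h)             # released resources host s_h
            return f"{s_l.name} moved out; {s_h.name} scheduled locally"
        # No lower-priority ongoing service: transfer the request instead.
        second_neighbour.ongoing.append(s_h)
        return f"{s_h.name} transferred to second neighbour"
```

For example, a manager at full capacity hosting only a priority-2 service would move that service aside for a priority-9 request, whereas a manager hosting only priority-9 services would forward a priority-2 request to its second neighbour.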
  • Fig. 1 is a schematic illustration of a typical centralized cloud environment, according to the prior art, where the management is handled from the central cloud.
  • Fig. 2 is a schematic overview illustrating an example of a first level of a hierarchical resource buffer as a way to model local resources in cloud nodes including edge nodes, according to some example embodiments.
  • Fig. 3 is a schematic overview illustrating an example of a hierarchical model of clouds including edge clouds, according to further example embodiments, where each of the edge clouds has one manager node.
  • Fig. 4 is a schematic overview illustrating an example of a hierarchical model of cloud nodes including edge nodes, according to further example embodiments, where each of the edge nodes is a manager node.
  • Fig. 5 is a schematic overview illustrating an example of how buffer slots are provided by nearby edge clouds and edge nodes to which services are temporarily migrated, according to further example embodiments.
  • Fig. 6a is a schematic diagram illustrating an example of a look-up table to store information regarding neighbor nodes, according to further example embodiments.
  • Fig. 6b is a schematic overview illustrating an example of a system view and the relation between the edge cloud manager and the hierarchical resource buffer.
  • Fig. 7 is a flow chart illustrating a procedure in a current edge cloud manager, according to further example embodiments.
  • Fig. 8 is a flow chart illustrating a procedure of how a current edge cloud manager may operate, according to further example embodiments.
  • Fig. 9 is a signaling diagram illustrating how the current edge cloud manager may operate and interact with other edge cloud managers and a client device, according to further example embodiments.
  • Fig. 10 is a flow chart illustrating another example of how a current edge cloud manager may operate in more detail, according to further example embodiments.
  • Fig. 11 is a block diagram illustrating how a current edge cloud manager may be structured, according to further example embodiments.
  • Embodiments herein mainly focus on edge cloud resources that offer a better QoE for latency-critical services, e.g. a main scenario for live service migration processes as support for mobile applications.
  • A detecting unit in the current edge cloud manager 130 detects that there are not enough available resources for handling a requested service s_h. Therefore, the current edge cloud manager 130 evaluates ongoing services with respect to service priority. When there is an ongoing service s_l using resources managed by the current edge cloud manager 130 with lower service priority than the requested service s_h, the current edge cloud manager 130 transfers the ongoing service s_l to a first neighbour edge cloud manager 140. By doing this, resources managed by the current edge cloud manager 130 are released for handling the requested service s_h.
  • Otherwise, the current edge cloud manager 130 transfers the requested service s_h to a second neighbour edge cloud manager 150.
  • An example of how embodiments herein may be used to address the congestion problem in the current edge cloud manager 130 may be based on the usage of a special resource buffer in the edge cloud manager 130.
  • The functionalities of this buffer allow uninterrupted processing, adaptiveness, workload distribution and location-awareness.
  • A distributed cloud may be modeled as a hierarchical resource buffer.
  • The first level of this hierarchical resource buffer, represented by the local buffer 20, may be in charge of managing local resources as shown in Fig. 2.
  • The second level of this hierarchical resource buffer, represented by the distributed buffer 30, may be in charge of managing remote resources in the different edge clouds, for a partially decentralized cloud environment as shown in Fig. 3, or in the different edge nodes, for a fully decentralized cloud environment as shown in Fig. 4.
  • The local buffer 20 allows reserving or pre-allocating resources on each edge cloud or edge node for high priority services, while the distributed buffer 30 allows freeing up, e.g. releasing, resources when the resources managed by the current edge cloud manager 130 are fully congested.
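A minimal data model for the two buffer levels might look as follows; the class names, the scalar "units" abstraction and the spare() helper are assumptions for illustration only, not an implementation prescribed by the embodiments.

```python
# Illustrative two-level model: a local buffer of pre-allocated capacity per
# edge cloud or edge node, and a distributed buffer whose slots are nearby
# edge cloud managers that can temporarily host displaced services.
from dataclasses import dataclass, field


@dataclass
class LocalBuffer:
    min_reserved: int   # configurable minimum pre-allocation limit
    max_reserved: int   # configurable maximum pre-allocation limit
    reserved: int = 0   # currently pre-allocated for high priority services

    def pre_allocate(self, units: int) -> bool:
        """Reserve local resources for a high priority service, within limits."""
        if self.reserved + units <= self.max_reserved:
            self.reserved += units
            return True
        return False  # local buffer full: fall back to the distributed buffer


@dataclass
class NeighbourSlot:
    manager_id: str
    capacity: int
    used: int = 0

    def spare(self) -> int:
        return self.capacity - self.used


@dataclass
class DistributedBuffer:
    slots: list = field(default_factory=list)  # nearby managers or gateways

    def find_slot(self, demand: int):
        """Return the first neighbour slot with enough spare capacity, if any."""
        return next((s for s in self.slots if s.spare() >= demand), None)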
  • An edge cloud may include a set of edge nodes, but sometimes an edge cloud is referred to herein as an edge node.
  • The scenario depicted in Fig. 3 represents a partially decentralized cloud environment where each edge cloud, made up of a set of edge nodes, has one manager node such as the current edge cloud manager 130.
  • The other scenario, depicted in Fig. 4, represents the case of a fully decentralized cloud environment, where each edge node is a manager node.
  • Each edge cloud manager may have a local buffer 20 for reserving or allocating resource capacity in terms of, e.g., processing, networking, memory and storage.
  • Prediction models may be used to optimize the local buffer management, whose size may be dynamically defined.
  • The maximum and minimum limits to which resources may be pre-allocated, and the local buffer size, may be configurable parameters.
  • The distributed buffer slots, S1-S3, may be represented by a set of nearby edge cloud managers eDC1, eDC2, eDC3, managing edge clouds or edge nodes, or even by virtual gateways vGW1 with computational capabilities, to which services may be temporarily migrated, as depicted in Fig. 5.
  • Such a buffer may allow freeing up resources on a congested node upon a new service allocation request from a service with a higher priority, by temporarily moving away services with lower priority.
  • This methodology improves the likelihood of allocating a latency-critical service at the most optimal location and, at the same time, it helps to keep a high QoE for the temporarily reallocated service on a best effort basis.
  • An ongoing service s_l with lower priority than the requested new service s_h may be temporarily migrated to a new location at a neighbour edge cloud, based on a look-up table 60 containing information as shown in Fig. 6a.
  • The neighbour edge cloud where the ongoing service s_l is temporarily placed may be seen as an active buffer slot where the ongoing service s_l may be live-migrated in such a way that the service availability is not impacted.
  • The proximity to the original location where the ongoing service s_l was running preserves the QoS/QoE to a certain point, thus still providing low latency in the service even after its temporary migration.
  • The local buffer 20 in Fig. 2, the distributed buffer in Figs. 3, 4 and 5, and the look-up table 60 in Fig. 6a are examples of implementation details of how the resource buffer may be modeled as a base for the method performed by the current edge cloud manager 130 to keep track of information regarding the resource availability and resource allocation in each edge cloud or edge node and their proximity.
  • Fig. 6b shows an example of a system view and the relation between the current edge cloud manager 130, the neighbour edge cloud managers 140 and 150, and the hierarchical resource buffer. That is, Fig. 6b depicts how the edge cloud relates to both the local buffer 20 and the distributed buffer 30 of the hierarchical resource buffer.
  • The strategies found so far in conventional solutions do not address the tradeoff between attending a high priority service request and, at the same time, reducing the impact on the lower priority services that are (re)moved.
  • The approach according to embodiments herein increases the likelihood of placing a latency-critical service at the most optimal edge cloud location, especially for live service migration processes supporting mobility scenarios.
  • A method performed by a current edge cloud manager 130 for controlling resources in a cloud environment, to be used for handling a requested service s_h, is provided.
  • Services with lower priority can temporarily be reallocated to make room for services with higher priority.
  • This temporary (re)allocation of running, e.g. ongoing, services in a close-by edge cloud, e.g. a neighbour edge cloud, allows freeing up resources on a congested edge cloud without major repercussions on the QoS of the applications that are being served.
  • A hierarchical resource buffer may be used, enabling low Round Trip Time (RTT) for latency-critical applications by using intelligent resource pre-allocation, live-migration for uninterrupted service delivery, and uncoordinated management.
  • The hierarchical resource buffer lowers the probability of congestion at the most optimal location at a particular time for a high priority service.
  • The destination location may be in an edge node managed by a neighbour edge cloud manager, such as a first neighbour edge cloud manager 140, or a virtual gateway with computational capabilities.
  • The hierarchical resource buffer comprises:
  • A local buffer 20, whose adaptive capacity and size allow the current edge cloud manager 130 to pre-allocate local resources for high priority services in an intelligent way, to reduce the waste of those resources on idle services.
  • An active and distributed buffer 30, made up of a set of edge cloud managers, that allows freeing resources on a congested edge node by temporarily moving a service with lower priority to nearby nodes, such as a first or second neighbour edge node. It represents a way to dynamically make space for a new request with a higher priority, thus improving the likelihood of allocating a new service request at the most optimal edge node.
  • The active and location-aware buffer slots allow the live-migration of services with lower priority than the new service requesting allocation at the congested edge node. This live migration to a nearby node preserves the continuity of the service that is being moved and keeps the best possible QoS/QoE.
  • The resource preemption avoids an ongoing service s_l
  • A first action 700 illustrates that the current edge cloud manager may receive a request for a service s_h, e.g. for hosting a service with high priority, such as a latency-critical service.
  • The requested service s_h may have a service priority h and information, e.g. about the resource requirements to be allocated in terms of amount of required processing resources, amount of required storage and memory, and amount of required networking resources.
  • The above information may be obtained by means of a deterministic load prediction model, e.g. a deterministic task graph model, or any stochastic model, such as a machine learning model.
  • The requested service s_h may be triggered by, e.g., a new service request or a migration process.
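As an illustration of such a request, a hypothetical record carrying the priority and the three resource dimensions named above could look as follows; the field names and units are assumptions, not part of the described method.

```python
# Hypothetical shape of an incoming request for a service s_h; the priority
# value and resource figures might be produced by a load prediction model.
from dataclasses import dataclass


@dataclass
class ServiceRequest:
    name: str
    priority: int          # service priority h
    cpu_cores: float       # required processing resources
    storage_gb: float      # required storage and memory
    bandwidth_mbps: float  # required networking resources


# Example: a latency-critical, high priority request.
s_h = ServiceRequest("s_h", priority=9, cpu_cores=2.0,
                     storage_gb=4.0, bandwidth_mbps=100.0)
```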
  • In a next action 701, it is then evaluated, e.g. detected, whether there are available resources in the current edge cloud to attend this new service request; i.e. the current edge cloud manager 130 then detects that there are not enough available resources managed by the current edge cloud manager 130 for handling the requested service s_h.
  • If there are enough available resources, the current edge cloud manager 130 may schedule the execution of the requested service s_h, as in an action 706.
  • In a further action 702, when there are not enough available resources in the current edge cloud for handling the requested service s_h, the current edge cloud manager 130 proceeds to evaluate the ongoing services that are running on the congested nodes managed by the current edge cloud manager 130, based on their specified priority. In another action 703, it is decided whether there is an ongoing service s_l using resources managed by the current edge cloud manager 130 with lower priority than the requested service s_h.
  • If so, the current edge cloud manager 130 may check information about the neighbour edge clouds or edge nodes, e.g. based on different metrics such as proximity and resource capacity, for where to temporarily reallocate the ongoing service s_l, e.g. by querying a look-up table 60.
  • The ongoing service s_l is then temporarily migrated, e.g. transferred, to a first neighbour edge cloud 140, which may be a child node.
  • Thus, in an action 704, the ongoing service s_l with lower priority than the requested service s_h is transferred to the first neighbour edge cloud manager 140, thereby releasing resources managed by the current edge cloud manager 130 for handling the requested service s_h.
  • In a next action 705, when there is no ongoing service s_l using resources managed by the current edge cloud manager 130 with lower priority than the requested service s_h, the current edge cloud manager 130 may check the second-best location for the requested service s_h, e.g. by querying the look-up table 60 and comparing the service requirements. Thus, the requested service s_h is transferred to a second neighbour edge cloud manager 150.
  • An example of how the embodiments herein may be employed in terms of actions which may be performed by the current edge cloud manager 130 is illustrated by the flow chart in Fig. 8, which will now be described with further reference to Fig. 7, although this procedure is not limited to the example of Fig. 7.
  • The actions in Fig. 8 could thus be performed by the current edge cloud manager 130, which is operable to control resources in a cloud environment, to be used for handling a requested service s_h.
  • The actions may be taken in any suitable order. Actions that are optional are presented in dashed boxes in Fig. 8.
  • A first action 800 illustrates that the current edge cloud manager 130 detects that there are not enough available resources managed by the current edge cloud manager 130 for handling the requested service s_h, which corresponds to the above action 701.
  • In a next action 802, the current edge cloud manager 130 evaluates ongoing services using resources managed by the current edge cloud manager 130 with respect to service priority. This action corresponds to the above action 702.
  • In a further action 804, when there is an ongoing service s_l using resources managed by the current edge cloud manager 130 with lower service priority than the requested service s_h, i.e. when priority(s_l) < priority(s_h), the current edge cloud manager 130 transfers the ongoing service s_l to a first neighbour edge cloud manager 140. Thereby, resources managed by the current edge cloud manager 130 for handling the requested service s_h are released.
  • This action corresponds to the above actions 703 and 704.
  • The action 806 illustrates that when there is no ongoing service s_l using resources managed by the current edge cloud manager 130 with lower service priority than the requested service s_h, i.e. when priority(s_l) ≥ priority(s_h) for all ongoing services, the current edge cloud manager 130 transfers the requested service s_h to a second neighbour edge cloud manager 150, as in the above actions 703 and 705.
  • A final action 808 illustrates the case when there are available resources, or after resources have been released.
  • The current edge cloud manager 130 may then schedule the execution of the requested service s_h. This action corresponds to the above action 706.
  • Either of the first neighbour edge cloud manager 140 and the second neighbour edge cloud manager 150 may be identified based on information about resources managed by neighbour edge clouds in a look-up table 60.
  • Said information in the look-up table 60 may comprise any one or more of: amount of available processing resources, amount of available storage and memory, amount of available networking resources, and communication distance to the neighbour edge clouds.
  • The information in the look-up table 60 may continuously or regularly be updated through communication with the respective neighbour edge cloud managers.
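A sketch of such a look-up table and a simple neighbour-selection query is shown below. The columns follow the list above, while the ranking rule (closest neighbour that fits the requirements) and the update mechanism are assumptions for illustration only.

```python
# Illustrative look-up table 60: one row per neighbour edge cloud, holding the
# advertised available resources and the communication distance.
from dataclasses import dataclass
from typing import Optional


@dataclass
class NeighbourEntry:
    manager_id: str
    cpu_cores: float       # available processing resources
    storage_gb: float      # available storage and memory
    bandwidth_mbps: float  # available networking resources
    distance: float        # communication distance to the neighbour


def best_neighbour(table: list, cpu: float, storage: float,
                   bandwidth: float) -> Optional[NeighbourEntry]:
    """Pick the closest neighbour that can fit the given requirements."""
    fitting = [e for e in table
               if e.cpu_cores >= cpu and e.storage_gb >= storage
               and e.bandwidth_mbps >= bandwidth]
    return min(fitting, key=lambda e: e.distance, default=None)


def refresh(table: list, fresh: NeighbourEntry) -> None:
    """Replace a row, e.g. on a regular update received from that neighbour."""
    table[:] = [e for e in table if e.manager_id != fresh.manager_id] + [fresh]
```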
  • Said service priority may be determined based on type of service and/or subscription.
  • Said detecting may be performed by checking the amount of available resources in a distributed resource buffer in the current edge cloud manager 130, wherein the resource buffer may comprise pre-allocated resources.
  • The embodiments herein focus on edge cloud resources where a better QoE may be offered at the time of dealing with latency-critical services, even though these resources are scarcer when compared to a central cloud. This case represents the main scenario for live-service migration processes as support for mobile applications.
  • The edge cloud to which the service is temporarily moved may be seen as an active buffer slot where the service may be live-migrated, e.g. transferred, in such a way that the service availability is not impacted.
  • The hierarchical model of the resource buffer is described by Fig. 2, Fig. 3 and Fig. 4.
  • The first level, the local buffer 20 managed by each edge cloud manager 130, is shown in Fig. 2.
  • The buffer slots represent the pre-allocated resources for each high priority service S in terms of, e.g., processing p, memory m, networking n and storage s.
  • The minimum and maximum capacities that may be reserved are configurable.
  • Other local components are Load Prediction Models based on, for example, machine intelligence mechanisms that allow setting the required capacities to be allocated for each particular service.
  • Another option is to make use of Mobility Planner information, for example from GPS navigation apps or, in the case of drones, flight planning applications. These applications are usually hosted in more central cloud locations.
  • The second level, represented by the distributed buffer 30, allows freeing up resources on a particular node when the maximum capacity of the local buffer 20 is reached and there is a need to host a high priority service.
  • The strategy it applies temporarily pushes services with lower priority that are running on the congested node, e.g. a node managed by the current edge cloud manager 130, towards close-by edge nodes, e.g. a node managed by a first neighbour edge cloud manager 140, using live-service migration to ensure uninterrupted service.
  • A neighbour look-up table, e.g. the look-up table 60 depicted in Fig. 6a, is regularly updated using, for example, peer-to-peer communication, with information regarding the distance among the edge clouds or edge nodes and the available capacities.
  • The hierarchical resource buffer may be characterized by the following features:
  • The service priority may be based on different metrics, such as clients with premium accounts or mission-critical services.
  • The local buffer 20 and its adaptiveness allow reserving, e.g. allocating, resources for high priority applications in a smart way so that the pre-allocated resources are not just kept idle.
  • This intelligent strategy may be based on load models considering location as a parameter. The models may be learned by using machine intelligence strategies, and may make use of known information such as a mobility plan, usually hosted in a central location, or the load of high priority services on neighbour nodes.
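Purely to illustrate the idea of such a load model, the toy predictor below sizes the pre-allocation from an exponential moving average of observed high priority demand, clamped to the configurable limits. The smoothing factor is an arbitrary assumption; real deployments may instead use learned models or mobility-plan information as described above.

```python
# Toy load model for sizing the local buffer: an exponential moving average of
# observed high priority demand, clamped to the configurable min/max limits.
def next_reservation(observed_demand: float, previous_estimate: float,
                     min_reserved: float, max_reserved: float,
                     alpha: float = 0.3) -> float:
    estimate = alpha * observed_demand + (1 - alpha) * previous_estimate
    return max(min_reserved, min(max_reserved, estimate))


# Example: demand spikes to 8 units while the running estimate was 4 units.
print(next_reservation(8.0, 4.0, min_reserved=2.0, max_reserved=10.0))  # 5.2
```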
  • The active and distributed buffer 30 allows freeing up, e.g. releasing, resources on a congested edge node, e.g. a node managed by the current edge cloud manager 130.
  • The current edge cloud manager 130 reviews in its local buffer whether there exists a service with a lower priority. Assume that an ongoing service s_l with priority l fulfils this condition. The current edge cloud manager 130 then proceeds to temporarily migrate, e.g. transfer, this service to a nearby node, e.g. a node managed by the first neighbour edge cloud manager 140, with available capacity to allocate this service.
  • This procedure represents a way to dynamically make space for a new service request with a higher priority, thus improving the likelihood of allocating a new service request in the most optimal edge cloud.
  • The slots of the distributed and location-aware buffer may be made up of nearby edge cloud managers, e.g. the first neighbour edge cloud manager 140 or the second neighbour edge cloud manager 150, as depicted in Fig. 6b.
  • These nearby edge nodes may refer to either a parent_node, a sibling_node or a child_node, as depicted in Fig. 4, or a parent_cloud, a sibling_cloud or a child_cloud, as depicted in Fig. 3, considering the location among the edge nodes and even their proximity to the UE.
  • The active buffer slots allow the live-migration of services with lower priority than the new service requesting allocation at the congested and optimal edge node current_node, e.g. a node managed by the current edge cloud manager 130.
  • This live migration to a nearby node allows preserving the continuity of the service that is being moved and keeping the best possible QoE.
  • A message sequence chart (MSC), showing steps and actions according to some embodiments, is depicted in Fig. 9.
  • This example procedure involves a current manager corresponding to the above-described current edge cloud manager 130, which provides execution of a service s_h requested by a client device depicted on the left side of the figure.
  • The requested service s_h is executed by using resources in one or more neighbour edge clouds, including neighbour edge clouds managed by a parent manager, a sibling manager and a child manager, which neighbour edge clouds are arranged in a hierarchical model as shown in Fig. 3 and Fig. 4.
  • In step 1, a new service request s_h is received for a service with high priority at the edge manager current_manager, e.g. the current edge cloud manager 130.
  • In step 2, it is evaluated by the current edge cloud manager 130 whether there are available local resources managed by the current edge cloud manager 130 to handle the new service request s_h.
  • In step 3, in case there are available resources managed by the current edge cloud manager 130 to handle the new service request s_h, the current manager, e.g. the current edge cloud manager 130, accepts the requested service s_h.
  • In step 4, the current manager, e.g. the current edge cloud manager 130, triggers either the service migration process or a normal deployment of the requested service s_h.
  • In step 5, in case there are no available resources managed by the current edge cloud manager 130 to handle the new service request s_h, the current manager, e.g. the current edge cloud manager 130, proceeds to evaluate the ongoing services using resources managed by the current edge cloud manager 130, based on the services' specified priority.
  • In step 6, in case there is an ongoing service s_l using resources managed by the current edge cloud manager 130 with lower priority than the requested service s_h, the current edge cloud manager 130 checks for the most suitable neighbour edge cloud, based on different metrics such as proximity, distance and networking capacity, where to temporarily reallocate the ongoing service s_l, by querying a look-up table 60.
  • The most optimal placement may be assumed to be in a child_manager, e.g. the first neighbour edge cloud manager 140.
  • In step 7, the ongoing service s_l is then temporarily migrated, e.g. transferred, to the child_manager, e.g. the first neighbour edge cloud manager 140, thus releasing resources for the new requested service s_h.
  • In step 8, the requested service s_h may then either be migrated or deployed at the most optimal edge location, i.e. a node managed by the current_manager, i.e. the current edge cloud manager 130.
  • In step 9, in case there is no ongoing service s_l using resources managed by the current edge cloud manager 130 with lower priority than the requested service s_h, the current edge cloud manager 130 may then check for the second-best suitable location for the requested service s_h by querying the look-up table 60 and comparing with the service requirements.
  • The second-best suitable location may be assumed to be in a node managed by the sibling_manager, e.g. the second neighbour edge cloud manager 150.
  • In step 10, the requested service s_h may then be migrated or deployed in a node managed by the sibling_manager, i.e. the second neighbour edge cloud manager 150.
  • A first action 1000 illustrates that the current edge cloud manager may receive a request for a service s_h, e.g. for hosting a service with high priority, such as a latency-critical service.
  • The requested service s_h may have a service priority h and information, e.g. about the resource requirements to be allocated in terms of amount of required processing resources, amount of required storage and memory, and amount of required networking resources.
  • The information may come from a deterministic model or any stochastic model, such as a machine learning model.
  • The request for a service s_h may be triggered by, e.g., a new service request or a migration process.
  • In a next action 1001, it is then detected whether there are enough available resources to handle this new service request, i.e. the current edge cloud manager 130 detects whether there are enough available resources managed by the current edge cloud manager 130 for handling the requested service s_h.
  • In a next action 1002, if there are enough available resources managed by the current edge cloud manager 130 for handling the requested service s_h, the current edge cloud manager 130 may trigger scheduling of deployment for the requested service s_h. Thus, the current edge cloud manager 130 may then schedule the execution of the requested service s_h.
  • Otherwise, the current edge cloud manager 130 may check the neighbour edge clouds, e.g. based on different metrics such as proximity, networking capacity and distance, for where to temporarily reallocate the ongoing service s_l, e.g. by querying a look-up table 60.
  • In a further action, the current edge cloud manager 130 determines whether there is any neighbour edge cloud with enough resources for the ongoing service s_l.
  • If so, the ongoing service s_l is then temporarily migrated, e.g. transferred, to this neighbour edge cloud, e.g. one managed by a first neighbour edge cloud manager 140. Thereby, resources are released for the new requested service s_h. The procedure may then return to action 1001.
  • If there is no such neighbour edge cloud, the current edge cloud manager 130 may check the second-best location for the requested service s_h by, e.g., querying the look-up table 60 and comparing the service requirements.
  • In a further action, the current edge cloud manager determines whether there is any neighbour edge cloud with enough resources for the requested service s_h.
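Under the stated assumptions, the loop structure of this flow could be sketched as follows; the manager objects and their available()/ongoing interface are hypothetical, and the policy of always preempting the lowest-priority service first is an illustrative choice rather than a requirement.

```python
# Sketch of the Fig. 10 flow: repeatedly offload the lowest-priority ongoing
# service to any neighbour that can host it until the requested service fits
# locally; otherwise fall back to the second-best location for the request.
def place(manager, s_h, neighbours):
    while manager.available() < s_h.demand:
        lower = [s for s in manager.ongoing if s.priority < s_h.priority]
        if not lower:
            break  # nothing left to preempt
        s_l = min(lower, key=lambda s: s.priority)
        host = next((n for n in neighbours if n.available() >= s_l.demand), None)
        if host is None:
            break  # no neighbour has enough resources for s_l
        manager.ongoing.remove(s_l)  # temporary live-migration of s_l
        host.ongoing.append(s_l)
    if manager.available() >= s_h.demand:
        manager.ongoing.append(s_h)  # deploy at the most optimal node
        return manager
    # Second-best location: a neighbour with enough resources for s_h itself.
    return next((n for n in neighbours if n.available() >= s_h.demand), None)
```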
  • Fig. 11 is a block diagram depicting the current edge cloud manager 130 for controlling resources in a cloud environment according to embodiments herein.
  • The current edge cloud manager 130 may comprise processing circuitry 1101, e.g. one or more processors, configured to perform the methods herein.
  • The current edge cloud manager 130 may comprise a detecting unit 1102.
  • The current edge cloud manager 130, the processing circuitry 1101, and/or the detecting unit 1102 is configured to detect that there are not enough available resources managed by the current edge cloud manager 130 for handling the requested service s_h.
  • The current edge cloud manager 130 may comprise an evaluating unit 1103.
  • The current edge cloud manager 130, the processing circuitry 1101, and/or the evaluating unit 1103 is configured to evaluate ongoing services using resources managed by the current edge cloud manager 130 with respect to service priority.
  • The current edge cloud manager 130 may comprise a transferring unit 1104.
  • The current edge cloud manager 130, the processing circuitry 1101, and/or the transferring unit 1104 is configured to, when there is an ongoing service s_l using resources managed by the current edge cloud manager 130 with lower service priority than the requested service s_h, transfer the ongoing service s_l to a first neighbour edge cloud manager 140, thereby releasing resources managed by the current edge cloud manager 130 for handling the requested service s_h.
  • The current edge cloud manager 130, the processing circuitry 1101, and/or the transferring unit 1104 is configured to, when there is no ongoing service s_l using resources managed by the current edge cloud manager 130 with lower service priority than the requested service s_h, transfer the requested service s_h to a second neighbour edge cloud manager 150.
  • The current edge cloud manager 130 may comprise a scheduling unit 1105.
  • The current edge cloud manager 130, the processing circuitry 1101, and/or the scheduling unit 1105 may be configured to schedule the execution of the requested service s_h.
  • The current edge cloud manager 130 further comprises a memory 1106.
  • The memory 1106 may be a flash memory, a Random-Access Memory (RAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable ROM (EEPROM) or Hard Drive storage (HDD).
  • The memory 1106 comprises one or more units to be used to store data, such as resources for computing, processing and storing of data, events and applications that, when executed, perform the methods disclosed herein, and similar.
  • The current edge cloud manager 130 may comprise a communication interface, e.g. comprising a transmitter, a receiver and/or a transceiver.
  • The methods according to the embodiments described herein for the current edge cloud manager 130 are respectively implemented by means of, e.g., a computer program product 1107 or a computer program, comprising instructions, i.e. software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the current edge cloud manager 130.
  • The computer program product 1107 may be stored on a computer-readable storage medium 1108, e.g. a disc, a universal serial bus (USB) stick or similar.
  • The computer-readable storage medium 1108, having stored thereon the computer program product, may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the current edge cloud manager 130.
  • The computer-readable storage medium may be a transitory or a non-transitory computer-readable storage medium.
  • Embodiments herein may disclose a current edge cloud manager for controlling resources in a cloud environment, wherein the current edge cloud manager comprises processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry, whereby said current edge cloud manager is operative to perform any of the methods herein.
  • Fig. 11 illustrates various functional modules or units in the current edge cloud manager 130, and the skilled person is able to implement these functional modules in practice using suitable software and hardware.
  • The solution is generally not limited to the shown structures of the current edge cloud manager 130, and the functional modules or units 1102-1105 therein may be configured to operate according to any of the features and embodiments described in this disclosure, where appropriate.
  • The embodiments herein may thus be implemented in the current edge cloud manager 130 by a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions according to any of the above embodiments and examples, where appropriate.
  • The solution may also be implemented in a carrier containing the above computer program, wherein the carrier could be one of an electronic signal, an optical signal, a radio signal, or a computer-readable storage product or computer program product.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method and a current edge cloud manager (130) for controlling resources in a cloud environment, to be used for handling a requested service (s_h), are provided. The current edge cloud manager (130) detects that there are not enough available resources managed by the current edge cloud manager (130) for handling the requested service (s_h), and then evaluates ongoing services using resources managed by the current edge cloud manager (130) with respect to service priority. When there is an ongoing service (s_l) with lower service priority than the requested service (s_h), the current edge cloud manager (130) transfers the ongoing service (s_l) to a first neighbour edge cloud manager (140), thereby releasing resources for handling the requested service (s_h). When there is no ongoing service (s_l) with lower service priority than the requested service (s_h), the current edge cloud manager (130) transfers the requested service (s_h) to a second neighbour edge cloud manager (150).
PCT/SE2019/050221 2019-03-12 2019-03-12 Method and current edge cloud manager for controlling resources Ceased WO2020185132A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2019/050221 WO2020185132A1 (fr) Method and current edge cloud manager for controlling resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2019/050221 WO2020185132A1 (fr) Method and current edge cloud manager for controlling resources

Publications (1)

Publication Number Publication Date
WO2020185132A1 true WO2020185132A1 (fr) 2020-09-17

Family

ID=65991879

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2019/050221 Ceased WO2020185132A1 (fr) Method and current edge cloud manager for controlling resources

Country Status (1)

Country Link
WO (1) WO2020185132A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170366472A1 (en) * 2016-06-16 2017-12-21 Cisco Technology, Inc. Fog Computing Network Resource Partitioning
WO2018144060A1 (fr) * 2017-02-05 2018-08-09 Intel Corporation Provisioning and managing microservices
EP3462316A1 (fr) * 2017-09-29 2019-04-03 NEC Laboratories Europe GmbH System and method for supporting network slicing in an MEC system, providing automatic conflict resolution arising from multiple tenancy in the MEC environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
O. Ascigil, T. K. Phan, A. G. Tasiopoulos, V. Sourlas, I. Psaras, G. Pavlou, "On Uncoordinated Service Placement in Edge-Clouds", 2017 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2017, pp. 41-48, XP033281020, DOI: 10.1109/CloudCom.2017.46

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112468569A (zh) * 2020-11-23 2021-03-09 华能国际电力股份有限公司 Real-time production supervision architecture based on "cloud computing" industrial video cascading
CN113452751A (zh) * 2021-05-20 2021-09-28 国网江苏省电力有限公司信息通信分公司 Cloud-edge collaboration based secure task migration system and method for the electric power Internet of Things
CN113849364A (zh) * 2021-07-29 2021-12-28 浪潮软件科技有限公司 Edge application management method, apparatus, device and readable storage medium
CN113849364B (zh) * 2021-07-29 2023-12-26 浪潮软件科技有限公司 Edge application management method, apparatus, device and readable storage medium
CN113590324A (zh) * 2021-07-30 2021-11-02 郑州轻工业大学 Heuristic task scheduling method and system for cloud-edge-end collaborative computing
CN114885028A (zh) * 2022-05-25 2022-08-09 国网北京市电力公司 Service scheduling method, apparatus and computer-readable storage medium
CN114885028B (zh) * 2022-05-25 2024-01-23 国网北京市电力公司 Service scheduling method, apparatus and computer-readable storage medium
CN116132535A (zh) * 2022-12-20 2023-05-16 中国电信股份有限公司 Service deployment scheduling method, apparatus, electronic device and storage medium
CN116170444A (zh) * 2023-02-10 2023-05-26 平安科技(深圳)有限公司 Artificial-intelligence-based edge node resource allocation method and related device
CN119728555A (zh) * 2024-11-21 2025-03-28 中诚智信工程咨询集团股份有限公司 Integrated processing method and system for engineering monitoring data

Similar Documents

Publication Publication Date Title
WO2020185132A1 (fr) Method and current edge cloud manager for controlling resources
US20230007662A1 (en) Dynamic slice priority handling
US11463554B2 (en) Systems and methods for dynamic multi-access edge allocation using artificial intelligence
EP3295630B1 (fr) System and methods for virtual infrastructure management between operator networks
KR102034532B1 (ko) System and method for provisioning and distributing spectrum resources
US10993127B2 (en) Network slice instance management method, apparatus, and system
WO2019230659A1 (fr) Communication system
CN107690822B (zh) Network management method, device, system and computer-readable storage medium
CN113875192A (zh) First entity, second entity, third entity, and methods performed thereby for providing a service in a communications network
US12294523B2 (en) Application instance deployment method, application instance scheduling method, and apparatus
KR102128357B1 (ko) Address allocation method, gateway and system
WO2019029704A1 (fr) Network object management method and related apparatus
KR102389334B1 (ko) Virtual machine provisioning system and method for cloud services
JP2008177846A (ja) Radio base station apparatus and radio resource management method
CN107665143A (zh) Resource management method, apparatus and system
WO2020057490A1 (fr) Service resource management method and apparatus, network device, and machine-readable storage medium
WO2019029645A1 (fr) Network slice management method and apparatus
WO2018143235A1 (fr) Management system, management device, device, management method, and program
CN119698824A (zh) Systems and methods for on-demand edge platform computing
CN108667956B (zh) IP address pool management method in a 5G system
US12301474B2 (en) Network packet handling
US10986036B1 (en) Method and apparatus for orchestrating resources in multi-access edge computing (MEC) network
JP5412656B2 (ja) Communication system and communication control method
US20250267696A1 (en) Methods and Apparatuses for Mapping a Service Request to Radio Resources and Transport Resources in a Network
WO2019218294A1 (fr) IP address pool management method in a 5G system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19714500

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19714500

Country of ref document: EP

Kind code of ref document: A1