
WO2025074404A1 - Method and system for automatic scaling of one or more nodes - Google Patents


Info

Publication number
WO2025074404A1
Authority
WO
WIPO (PCT)
Prior art keywords
nodes
peegn
resources
request
transceiver unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051963
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Ankit Murarka
Rizwan Ahmad
Kapil Gill
Arpit Jain
Shashank Bhushan
Jugal Kishore
Meenakshi Sarohi
Kumar Debashish
Supriya Kaushik DE
Gaurav Kumar
Kishan Sahu
Gaurav Saxena
Vinay Gayki
Mohit Bhanwria
Durgesh KUMAR
Rahul Kumar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025074404A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Definitions

  • Embodiments of the present disclosure generally relate to the field of network management. More particularly, the present disclosure may relate to a method and system for automatic scaling of one or more nodes.
  • VNF/VNFC virtual network functions
  • CNF/CNFC containerized functions
  • PEEGN Policy Execution Engine
  • An aspect of the present disclosure may relate to a method for automatic scaling of one or more nodes.
  • the method comprises receiving, by a transceiver unit at a Policy Execution Engine (PEEGN), from a Network Function Virtualization Platform Decision and Analytics (NPDA), a request for executing an automatic scaling policy for the one or more nodes. Further, the method comprises fetching, by the transceiver unit at the PEEGN, a set of data relating to the one or more nodes.
  • NPDA Network Function Virtualization Platform Decision and Analytics
  • the method further comprises sending, by the transceiver unit, from the PEEGN a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM). Further, the method comprises analysing, by a processing unit at the PEEGN, a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy.
  • PVIM physical and virtual inventory manager
  • the method comprises transmitting, by the transceiver unit from the PEEGN, to the PVIM, a request for one of reserving and unreserving the one or more resources. Thereafter, the method comprises triggering, by the transceiver unit, from the PEEGN, the automatic scaling request to a node manager, based on a response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources.
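  • As an illustration of the overall flow summarized above, a minimal Python sketch follows. All class, unit, and method names (e.g., fetch_node_details, get_resource_view, reserve_or_unreserve) are hypothetical assumptions for illustration only, not the claimed implementation:

    class PEEGN:
        """Illustrative sketch of the claimed scaling flow; names are assumptions."""

        def __init__(self, transceiver, processor, storage, pvim, node_manager):
            self.transceiver = transceiver
            self.processor = processor
            self.storage = storage
            self.pvim = pvim
            self.node_manager = node_manager

        def handle_scaling_request(self, npda_request):
            # 1. Receive the automatic scaling policy request from the NPDA.
            policy = npda_request.policy
            # 2. Fetch node data from the node components and save it.
            node_data = self.transceiver.fetch_node_details(npda_request.nodes)
            self.storage.save(node_data)
            # 3. Ask the PVIM for used, allocated-quota, and available resources.
            inventory = self.pvim.get_resource_view(npda_request.nodes)
            # 4. Analyse demand against constraints and the scaling policy.
            demand = self.processor.analyse_demand(inventory, node_data, policy)
            # 5. Reserve (scale-out) or unreserve (scale-in) resources at the PVIM.
            reservation = self.pvim.reserve_or_unreserve(demand)
            # 6. Trigger scaling at the node manager, passing the PVIM tokens.
            ack = self.node_manager.trigger_scaling(demand, reservation.tokens)
            return ack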
  • the one or more nodes comprise at least one of virtual network functions (VNFs), virtual network function components (VNFCs), container network functions (CNFs), and container network function components (CNFCs).
  • fetching, by the transceiver unit at the PEEGN node, the set of data relating to the one or more nodes comprises at least one of transmitting, by the transceiver unit, from the PEEGN a request to one or more node components associated with the one or more nodes to fetch the set of data related to the one or more nodes and saving, by a storage unit, at the PEEGN, the set of data related to the one or more nodes in a database.
  • the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size that can be allocated to the one or more nodes.
  • automatic-scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes.
  • the response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes.
  • the triggering, by the transceiver unit, from the PEEGN, the automatic scaling request to a node manager comprises the one or more tokens for each of the one or more nodes.
  • prior to transmitting, by the transceiver unit from the PEEGN, to the PVIM, a request for reserving the one or more resources, the method comprises updating, by the storage unit, at the PEEGN, the one or more current used resources by each of the one or more nodes, in the database.
  • the method further comprises receiving, by the transceiver unit, at the PEEGN, an acknowledgement response from the node manager and transmitting, by the transceiver unit, from the PEEGN, a response to the NPDA, of the automatic scaling of the one or more nodes.
  • Another aspect of the present disclosure may relate to a system for automatic scaling of one or more nodes.
  • the system comprises a transceiver unit configured to receive at a Policy Execution Engine (PEEGN), from a Network Function Virtualization Platform Decision and Analytics (NPDA), a request for executing an automatic scaling policy for the one or more nodes.
  • the transceiver unit is further configured to fetch at the PEEGN, a set of data relating to the one or more nodes.
  • the transceiver unit is configured to send from the PEEGN a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM).
  • the system comprises a processing unit, configured to analyse at the PEEGN, a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy.
  • the transceiver unit is configured to transmit from the PEEGN, to the PVIM, a request for one of reserving and unreserving the one or more resources. Moreover, the transceiver unit is configured to trigger from the PEEGN, the automatic scaling request to a node manager, based on a response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources.
  • Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing one or more instructions for automatic scaling of one or more nodes, the instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit of the system to receive at a Policy Execution Engine (PEEGN), from a Network Function Virtualization Platform Decision and Analytics (NPDA), a request for executing an automatic scaling policy for the one or more nodes. Further, the executable code when executed causes the transceiver unit to fetch at the PEEGN, a set of data relating to the one or more nodes.
  • the executable code when executed causes the transceiver unit to send from the PEEGN a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM).
  • the executable code when further executed causes a processing unit of the system to analyse at the PEEGN, a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy.
  • the executable code when executed causes the transceiver unit to transmit from the PEEGN, to the PVIM, a request for one of reserving and unreserving the one or more resources.
  • the executable code when executed causes the transceiver unit to trigger from the PEEGN, the automatic scaling request to a node manager, based on a response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources.
  • FIG. 1 illustrates an exemplary block diagram of a management and orchestration (MANO) architecture, in accordance with exemplary implementation of the present disclosure.
  • MANO management and orchestration
  • FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • FIG. 3 illustrates an exemplary block diagram of a system for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
  • FIG. 4 illustrates an exemplary flow diagram of a method for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
  • FIG. 5 illustrates an exemplary session flow diagram for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
  • The word "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples.
  • any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
  • a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
  • a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure.
  • As used herein, "a user equipment", "a user device", "a smart-user-device", "a smart-device", "an electronic device", "a mobile device", "a handheld device", "a wireless communication device", "a mobile communication device", "a communication device" may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure.
  • the user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure.
  • the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
  • storage unit or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
  • a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
  • the storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
  • interface refers to a shared boundary across which two or more separate components of a system exchange information or data.
  • the interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
  • modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
  • the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
  • the Policy Execution Engine (PEEGN) provides network function virtualisation (NFV) and software defined network (SDN) platform functionality to support dynamic requirements of resource management and network service orchestration in the virtualized network. Further, the PEEGN is involved during the CNF instantiation flow to check for the CNF policy and to reserve the resources required to instantiate the CNF at the PVIM. The PEEGN supports a scaling policy for CNFCs.
  • NFV network function virtualisation
  • SDN software defined network
  • CMP Capacity Manager Platform
  • the current known solutions have several shortcomings. For implementing proper resource allocation, there are several challenges, such as excessive provisioning of resources, insufficient provisioning of resources, resource failures, resource mismanagement, performance degradation, conflicts during reservation and allocation of resources, unavailability of the Policy Execution Engine service, time consumed in reservation and allocation of VNF/VNFC/CNFC/CNF resources, and cost increment.
  • the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for automatic scaling of one or more nodes. More particularly, the present disclosure provides a solution to apply automatic scale constraints based on policies that are applicable to VNF/VNFC/CNF/CNFC for automatic scaling of resources.
  • the present disclosure provides a solution to apply automatic scale constraints based on affinity, anti-affinity, dependents and deployment flavor. Further, the present disclosure provides a solution that leads to zero-data-loss policies while VNF/VNFC/CNF/CNFC resources are scaling up. Furthermore, the present disclosure provides a solution that supports event-driven scaling.
  • in FIG. 1, an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform [100], in accordance with exemplary implementation of the present disclosure, is illustrated.
  • the MANO architecture [100] may be developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/ service(s) etc.
  • the MANO architecture [100] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
  • the system as provided by the present disclosure may comprise one or more components of the MANO architecture [100].
  • the MANO architecture [100] may be used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
  • the MANO architecture comprises a user interface layer [102], a Network Function Virtualization (NFV) and Software Defined Network (SDN) Design Function module [104], a Platform Foundation Services module [106], a Platform Core Services module [108] and a Platform Resource Adapters and Utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
  • the NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
  • the VNF lifecycle manager (compute) [1042] may be responsible for deciding on which server of the communication network the microservice will be instantiated.
  • the VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user.
  • the VNF lifecycle manager (compute) [1042] may be responsible for determining which sequence to be followed for executing the process.
  • the VNF catalogue [1044] stores the metadata of all the VNFs (also CNFs in some cases).
  • the network services catalogue [1046] stores the information of the services that need to be run.
  • the network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network service/ network functions (NFs)) that must be applied to a specific networked data packet.
  • the physical and virtual inventory manager (PVIM) [1050] stores the logical and physical inventory of the VNFs.
  • the CNF lifecycle manager [1052] may be used for the CNFs lifecycle management.
  • the platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070].
  • the microservices elastic load balancer [1062] may be used for maintaining the load balancing of the request for the services.
  • the identity and access manager [1064] may be used for logging purposes.
  • the command line interface (CLI) [1066] may be used to provide commands to execute certain processes which require changes during the run time.
  • the central logging manager [1068] may be responsible for keeping the logs of every service. These logs are generated by the MANO platform [100]. These logs are used for debugging purposes.
  • the event routing manager [1070] may be responsible for routing the events i.e., the application programming interface (API) hits to the corresponding services.
  • API application programming interface
  • the platforms core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager (CMM) [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a Platform Schedulers and Cron Jobs (PSC) service [1100], a VNF backup & restore manager [1102], a microservice auditor [1104], and a platform operations, administration and maintenance manager [1106].
  • the NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs.
  • the assure manager [1084] may be responsible for supervising the alarms the vendor may be generating.
  • the performance manager [1086] may be responsible for managing the performance counters.
  • the Policy Execution Engine (PEEGN) [1088] may be responsible for managing all of the policies.
  • the capacity monitoring manager (CMM) [1090] may be responsible for sending the request to the PEEGN [1088].
  • the release management (mgmt.) repository (RMR) [1092] may be responsible for managing the releases and the images of all of the vendor's network nodes.
  • the configuration manager & GCT [1094] manages the configuration and GCT of all the vendors.
  • the NFV Platform Decision Analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It may be further noted that the policy execution engine (PEEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together.
  • the platform NoSQL DB [1098] may be a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs.
  • the platform schedulers and cron jobs (PSC) service [1100] schedules tasks such as, but not limited to, triggering an event, traversing the network graph, etc.
  • the VNF backup & restore manager [1102] takes backup of the images, binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure.
  • the microservice auditor [1104] audits the microservices. For example, in a hypothetical case, instances not instantiated by the MANO architecture [100] may be using the network resources. In such cases, the microservice auditor [1104] audits and informs the same so that resources can be released for services running in the MANO architecture [100]. The audit assures that the services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] may be used for newer instances that are spawning.
  • the platform resource adapters and utilities module [112] further comprises a platform external API adapter and gateway [1122], a generic decoder and indexer (XML, CSV, JSON) [1124], a service adapter [1126], an API adapter [1128], and an NFV gateway [1130].
  • the platform external API adapter and gateway [1122] may be responsible for handling the external services (to the MANO platform [100]) that require the network resources.
  • the generic decoder and indexer (XML, CSV, JSON) [1124] directly obtains the data of the vendor system in the XML, CSV, or JSON format.
  • the service adapter [1126] may be the interface provided between the telecom cloud and the MANO architecture [100] for communication.
  • the API adapter [1128] may be used to connect with the virtual machines (VMs).
  • the NFV gateway [1130] may be responsible for providing the path to each service going to/incoming from the MANO architecture [100].
  • the service adapter (SA) [1126] is a microservices-based system designed to deploy and manage Container Network Functions (CNFs) and their components (CNFCs) across nodes.
  • the SA [1126] offers REST endpoints for key operations, including uploading container images to a registry, terminating CNFC instances, and creating volumes and networks.
  • CNFs which are network functions packaged as containers, may consist of multiple CNFCs.
  • the SA [1126] facilitates the deployment, configuration, and management of these components by interacting with APIs, ensuring proper setup and scalability within a containerized environment. This approach provides a modular and flexible framework for handling network functions in a virtualized network setup.
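  • A hypothetical sketch of such a REST surface follows; the disclosure names only the operations (image upload, CNFC termination, volume and network creation), so the paths and HTTP verbs below are assumptions for illustration:

    # Hypothetical REST endpoints for the service adapter; paths are illustrative only.
    SA_ENDPOINTS = {
        "upload_image":   ("POST",   "/registry/images"),     # push a container image to the registry
        "terminate_cnfc": ("DELETE", "/cnfc/{instance_id}"),  # terminate a CNFC instance
        "create_volume":  ("POST",   "/volumes"),             # create a storage volume
        "create_network": ("POST",   "/networks"),            # create a network
    }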
  • in FIG. 2, an exemplary block diagram of a computing device [200] (also referred to herein as a computer system [200]), upon which the features of the present disclosure may be implemented, in accordance with exemplary implementation of the present disclosure, is illustrated.
  • the computing device [200] may also implement a method for performing one or more corrective actions on one or more Network Functions (NFs) utilising the system.
  • the computing device [200] itself implements the method for performing one or more corrective actions on one or more Network Functions (NFs) using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
  • NFs Network Functions
  • the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information.
  • the hardware processor [204] may be, for example, a general-purpose microprocessor.
  • the computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
  • the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
  • ROM read only memory
  • a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
  • the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user.
  • an input device [214] may include alphanumeric and other keys, touch screen input means, etc.
  • a cursor controller [216] such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
  • the input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
  • the computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
  • the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
  • hard-wired circuitry may be used in place of or in combination with software instructions.
  • the system comprises at least one policy execution engine (PEEGN) [1088] and at least one database [308]. Further, the PEEGN [1088] comprises at least one transceiver unit [302], at least one processing unit [304] and at least one storage unit [306]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. Also, in FIG. 3 only a few units are shown, however, the system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity.
  • the system [300] is configured for automatic scaling of one or more nodes, with the help of the interconnection between the components/units of the system [300].
  • the transceiver unit [302] may receive at the policy execution engine (PEEGN) [1088], from a Network Function Virtualization Platform Decision and Analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes.
  • the one or more nodes may comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs).
  • the system [300] provides an intelligent scaling framework which helps to scale the VNFC/CNFC instances as per the traffic requirements.
  • the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis.
  • the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
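  • A minimal sketch of such a hysteresis check follows, assuming a simple breach-count criterion over a recent observation window; the disclosure states only that a hysteresis is evaluated against the scaling policy, so the criterion below is an assumption:

    def hysteresis_met(recent_breaches: list[bool], min_breaches: int) -> bool:
        # Count threshold breaches in the observed window; request scaling only
        # when the count reaches the assumed hysteresis criterion, so a single
        # transient spike does not trigger the PEEGN.
        return sum(recent_breaches) >= min_breaches

    # Example: three breaches in the last five samples meets a criterion of 3.
    assert hysteresis_met([True, False, True, True, False], min_breaches=3)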
  • the attributes associated with a policy as defined by a user comprise policyId, policyVersion, instanceId.
  • the PEEGN [1088] is a system that may create, manage and enforce the policies and rules to regulate the behaviour and the operations of the network function. Further, the PEEGN [1088] may ensure that the network function and its components may function and operate as per the predefined policies and rules.
  • the PEEGN [1088] calculates the required resources for any VNF/CNF, does a quota check and updates the PVIM [1050] based on affinity/anti-affinity and other policies which are defined at the PEEGN [1088].
  • the request received by the transceiver unit [302] at the PEEGN [1088] may be an INVOKE_POLICY event. It is to be noted that the events are generated based on the predefined policies and rules.
  • the INVOKE_POLICY event may be a trigger that may initiate any pre-defined policy to be executed for the one or more nodes. Considering an example, traffic load on a node increases and crosses a predefined threshold. The INVOKE_POLICY event may be triggered, and more resources may be allocated to handle the extra load, thereby ensuring the optimal performance of the node.
  • the INVOKE_POLICY event comprises attributes such as: policy Id, VIM Id, VNF Id, VNF Version, VNF Instance Id, host Id, policy Action (e.g., VNF-scale-out/VNFC-scale-out/healing/manual-VNFC-scale-out).
  • VIM Id refers to the identifier of a Virtualized Infrastructure Manager (VIM) instance on which the VNF/VNFC/CNF/CNFC is to be spawned for scale-out or from which the VNF/VNFC/CNF/CNFC instance needs to be removed for scale-in.
  • VIM Virtualized Infrastructure Manager
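  • For illustration, an INVOKE_POLICY payload built from the attributes listed above might look as follows; the rendered field names and all sample values are assumptions, not values from the disclosure:

    # Hypothetical INVOKE_POLICY payload assembled from the attributes named above.
    invoke_policy_event = {
        "policyId": "scale-policy-042",
        "vimId": "vim-01",                 # VIM on which to spawn/remove the instance
        "vnfId": "vnf-example",
        "vnfVersion": "2.1.0",
        "vnfInstanceId": "vnf-example-0007",
        "hostId": "host-az1-ha2-17",
        "policyAction": "VNFC-scale-out",  # or VNF-scale-out / healing / manual-VNFC-scale-out
    }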
  • the automatic scaling policy may refer to rules and policies that may automatically and/or dynamically allocate or de-allocate resources based on the demand to ensure optimal performance of the one or more nodes.
  • automatic-scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes.
  • the scale-in may refer to a process to reduce the number of active instances and the resources allocated to the network function, in response to the decreased demand and usage of the resources.
  • the scale-out may refer to the process where new instances are created to handle the workload on the existing instances as demand of the resources may increase.
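  • A minimal sketch of a scale-in/scale-out decision follows, under assumed utilisation thresholds; the disclosure defines the two directions but does not specify thresholds, so the percentages are illustrative:

    def scaling_direction(used: float, allocated: float,
                          scale_out_pct: float = 0.8,
                          scale_in_pct: float = 0.3) -> str:
        # Scale out when utilisation of the allocated quota is high, scale in
        # when it is low; the 80%/30% thresholds are assumptions.
        utilisation = used / allocated
        if utilisation >= scale_out_pct:
            return "scale-out"
        if utilisation <= scale_in_pct:
            return "scale-in"
        return "no-op"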
  • the transceiver unit [302] may fetch at the PEEGN [1088], a set of data relating to the one or more nodes. Further, to fetch the set of data relating to the one or more nodes, the transceiver unit [302] may transmit from the PEEGN [1088] a request to one or more node components associated with the one or more nodes to fetch the set of data related to the one or more nodes. In an implementation, the request to fetch the set of data related to the one or more nodes may be a GET_VNF_DETAILS event.
  • the PEEGN [1088] may send the GET_VNF_DETAILS request to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components.
  • the one or more node catalogues are VNF catalogues or CNF catalogues.
  • the set of data may include, but is not limited to, performance status, workload, capacity and resource consumption. It is to be noted that the above-mentioned set of data is exemplary and in no manner limits the scope of the present disclosure. Further, the set of data may include any other data obvious to the person skilled in the art to implement the solution of the present disclosure.
  • a storage unit [306] may save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308]. Further, in an example, the GET_VNF_DETAILS event is associated with attributes such as VNF Id, VNF Version, VNF Description, Product Id and VNFC/CNFC data.
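  • For illustration, a GET_VNF_DETAILS response assembled from the attributes named above might look as follows; the field names and sample values are assumptions:

    # Hypothetical GET_VNF_DETAILS response; all values are illustrative.
    get_vnf_details_response = {
        "vnfId": "vnf-example",
        "vnfVersion": "2.1.0",
        "vnfDescription": "Example virtual network function",
        "productId": "prod-001",
        "vnfcData": [
            {"vnfcId": "example-worker", "instances": 3, "state": "ACTIVE"},
        ],
    }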
  • the transceiver unit [302] may send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050].
  • the PVIM [1050] maintains the complete inventory including the physical and virtual resources as well as the VNFs/CNFs instantiated by the platform.
  • the VNF/CNF inventory is created as and when they are instantiated. It maintains the total resources and their details, which are reserved for the VNF/CNF during the instantiation.
  • when the PVIM [1050] detects any new physical resources added to the VIMs, the added physical resources are translated to virtual resources and are added to the free resource pool maintained at the PVIM [1050].
  • the PEEGN [1088] may send a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to get the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and the one or more available resources for the one or more nodes.
  • the one or more current used resources by each of the one or more nodes may refer to the resources currently used by the one or more nodes from the allocated resources.
  • the allocated resource quota for each of the one or more nodes may refer to the predefined limit of the resources allocated to the one or more nodes.
  • after successful instantiation of a VNF or a CNF instance, the PEEGN [1088] allocates resources for the VNF/CNF components and asks the inventory to reserve the resources for the same.
  • the resources reserved at the time of instantiation define the allocated resource quota for each VNF and CNF instance.
  • the one or more available resources for the one or more nodes may represent the amount of available resources in the network for the one or more node components.
  • the PEEGN [1088] sends a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to provide available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA) and the used & free resources in each HA.
  • the Availability Zones (AZs) are end-user-visible logical abstractions for partitioning of the cloud services.
  • the logical partition comprises block storage, compute services and network services.
  • the logical partition requires a particular host to be present in an Availability Zone.
  • AZ are isolated or separated data centres located within specific regions in which cloud services originate and operate.
  • AZ refers to a specific or an isolated location in a data centre or in a cloud environment.
  • the Host Aggregate refers to an aggregate or group of physical hosts in a virtualised environment. Further, HA are used to define where specific virtual network functions (VNFs) can be deployed. HA can be created based on the hardware profile of the physical hosts. Further, each Availability zone may have an association of multiple host aggregates, which in turn may have a list of hosts associated with it.
  • the attributes of the PROVIDE_VIM_AZ_HA_DETAIL event comprise VIM Id, VNF Id and VNF Version.
  • VIM Id refers to the identifier of a Virtualized Infrastructure Manager (VIM) instance on which the VNF/VNFC/CNF/CNFC is to be spawned for scale-out or from which the VNF/VNFC/CNF/CNFC instance needs to be removed for scale-in.
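  • An assumed shape of a PROVIDE_VIM_AZ_HA_DETAIL response is sketched below: per-AZ host aggregates with used and free resources, as described above. The field names and values are illustrative assumptions:

    # Hypothetical PROVIDE_VIM_AZ_HA_DETAIL response; values are illustrative.
    provide_vim_az_ha_detail = {
        "vimId": "vim-01",
        "availabilityZones": [
            {
                "azId": "az-1",
                "hostAggregates": [
                    {
                        "haId": "ha-compute-highmem",
                        "hosts": ["host-17", "host-18"],
                        "used": {"vcpus": 96, "ramGb": 512, "diskGb": 4000},
                        "free": {"vcpus": 32, "ramGb": 256, "diskGb": 2000},
                    },
                ],
            },
        ],
    }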
  • the processing unit [304] may analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy.
  • the automatic scale constraints are the total virtual resources that a particular VNF/VNFC/CNF/CNFC can consume, considering its instantiation as well as automatic scaling requirements.
  • the system [300] ensures that the VNF/VNFC/CNF/CNFC do not consume more than the resources specified under any circumstances.
  • the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size which can be allocated to the one or more nodes to scale.
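  • A minimal sketch of such a constraint check follows, assuming the three caps named above (CPUs, virtual memory, disk); the field names are assumptions:

    from dataclasses import dataclass

    @dataclass
    class ScalingConstraints:
        max_vcpus: int    # total number of CPUs the node may consume
        max_ram_gb: int   # virtual memory size cap
        max_disk_gb: int  # disk size cap

    def within_constraints(requested: dict, allocated: dict,
                           caps: ScalingConstraints) -> bool:
        # A scale-out is admissible only if the current allocation plus the
        # new request stays under every cap, so no node can exceed its
        # automatic scale constraints under any circumstances.
        return (allocated["vcpus"] + requested["vcpus"] <= caps.max_vcpus
                and allocated["ram_gb"] + requested["ram_gb"] <= caps.max_ram_gb
                and allocated["disk_gb"] + requested["disk_gb"] <= caps.max_disk_gb)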
  • the processing unit [304] after the analysis at the PEEGN [1088], may use the automatic scaling policy to scale with the required resources based on one or more affinities, one or more anti-affinities, all dependents, and deployment flavors.
  • during the onboarding of a VNF/CNF, policy rules specific to VNF/CNF instantiation, VNFC/CNFC scaling, VNF/CNF healing, VNFC/CNFC dependencies, VNFC/CNFC affinity/anti-affinity and VNF/CNF termination are created. These rules are persisted by the MANO platform as shown in FIG. 1. As would be understood, the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors are essential rules that may manage the placement of resources in a scalable, fault-tolerant and optimized network environment.
  • the one or more affinities, the one or more anti-affinities, and all dependents may determine, respectively, that one or more resources may run together, one or more resources may not run together, and one or more resources are dependent on other one or more resources.
  • the deployment flavor refers to the compute, memory, and storage capacity required by a VNF/CNF instance. Therefore, based on the one or more anti-affinities, all dependents, and the deployment flavors, the automatic scaling policy may be invoked for the optimized performance of all the resources or the one or more nodes in the network.
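  • A placement filter over the rule kinds named above might look like the sketch below; the rule dictionary layout is an assumption, and only the three rule kinds (affinity, anti-affinity, dependents) come from the disclosure:

    def host_is_placeable(candidate_host: str, component: str,
                          rules: dict, placement: dict) -> bool:
        # Affinity peers must end up on the same host as this component.
        for peer in rules.get("affinity", {}).get(component, []):
            if placement.get(peer) not in (None, candidate_host):
                return False
        # Anti-affinity peers must never share a host with this component.
        for peer in rules.get("anti_affinity", {}).get(component, []):
            if placement.get(peer) == candidate_host:
                return False
        # Dependents: components this one depends on must already be placed.
        for dep in rules.get("dependents", {}).get(component, []):
            if dep not in placement:
                return False
        return True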
  • the transceiver unit [302] may transmit from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources.
  • the PEEGN [1088] shall trigger the PVIM [1050] to reserve the resources in its inventory using a RESERVE_RESOURCES_IN_VIM_AZ_HA event.
  • the request for reserving the one or more resources may be a request to allocate the one or more resources to the one or more nodes to ensure the performance, and the availability of the one or more nodes.
  • the request for unreserving the one or more resources may be a request to de-allocate the one or more resources to avoid flapping of the resources or to migrate the one or more resources from one instance (or site) to another instance.
  • the PVIM [1050] reserves or unreserves resources on the VIM which was selected by the PEEGN [1088] and sends a response to the PEEGN [1088]. Further, the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes.
  • the token is a header that may be used to validate the identity of the nodes.
  • the response may have a token that may be validated, e.g., tk55f9a1-19a6-adf3-8514-b0b150asdfd0 (UUID) is an example of a token, which is a universally unique identifier.
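  • A minimal sketch of such a token check follows; the disclosure only says the token header validates the request, so matching against a set of PVIM-issued tokens is an assumption:

    def token_is_valid(token: str, issued_tokens: set[str]) -> bool:
        # Accept only tokens previously issued by the PVIM for this
        # reservation; the token carries a UUID-style value in a header.
        return token in issued_tokens

    # Example usage with the token value quoted above.
    issued = {"tk55f9a1-19a6-adf3-8514-b0b150asdfd0"}
    assert token_is_valid("tk55f9a1-19a6-adf3-8514-b0b150asdfd0", issued)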
  • the transceiver unit [302] may trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources.
  • the node manager may be a VNF/CNF life cycle manager (VNF-LM [1042]/CNF-LM [1052]).
  • VNF-LM [1042] may be a microservice responsible for lifecycle management of VNF instances.
  • the VNF-LM [1042] may instantiate/terminate/scale resources.
  • the CNF-LM [1052] may be responsible for creating a CNF or individual CNFC instances. Also, it may be responsible for healing and scaling out CNFs or individual CNFCs.
  • the trigger, by the transceiver unit [302], from the PEEGN [1088], for the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes.
  • the PEEGN [1088] upon successful receipt of the response from the PVIM [1050], may send the one or more tokens, received from the PVIM [1050], to the node manager (i.e., VNF-LM [1042]/CNF-LM [1052]) to automatically scale the one or more nodes.
  • the trigger may be an event, such as a TRIGGER_VNF_SCALING/TRIGGER_VNFC_SCALING event.
  • the trigger events will comprise the tokens received from the PVIM [1050] in the response. These tokens are passed to the VNF-LM [1042]/CNF-LM [1052] to validate the request.
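  • For illustration, building and sending such a trigger event might look as follows; node_manager.send and the event layout are assumed interfaces, not part of the disclosure:

    def trigger_scaling(node_manager, scope: str, tokens: dict) -> dict:
        # Build a TRIGGER_VNF_SCALING or TRIGGER_VNFC_SCALING event carrying
        # the PVIM-issued tokens so the lifecycle manager can validate the
        # request before scaling (scope is "VNF" or "VNFC").
        event = {"event": f"TRIGGER_{scope}_SCALING", "tokens": tokens}
        return node_manager.send(event)  # assumed transport interface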
  • the transceiver unit [302] may receive at the PEEGN [1088], an acknowledgement response from the node manager, and the transceiver unit [302] may transmit from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
  • the method [400] is performed by the system [300]. Also, as shown in FIG. 4, the method [400] initiates at step [402]. At step [404], the method [400] comprises receiving, by a transceiver unit [302] at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes.
  • the one or more nodes may comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs).
  • the policy and the set of data related to historical instances of breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
  • the PEEGN [1088] is a system that may create, manage and enforce the policies and rules to regulate the behaviour and the operations of the network function. Further, the PEEGN [1088] may ensure that the network function and its components may function and operate as per the predefined policies and rules.
  • the PEEGN [1088] calculates the required resources for any VNF/CNF, does a quota check and updates the PVIM [1050] based on affinity/anti-affinity and other policies which are defined at the PEEGN [1088].
  • the request received by the transceiver unit [302] at the PEEGN [1088] may be an INVOKE_POLICY event. It is to be noted that the events are generated based on the predefined policies and rules.
  • the INVOKE_POLICY event may be a trigger that may initiate any pre-defined policy to be executed for the one or more nodes. Considering an example, traffic load on a node increases and crosses a predefined threshold. The INVOKE_POLICY event may be triggered, and more resources may be allocated to handle the extra load, thereby ensuring the optimal performance of the node.
  • the automatic scaling policy may refer to rules and policies that may automatically and/or dynamically allocate or de-allocate resources based on the demand to ensure optimal performance of the one or more nodes.
  • automatic-scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes. As would be understood, the scale-in may refer to a process to reduce the number of active instances and the resources allocated to the network function, in response to the decreased demand and usage of the resources.
  • the scale-out may refer to the process where new instances are created to handle the workload on the existing instances as demand of the resources may increase.
  • the method [400] comprises fetching, by the transceiver unit [302] at the PEEGN [1088], a set of data relating to the one or more nodes. Further, to fetch the set of data relating to the one or more nodes, the transceiver unit [302] may transmit from the PEEGN [1088] a request to one or more node components associated with the one or more nodes to fetch the set of data related to the one or more nodes.
  • the request to fetch the set of data related to the one or more nodes may be a GET_VNF_DETAILS event.
  • the PEEGN [1088] may send the GET_VNF_DETAILS request to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components.
  • the one or more node catalogues are VNF catalogues or CNF catalogues.
  • the set of data may include, but is not limited to, performance status, workload, capacity and resource consumption. It is to be noted that the above-mentioned set of data is exemplary and in no manner limits the scope of the present disclosure.
  • a storage unit [306] may save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
  • the method [400] comprises sending, by the transceiver unit [302], from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050].
  • the PEEGN [1088] may send a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to get the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and the one or more available resources for the one or more nodes.
  • the one or more current used resources by each of the one or more nodes may refer to the resources currently used by the one or more nodes from the allocated resources.
  • the allocated resource quota for each of the one or more nodes may refer to the predefined limit of the resources allocated to the one or more nodes.
  • after successful instantiation of a VNF or a CNF instance, the PEEGN [1088] allocates resources for the VNF/CNF components and asks the inventory to reserve the resources for the same.
  • the resources reserved at the time of instantiation define the allocated resource quota for each VNF and CNF instance.
  • the one or more available resources for the one or more nodes may represent the amount of available resources in the network for the one or more node components.
  • the PEEGN [1088] sends a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to provide available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA) and the used & free resources in each HA.
  • the method [400] comprises analysing, by a processing unit [304] at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy.
  • the automatic scale constraints are the total virtual resources that a particular VNF/VNFC/CNF/CNFC can consume, considering its instantiation as well as automatic scaling requirements.
  • the system [300] ensures that the VNF/VNFC/CNF/CNFC do not consume more than the resources specified under any circumstances. This is to ensure that no VNF/VNFC/CNF/CNFC instance is able to hog the complete network resources.
  • the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size for the one or more nodes to scale.
  • the processing unit [304] after the analysis at the PEEGN [1088], may use the automatic scaling policy to scale with the required resources based on one or more affinities, one or more anti-affinities, all dependents, and deployment flavors.
  • policy rules specific to VNF/CNF instantiation, VNFC/CNFC scaling, VNF/CNF healing, VNFC/CNFC dependencies, VNFC/CNFC affinity/anti-affinity and VNF/CNF termination are created. These rules are persisted by the platform.
  • the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors are essential tools that may manage the placement of resources in a scalable, fault-tolerant and optimized network environment. Further, the one or more affinities, the one or more anti-affinities, and all dependents may determine, respectively, that one or more resources may run together, one or more resources may not run together, and one or more resources are dependent on other one or more resources.
  • the deployment flavor refers to the compute, memory, and storage capacity required by a VNF/CNF instance. Therefore, based on the one or more anti-affinities, all dependents, and the deployment flavors, the automatic scaling policy may be invoked for the optimized performance of all the resources or the one or more nodes in the network.
  • the method [400] comprises transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving the one or more resources.
  • the request for reserving the one or more resources may be a request to allocate the one or more resources to the one or more nodes to ensure the performance, and the availability of the one or more nodes.
  • the request for unreserving the one or more resources may be a request to de-allocate the one or more resources to avoid flapping of the resources or to migrate the one or more resources from one instance (or site) to another instance.
  • the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes.
  • the token is a header that may be used to validate one or more nodes.
  • the response may have token that may be validated, for e.g. tk55f9al-19a6-adf3-8514-b0bl50asdfd0 (UUID).
  • the trigger events will comprise the tokens received from the PVIM [1050] in the response. These tokens are passed to the VNF-LM [1042]/CNF-LM [1052] to validate the request.
  • the method [400] comprises triggering, by the transceiver unit [302], from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources.
  • the node manager may be a VNF/CNF life cycle manager (VNF- LM [1042]/CNF-LM [1052]).
  • VNF-LM [1042] may be a microservice responsible for lifecycle management of VNF instances.
  • the VNF-LM [1042] may instantiate/terminate/scale resources.
  • the CNF-LM [1052] may be responsible for creating a CNF or individual CNFC instances. It may also be responsible for healing and scaling out CNFs or individual CNFCs.
  • the trigger, by the transceiver unit [302], from the PEEGN [1088], for the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes.
  • the PEEGN [1088], upon successful receipt of the response from the PVIM [1050], may send the one or more tokens, received from the PVIM [1050], to the node manager (i.e., the VNF-LM [1042]/CNF-LM [1052]) to automatically scale the one or more nodes.
  • the trigger may be an event, such as a TRIGGER_VNF_SCALING/TRIGGER_VNFC_SCALING event.
  • the transceiver unit [302] may receive at the PEEGN [1088], an acknowledgement response from the node manager and the transceiver unit [302] may transmit from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
  • FIG. 5 illustrates an exemplary session flow diagram for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
  • the session [500] is performed at the system [300].
  • the policy execution engine (PEEGN) [1088] may receive a request for reservation and allocation from an Analytics.
  • the Analytics may be a Network Function Virtualization Platform Decision and Analytics (NPDA) [1096].
  • the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of the breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event.
  • the NPDA [1096] based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
  • the PEEGN [1088] may handle events for fetching the VNF/CNF details and resource details from the Inventory (PVIM [1050]).
  • the inventory of the VNF/CNF may maintain the details of the VNF/CNF.
  • the details of the VNF/CNF may include such as, but not limited to, a VNF/CNF name, a VNF/CNF version, etc.
  • the request to fetch the set of data related to the one or more nodes may be a GET_VNF_DETAILS event.
  • the PEEGN [1088] may send the GET_VNF_DETAILS request to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components.
  • the one or more node catalogues are a VNF catalogue or a CNF catalogue.
  • the set of data may include, but is not limited to, performance status, workload, capacity and resource consumption.
  • the PEEGN [1088] may save the response in the database [308] for further processing.
  • the PEEGN [1088] may send a request to the PVIM [1050] to reserve, allocate, or unreserve resources.
  • the PEEGN [1088] may consult the PVIM [1050] to check the current used resources against the total allocated quota. For this, the PEEGN [1088] may send a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050], which provides the available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA) and the used and free resources in each HA.
  • the PEEGN [1088] may allow the logical handling of automatic scale constraints.
  • the automatic scale constraints may include, for example, but not limited to, totalNoOfCPU: 120 (in cores), vMemorySize: 100 (GB), and diskSize: 23.
  • the PEEGN [1088] after processing the scaling request based on automatic scale constraints, may save the updated details for the VNF/CNF in the database [308] for further processing. Also, the PEEGN [1088] may repeat the entire session until the request of all the events is served.
  • the PEEGN [1088] may receive and generate a tokenizer response for the automatic scaling response.
  • the tokenizer response may comprise one or more tokens.
  • the token is a header that is used to validate the one or more nodes. All the responses will have a token that will be validated.
  • the PEEGN [1088] may send the acknowledgement response to the NPDA [1096].
  • the present disclosure may further relate to a non-transitory computer readable storage medium storing one or more instructions for automatic scaling of one or more nodes, the instructions include executable code which, when executed by one or more units of a system [300], causes a transceiver unit [302] of the system [300] to receive at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes. Further, the executable code when executed causes the transceiver unit [302] to fetch at the PEEGN [1088], a set of data relating to the one or more nodes.
  • the executable code when executed causes the transceiver unit [302] to send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050].
  • the executable code when further executed causes a processing unit [304] of the system [300] to analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy.
  • the executable code when executed causes the transceiver unit [302] to transmit from the PEEGN [1088], to the PVIM [1050], a request for reserving the one or more resources.
  • the executable code when executed causes the transceiver unit [302] to trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for reserving the one or more resources.
  • the present disclosure provides a technically advanced solution for automatic scaling of one or more nodes. More particularly, the present solution applies automatic scale constraints based on policies that are applicable to VNF/VNFC/CNF/CNFC for automatic scaling of resources. Further, the present solution leads to zero data loss policies while VNF/VNFC/CNF/CNFC resources are scaling up. The present solution also supports event driven scaling. Additionally, automatic scale constraints address several critical problems in the MANO architecture, including excessive provisioning of resources, insufficient provisioning of resources, resource failures, resource mismanagement, performance degradation, conflicts during reservation and allocation of resources, and cost increment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a method and system for automatic scaling of nodes. The method comprises receiving a request for executing an automatic scaling policy for the nodes and fetching a set of data relating to the nodes. The method comprises sending a request to get current used resources, the allocated resource quota, and available resources, and analysing a demand for one or more resources, for automatic scaling, based on the current used resources, the allocated resource quota, the available resources for the nodes, a set of automatic scaling constraints data, and the automatic scaling policy. The method further comprises transmitting a request for one of reserving and unreserving of the one or more resources and triggering the automatic scaling request based on a response on the request for one of the reserving and the unreserving of the one or more resources.

Description

METHOD AND SYSTEM FOR AUTOMATIC SCALING OF ONE OR MORE NODES
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of network management. More particularly, the present disclosure may relate to a method and system for automatic scaling of one or more nodes.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] In a communication network, such as a 5G communication network, different microservices perform different services, jobs, and tasks. Different microservices have to perform their jobs based on operational parameters and policies in such a way that their own operations and the service network operations are not affected. However, in a MANO system architecture, during service operations, fulfilling the requirements of policies and operational parameters requires provisioning sufficient resources for managing the virtual network function (VNF/VNFC) and/or containerized network function (CNF/CNFC) components to handle service requests coming into the network. The Policy Execution Engine (PEEGN) provides functionality to support dynamic requirements of resource management and network service orchestration in the virtualized and containerized network. The PEEGN service stores and provides policies for resource security, availability, and scalability of VNFs. It executes automatic scaling and healing functionality of VNFs and automatic scaling of CNFs. For implementing proper resource allocation, there are challenges, such as excessive provisioning of resources, insufficient provisioning of resources, resource failures, resource mismanagement, performance degradation, conflicts during reservation and allocation of resources, unavailability of the Policy Execution Engine service, time consumed in reservation and allocation of VNF/VNFC/CNFC/CNF resources, and cost increment, which may occur in the network and affect the network performance and operational efficiency.
[0004] Hence, in view of these and other existing limitations, there arises an imperative need to provide an efficient solution to overcome the above-mentioned and other limitations and to provide a method and a system for automatic scaling of one or more nodes with automatic scaling constraints.
SUMMARY
[0005] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0006] An aspect of the present disclosure may relate to a method for automatic scaling of one or more nodes. The method comprises receiving, by a transceiver unit at a Policy Execution Engine (PEEGN), from a Network Function Virtualization Platform Decision and Analytics (NPDA), a request for executing an automatic scaling policy for the one or more nodes. Further, the method comprises fetching, by the transceiver unit at the PEEGN, a set of data relating to the one or more nodes. The method further comprises sending, by the transceiver unit, from the PEEGN a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM). Further, the method comprises analysing, by a processing unit at the PEEGN, a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. Furthermore, the method comprises transmitting, by the transceiver unit from the PEEGN, to the PVIM, a request for one of reserving and unreserving the one or more resources. Thereafter, the method comprises triggering, by the transceiver unit, from the PEEGN, the automatic scaling request to a node manager, based on a response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources.
[0007] In an exemplary aspect of the present disclosure, the one or more nodes comprise at least one of virtual network functions (VNFs), virtual network function components (VNFCs), container network functions (CNFs), and container network function components (CNFCs).
[0008] In an exemplary aspect of the present disclosure, fetching, by the transceiver unit at the PEEGN node, the set of data relating to the one or more nodes comprises at least one of: transmitting, by the transceiver unit, from the PEEGN, a request to one or more node components associated with the one or more nodes to fetch the set of data related to the one or more nodes; and saving, by a storage unit, at the PEEGN, the set of data related to the one or more nodes in a database.
[0009] In an exemplary aspect of the present disclosure, the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size with the one or more nodes.
[0010] In an exemplary aspect of the present disclosure, automatic-scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes.
[0011] In an exemplary aspect of the present disclosure, the response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes.
[0012] In an exemplary aspect of the present disclosure, the triggering, by the transceiver unit, from the PEEGN, the automatic scaling request to a node manager comprises the one or more tokens for each of the one or more nodes.
[0013] In an exemplary aspect of the present disclosure, prior to transmitting, by the transceiver unit from the PEEGN, to the PVIM, a request for reserving the one or more resources, the method comprises updating, by the storage unit, at the PEEGN, the one or more current used resources by each of the one or more nodes, in the database.
[0014] In an exemplary aspect of the present disclosure, the method further comprises receiving, by the transceiver unit, at the PEEGN, an acknowledgement response from the node manager and transmitting, by the transceiver unit, from the PEEGN, a response to the NPDA, of the automatic scaling of the one or more nodes.
[0015] Another aspect of the present disclosure may relate to a system for automatic scaling of one or more nodes. The system comprises a transceiver unit configured to receive at a Policy Execution Engine (PEEGN), from a Network Function Virtualization Platform Decision and Analytics (NPDA), a request for executing an automatic scaling policy for the one or more nodes. The transceiver unit is further configured to fetch at the PEEGN, a set of data relating to the one or more nodes. Further, the transceiver unit is configured to send from the PEEGN a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM). Further, the system comprises a processing unit, configured to analyse at the PEEGN, a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. Furthermore, the transceiver unit is configured to transmit from the PEEGN, to the PVIM, a request for one of reserving and unreserving the one or more resources. Moreover, the transceiver unit is configured to trigger from the PEEGN, the automatic scaling request to a node manager, based on a response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources.
[0016] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing one or more instructions for automatic scaling of one or more nodes, the instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit of the system to receive at a Policy Execution Engine (PEEGN), from a Network Function Virtualization Platform Decision and Analytics (NPDA), a request for executing an automatic scaling policy for the one or more nodes. Further, the executable code when executed causes the transceiver unit to fetch at the PEEGN, a set of data relating to the one or more nodes. Further, the executable code when executed causes the transceiver unit to send from the PEEGN a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM). The executable code when further executed causes a processing unit of the system to analyse at the PEEGN, a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. Furthermore, the executable code when executed causes the transceiver unit to transmit from the PEEGN, to the PVIM, a request for one of reserving and unreserving the one or more resources. Moreover, the executable code when executed causes the transceiver unit to trigger from the PEEGN, the automatic scaling request to a node manager, based on a response from the PVIM on the request for one of the reserving and the unreserving of the one or more resources.
OBJECT OF THE DISCLOSURE
[0017] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0018] It is an object of the present disclosure to provide a method and a system for automatic scaling of one or more nodes.
[0019] It is another object of the present disclosure to provide a solution to apply automatic scale constraints based on policies that are applicable to VNF/VNFC/CNF/CNFC for automatic scaling of resources.
[0020] It is yet another object of the present disclosure to provide a solution to apply automatic scale constraints based on affinity, anti-affinity, dependents and deployment flavor.
[0021] It is yet another object of the present disclosure to provide a solution that leads to zero data loss policies while VNF/VNFC/CNF/CNFC resources are scaling up.
[0022] It is yet another object of the present disclosure to provide a solution that supports event driven scaling.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0024] FIG. 1 illustrates an exemplary block diagram of a management and orchestration (MANO) architecture, in accordance with an exemplary implementation of the present disclosure.
[0025] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0026] FIG. 3 illustrates an exemplary block diagram of a system for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
[0027] FIG. 4 illustrates an exemplary flow diagram of a method for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
[0028] FIG. 5 illustrates an exemplary of session flow diagram for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure.
[0029] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0030] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0031] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0032] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0033] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0034] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0035] As used herein, a "processing unit" or "processor" or "operating processor" includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0036] As used herein, "a user equipment", "a user device", "a smart-user-device", "a smart-device", "an electronic device", "a mobile device", "a handheld device", "a wireless communication device", "a mobile communication device", "a communication device" may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0037] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0038] As used herein “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0039] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuit, etc.
[0040] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0041] As used herein, Physical and Virtual Inventory Manager (PVIM) module maintains the inventory and its resources. After getting a request to reserve resources from PEEGN, PVIM adds up the resources consumed by particular network function as used resources and removes them from free resources. Further, the PVIM updates this in the NoSQL database.
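By way of illustration only, the PVIM bookkeeping described in paragraph [0041] might resemble the following Python sketch. All class, method, and field names here are hypothetical stand-ins; the disclosure only states that the PVIM moves consumed resources from the free pool to the used pool and persists the result in a NoSQL database.

```python
# Hypothetical sketch of PVIM resource bookkeeping (names are invented).
class PvimInventory:
    def __init__(self, free_cpu, free_mem_gb, free_disk_gb):
        self.free = {"cpu": free_cpu, "mem_gb": free_mem_gb, "disk_gb": free_disk_gb}
        self.used = {"cpu": 0, "mem_gb": 0, "disk_gb": 0}

    def reserve(self, demand):
        """Move `demand` from the free pool to the used pool if capacity allows."""
        if any(demand[k] > self.free[k] for k in demand):
            return False  # insufficient free resources; reservation rejected
        for k, v in demand.items():
            self.free[k] -= v
            self.used[k] += v
        self._persist()
        return True

    def unreserve(self, released):
        """Return previously reserved resources to the free pool."""
        for k, v in released.items():
            self.used[k] -= v
            self.free[k] += v
        self._persist()

    def _persist(self):
        pass  # placeholder for the NoSQL database update mentioned above

inventory = PvimInventory(free_cpu=120, free_mem_gb=512, free_disk_gb=1000)
assert inventory.reserve({"cpu": 8, "mem_gb": 16, "disk_gb": 40})
```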
[0042] As used herein, the Container Network Function (CNF) Life Cycle Manager (CNF-LM) may capture the details of vendors, CNFs, and Container Network Function Components (CNFCs) via create, read, and update APIs exposed by the service itself. The captured details are stored in a database and can be further used by the SA service. The CNF-LM may create CNF or individual CNFC instances. The CNF-LM may scale out the CNFs or individual CNFCs.
[0043] As used herein, Policy Execution Engine (PEEGN) provides a network function virtualisation (NFV) software defined network (SDN) platform functionality to support dynamic requirements of resource management and network service orchestration in the virtualized network. Further, the PEEGN is involved during CNF instantiation flow to check for CNF policy and to reserve resources required to instantiate CNF at PVIM. The PEEGN supports scaling policy for CNFC.
[0044] As used herein, Capacity Manager Platform (CMP) creates a task to monitor the performance metrics data received for that VNF, VNFC and CNFC. Wherever there is a threshold breach, CMP sends a trigger to NFV Platform and Decision Analytics (NPDA).
[0045] As discussed in the background section, the current known solutions have several shortcomings. For implementing proper resource allocation, there are challenges such as excessive provisioning of resources, insufficient provisioning of resources, resource failures, resource mismanagement, performance degradation, conflicts during reservation and allocation of resources, unavailability of the Policy Execution Engine service, time consumed in reservation and allocation of VNF/VNFC/CNFC/CNF resources, and cost increment. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for automatic scaling of one or more nodes. More particularly, the present disclosure provides a solution to apply automatic scale constraints based on policies that are applicable to VNF/VNFC/CNF/CNFC for automatic scaling of resources. Further, the present disclosure provides a solution to apply automatic scale constraints based on affinity, anti-affinity, dependents and deployment flavor. Further, the present disclosure provides a solution that leads to zero data loss policies while VNF/VNFC/CNF/CNFC resources are scaling up. Furthermore, the present disclosure provides a solution that supports event driven scaling.
[0046] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0047] Referring to FIG. 1, an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform [100], in accordance with an exemplary implementation of the present disclosure, is illustrated. The MANO architecture [100] may be developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/service(s), etc. The MANO architecture [100] deploys the network node(s) in the form of Virtual Network Functions (VNFs) and Cloud-native/Container Network Functions (CNFs). The system as provided by the present disclosure may comprise one or more components of the MANO architecture [100]. The MANO architecture [100] may be used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendors' CNFs and VNFs to the platform.
[0048] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer [102], a Network Function Virtualization (NFV) and Software Defined Network (SDN) Design Function module [104], a Platform Foundation Services module [106], a Platform Core Services module [108] and a Platform Resource Adapters and Utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0049] The NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] may be responsible for deciding on which server of the communication network a microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] may be responsible for determining which sequence is to be followed for executing a process, for example, in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1 and P2, etc. The VNF catalogue [1044] stores the metadata of all the VNFs (and also CNFs in some cases). The network services catalogue [1046] stores the information of the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/network functions (NFs)) that must be applied to a specific networked data packet. The physical and virtual inventory manager (PVIM) [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] may be used for the CNFs' lifecycle management.
[0050] The platform foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070]. The microservices elastic load balancer [1062] may be used for maintaining the load balancing of the requests for the services. The identity and access manager [1064] may be used for logging purposes. The command line interface (CLI) [1066] may be used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] may be responsible for keeping the logs of every service. These logs are generated by the MANO platform [100] and are used for debugging purposes. The event routing manager [1070] may be responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[0051] The platform core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager (CMM) [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a Platform Schedulers and Cron Jobs (PSC) service [1100], a VNF backup & restore manager [1102], a microservice auditor [1104], and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager [1084] may be responsible for supervising the alarms the vendor may be generating. The performance manager [1086] may be responsible for managing the performance counters. The Policy Execution Engine (PEEGN) [1088] may be responsible for managing all of the policies. The capacity monitoring manager (CMM) [1090] may be responsible for sending the request to the PEEGN [1088]. The release management (mgmt.) repository (RMR) [1092] may be responsible for managing the releases and the images of all of the vendor's network nodes. The configuration manager & GCT [1094] manages the configuration and GCT of all the vendors. The NFV Platform Decision Analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It may be further noted that the policy execution engine (PEEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] may be a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs (PSC) service [1100] schedules tasks such as, but not limited to, triggering of an event, traversing the network graph, etc. The VNF backup & restore manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The microservice auditor [1104] audits the microservices. For example, in a hypothetical case, instances not instantiated by the MANO architecture [100] may be using the network resources. In such cases, the microservice auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100]. The audit assures that the services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] may be used for newer instances that are spawning.
[0052] The platform resource adapters and utilities module [112] further comprises a platform external API adapter and gateway [1122], a generic decoder and indexer (XML, CSV, JSON) [1124], a service adapter [1126], an API adapter [1128], and an NFV gateway [1130]. The platform external API adapter and gateway [1122] may be responsible for handling the external services (to the MANO platform [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in the XML, CSV, or JSON format. The service adapter [1126] may be the interface provided between the telecom cloud and the MANO architecture [100] for communication. The API adapter [1128] may be used to connect with the virtual machines (VMs). The NFV gateway [1130] may be responsible for providing the path to each service going to/incoming from the MANO architecture [100].
[0053] The service adapter (SA) [1126] is a microservices-based system designed to deploy and manage Container Network Functions (CNFs) and their components (CNFCs) across nodes. The SA [1126] offers REST endpoints for key operations, including uploading container images to a registry, terminating CNFC instances, and creating volumes and networks. CNFs, which are network functions packaged as containers, may consist of multiple CNFCs. The SA [1126] facilitates the deployment, configuration, and management of these components by interacting with API, ensuring proper setup and scalability within a containerized environment. This approach provides a modular and flexible framework for handling network functions in a virtualized network setup.
[0054] Referring to FIG. 2, an exemplary block diagram of a computing device [200] (also referred herein as a computer system [200]) upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure, is illustrated. In an implementation, the computing device [200] may also implement a method for performing one or more corrective actions on one or more Network Functions (NFs) utilising the system. In another implementation, the computing device [200] itself implements the method for performing one or more corrective actions on one or more Network Functions (NFs) using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0055] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0056] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0057] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [200], causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0058] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0059] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], a host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0060] Referring to FIG. 3, an exemplary block diagram of a system for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure, is illustrated. The system comprises at least one policy execution engine (PEEGN) [1088] and at least one database [308]. Further, the PEEGN [1088] comprises at least one transceiver unit [302], at least one processing unit [304] and at least one storage unit [306]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity.
[0061] The system [300] is configured for automatic scaling of one or more nodes, with the help of the interconnection between the components/units of the system [300].
[0062] In operation, the transceiver unit [302] may receive at the policy execution engine (PEEGN) [1088], from a Network Function Virtualization Platform Decision and Analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes. In an implementation, the one or more nodes may comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs). The system [300] provides an intelligent scaling framework which helps to scale the VNFC/CNFC instances as per the traffic requirements. Whenever a breach event is detected related to VNF/CNF instances, the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of the breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy. Further, in an example, the attributes associated with a policy as defined by a user comprise policyId, policyVersion, and instanceId.
[0063] As would be understood, the PEEGN [1088] is a system that may create, manage and enforce the policies and rules to regulate the behaviour and the operations of the network function. Further, the PEEGN [1088] may ensure that the network function and its components may function and operate as per the predefined policies and rules. The PEEGN [1088] calculates the required resources for any VNF/CNF, does a quota check, and updates the PVIM [1050] based on affinity/anti-affinity and other policies which are defined at the PEEGN [1088].
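Purely for illustration, the hysteresis gate described in paragraph [0062] could be sketched as below. The window length and breach count are invented parameters; the disclosure does not fix a particular hysteresis criterion.

```python
# Hypothetical hysteresis gate: request policy execution only when the
# threshold has been breached persistently, not on a single spike.
def hysteresis_met(breach_history, window=5, min_breaches=3):
    """True if at least `min_breaches` of the last `window` samples breached."""
    return sum(breach_history[-window:]) >= min_breaches

history = [0, 1, 1, 0, 1, 1]  # 1 = threshold breached in that sample
if hysteresis_met(history):
    print("NPDA requests the PEEGN to execute the scaling policy")
```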
[0064] In an implementation, the request received by the transceiver unit [302] at the PEEGN [1088] may be an INVOKE_POLICY event. It is to be noted that the events are generated based on the predefined policies and rules. The INVOKE_POLICY event may be a trigger that may initiate any pre-defined policy to be executed for the one or more nodes. Considering an example, the traffic load on a node increases and crosses a predefined threshold. The INVOKE_POLICY event may be triggered, and more resources may be allocated to handle the extra load, thereby ensuring the optimal performance of the node. In an example, the INVOKE_POLICY event comprises attributes such as: policy Id, VIM Id, VNF Id, VNF Version, VNF Instance Id, host Id, and policy Action (e.g., VNF-scale-out/VNFC-scale-out/healing/manual-VNFC-scale-out). Here, VIM Id refers to the identifier of a Virtualized Infrastructure Manager (VIM) instance on which the VNF/VNFC/CNF/CNFC is to be spawned for scale-out or from which the VNF/VNFC/CNF/CNFC instance needs to be removed for scale-in.
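For illustration, an INVOKE_POLICY event carrying the attributes listed above could be represented as in the sketch below. Only the attribute names come from the disclosure; the Python representation itself is an assumption.

```python
from dataclasses import dataclass

# Illustrative container for the INVOKE_POLICY attributes named above.
@dataclass
class InvokePolicyEvent:
    policy_id: str
    vim_id: str           # VIM on which to spawn (scale-out) or remove (scale-in)
    vnf_id: str
    vnf_version: str
    vnf_instance_id: str
    host_id: str
    policy_action: str    # e.g. "VNFC-scale-out", "healing"

event = InvokePolicyEvent(
    policy_id="pol-001", vim_id="vim-1", vnf_id="vnf-42",
    vnf_version="2.1", vnf_instance_id="inst-7", host_id="host-3",
    policy_action="VNFC-scale-out",
)
```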
[0065] Further, as would be understood, the automatic scaling policy may refer to rules and policies that may automatically and/or dynamically allocate or de-allocate resources based on the demand to ensure optimal performance of the one or more nodes. Further, automatic scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes. As would be understood, scale-in may refer to a process to reduce the number of active instances and the resources allocated to the network function in response to decreased demand and usage of the resources, whereas scale-out may refer to the process where new instances are created to handle the workload on the existing instances as the demand for the resources increases.
[0066] Continuing further, the transceiver unit [302] may fetch at the PEEGN [1088], a set of data relating to the one or more nodes. Further, to fetch the set of data relating to the one or more nodes, the transceiver unit [302] may transmit from the PEEGN [1088] a request to one or more node components associated with the one or more nodes to fetch the set of data related to the one or more nodes.
[0067] In an implementation, the request to fetch the set of data related to the one or more nodes may be a GET_VNF_DETAILS event. The PEEGN [1088] may send the GET_VNF_DETAILS to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components. In an implementation, the one or more node catalogues are a VNF catalogue or a CNF catalogue. Also, the set of data may include, but is not limited to, performance status, workload, capacity and resource consumption. It is to be noted that the above-mentioned set of data is exemplary and in no manner limits the scope of the present disclosure. Further, the set of data may include any other data obvious to the person skilled in the art to implement the solution of the present disclosure. Furthermore, a storage unit [306] may save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308]. Further, in an example, the GET_VNF_DETAILS event is associated with attributes such as VNF Id, VNF Version, VNF Description, Product Id and VNFC/CNFC data.
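The GET_VNF_DETAILS round trip could look like the following sketch: the PEEGN asks the catalogue for node details and caches the response in its database for further processing. The stub classes and their methods are illustrative assumptions, not interfaces defined by the disclosure.

```python
# Hypothetical GET_VNF_DETAILS round trip (stub interfaces are invented).
class CatalogueStub:
    def get_vnf_details(self, vnf_id, vnf_version):
        # Fields named in the disclosure: VNF Id, VNF Version,
        # VNF Description, Product Id and VNFC/CNFC data.
        return {"vnfId": vnf_id, "vnfVersion": vnf_version,
                "description": "example VNF", "productId": "prod-1",
                "components": ["vnfc-a", "vnfc-b"]}

class DatabaseStub:
    def __init__(self):
        self._store = {}

    def save(self, key, value):
        self._store[key] = value

catalogue, database = CatalogueStub(), DatabaseStub()
details = catalogue.get_vnf_details("vnf-42", "2.1")
database.save(("vnf-42", "2.1"), details)  # cached for further processing
```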
[0068] Continuing further, the transceiver unit [302] may send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050]. The PVIM [1050] maintains the complete inventory, including the physical and virtual resources as well as the VNFs/CNFs instantiated by the platform. The VNF/CNF inventory is created as and when they are instantiated. It maintains the total resources and their details which are reserved for the VNF/CNF during the instantiation. When the PVIM [1050] detects the addition of any new physical resources added to the VIMs, the added physical resources are translated to virtual resources and are added to the free resource pool maintained at the PVIM [1050].
[0069] In an implementation, the PEEGN [1088] may send a PROVIDE_VIM_AZ_HA_DETAIL to the PVIM [1050] to get the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and the one or more available resources for the one or more nodes. As would be understood, the one or more current used resources by each of the one or more nodes may refer to the resources currently used by the one or more nodes from the allocated resources. Further, the allocated resource quota for each of the one or more nodes may refer to the predefined limit of the resources allocated to the one or more nodes. After successful instantiation of a VNF or a CNF instance, the PEEGN [1088] allocates resources for the VNF/CNF components and asks the inventory to reserve the resources for the same. The resources reserved at the time of instantiation define the allocated resource quota for each VNF and CNF instance. Furthermore, the one or more available resources for the one or more nodes may represent the amount of available resources in the network for the one or more node components.
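Given the three inputs named above (current used resources, allocated quota, and available resources), the quota check reduces to simple arithmetic, as in this sketch. The field names and figures are assumptions for illustration.

```python
# Illustrative quota check: an additional demand is admissible only if it
# fits both the instance's remaining quota and the PVIM's free resources.
def within_quota(used, quota, available, demand):
    for k in demand:
        if demand[k] > quota[k] - used[k] or demand[k] > available[k]:
            return False
    return True

used      = {"cpu": 100, "mem_gb": 80,  "disk_gb": 18}
quota     = {"cpu": 120, "mem_gb": 100, "disk_gb": 23}
available = {"cpu": 64,  "mem_gb": 256, "disk_gb": 500}
demand    = {"cpu": 8,   "mem_gb": 16,  "disk_gb": 4}
print(within_quota(used, quota, available, demand))  # True
```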
[0070] Further, in an implementation, the PEEGN [1088] sends a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to provide the available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA) and the used and free resources in each HA. The Availability Zones (AZ) are end-user-visible logical abstractions for partitioning of the cloud services. The logical partition comprises block storage, compute services and network services. The logical partition requires a particular host to be present in an Availability Zone. In other words, AZs are isolated or separated data centres located within specific regions in which cloud services originate and operate. Moreover, an AZ refers to a specific or an isolated location in a data centre or in a cloud environment. The isolated location ensures that, in case of failure of one zone, services in another zone may remain functional or operational. In an implementation, the Host Aggregate (HA) refers to an aggregate or group of physical hosts in a virtualised environment. Further, HAs are used to define where specific virtual network functions (VNFs) can be deployed. HAs can be created based on the hardware profile of the physical hosts. Further, each Availability Zone may have an association of multiple host aggregates, which in turn may have a list of hosts associated with it. In an example, the attributes of the PROVIDE_VIM_AZ_HA_DETAIL event comprise VIM Id, VNF Id and VNF Version. Here, VIM Id refers to the identifier of a Virtualized Infrastructure Manager (VIM) instance on which the VNF/VNFC/CNF/CNFC is to be spawned for scale-out or from which the VNF/VNFC/CNF/CNFC instance needs to be removed for scale-in.
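The AZ/HA hierarchy returned in a PROVIDE_VIM_AZ_HA_DETAIL response can be pictured as a nested mapping, as in the sketch below, which selects the first host aggregate with enough free capacity. The figures and the selection rule are invented for illustration.

```python
# Illustrative AZ -> HA -> free-resource mapping (figures are invented).
vim_detail = {
    "az-1": {
        "ha-compute": {"free_cpu": 32, "free_mem_gb": 128},
        "ha-storage": {"free_cpu": 4,  "free_mem_gb": 16},
    },
    "az-2": {
        "ha-compute": {"free_cpu": 2, "free_mem_gb": 8},
    },
}

def pick_host_aggregate(detail, need_cpu, need_mem_gb):
    """Return the first (AZ, HA) pair with enough free capacity, else None."""
    for az, aggregates in detail.items():
        for ha, free in aggregates.items():
            if free["free_cpu"] >= need_cpu and free["free_mem_gb"] >= need_mem_gb:
                return az, ha
    return None

print(pick_host_aggregate(vim_detail, need_cpu=8, need_mem_gb=32))  # ('az-1', 'ha-compute')
```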
[0071] Continuing further, the processing unit [304] may analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. The automatic scale constraint is the total virtual resource that a particular VNF/VNFC/CNF/CNFC can consume, considering its instantiations as well as its automatic scaling requirements. The system [300] ensures that the VNF/VNFC/CNF/CNFC does not consume more than the specified resources under any circumstances. This is to ensure that no VNF/VNFC/CNF/CNFC instance is able to hog the complete network resources. Further, in an implementation, the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size which can be allocated to the one or more nodes to scale.
[0072] In an implementation, the processing unit [304], after the analysis at the PEEGN [1088], may use the automatic scaling policy to scale with the required resources based on one or more affinities, one or more anti-affinities, all dependents, and deployment flavors. During the onboarding of a VNF/CNF, policy rules specific to VNF/CNF instantiation, VNFC/CNFC scaling, VNF/CNF healing, VNFC/CNFC dependencies, VNFC/CNFC affinity/anti-affinity and VNF/CNF termination are created. These rules are persisted by the MANO platform as shown in FIG. 1. As would be understood, the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors are essential rules that may manage the placement of resources in a scalable, fault-tolerant and optimized network environment. Further, the one or more affinities, the one or more anti-affinities, and all dependents may determine that one or more resources may run together, that one or more resources may not run together, and that one or more resources are dependent on other resources. The deployment flavor refers to the compute, memory, and storage capacity required by a VNF/CNF instance. Therefore, based on the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors, the automatic scaling policy may be invoked for the optimized performance of all the resources or the one or more nodes in the network.
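Combining the automatic scale constraints with a simple anti-affinity rule, the admission decision for one scale-out step might be sketched as follows. The rule encoding and all figures are assumptions; only the constraint fields (total CPUs, virtual memory size, disk size) are taken from the disclosure.

```python
# Illustrative admission check: enforce the automatic scale constraints
# and a basic anti-affinity rule before permitting a scale-out.
CONSTRAINTS = {"totalNoOfCPU": 120, "vMemorySize": 100, "diskSize": 23}

def admit_scale_out(consumed, per_instance, candidate_host, peer_hosts, anti_affinity):
    # 1. The VNF/CNF must never exceed its automatic scale constraints.
    for key, limit in CONSTRAINTS.items():
        if consumed[key] + per_instance[key] > limit:
            return False
    # 2. Anti-affinity: the new instance must not share a host with its peers.
    if anti_affinity and candidate_host in peer_hosts:
        return False
    return True

ok = admit_scale_out(
    consumed={"totalNoOfCPU": 96, "vMemorySize": 80, "diskSize": 18},
    per_instance={"totalNoOfCPU": 8, "vMemorySize": 8, "diskSize": 2},
    candidate_host="host-3",
    peer_hosts={"host-1", "host-2"},
    anti_affinity=True,
)
print(ok)  # True
```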
[0073] Continuing further, the transceiver unit [302] may transmit from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources. After receiving the response from the PVIM [1050] and analysing that there are enough resources for the VNF/VNFC to scale, based on the scaling, dependent-VNFC, affinity/anti-affinity policies, and the deployment flavor, the PEEGN [1088] triggers the PVIM [1050] to reserve the resources in its inventory using the RESERVE RESOURCES IN VIM AZ HA event.
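A minimal sketch of triggering the reservation follows; the `send_event` transport and the payload keys are assumptions for illustration:

```python
# Hypothetical sketch of raising the RESERVE RESOURCES IN VIM AZ HA event.
def reserve_resources(send_event, vim_id: str, az: str, ha: str, demand: dict) -> dict:
    payload = {
        "vimId": vim_id,
        "availabilityZone": az,
        "hostAggregate": ha,
        "resources": demand,  # e.g. {"vcpu": 4, "memory_gb": 8, "disk_gb": 40}
    }
    # The PVIM is expected to answer with one token per reserved node.
    return send_event("RESERVE_RESOURCES_IN_VIM_AZ_HA", payload)
```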
[0074] As would be understood, the request for reserving the one or more resources may be a request to allocate the one or more resources to the one or more nodes to ensure the performance and the availability of the one or more nodes. Whereas the request for unreserving the one or more resources may be a request to de-allocate the one or more resources to avoid flapping of the resources, or to migrate the one or more resources from one instance (or site) to another instance. The PVIM [1050] reserves or unreserves resources on the VIM which was selected by the PEEGN [1088] and sends a response to the PEEGN [1088]. Further, the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes. As would be understood, the token is a header that may be used to validate the identity of the nodes. Furthermore, the response may include a token that may be validated; for example, tk55f9a1-19a6-adf3-8514-b0b150asdfd0 is an example of a token which is a universally unique identifier (UUID).
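Token validation at the receiving side may be sketched as a simple comparison of the presented token against the token the PVIM [1050] issued for that node; the storage and lookup names below are hypothetical:

```python
# Hypothetical sketch of validating a reservation token at the node manager.
def validate_request(node_id: str, presented_token: str, issued_tokens: dict) -> bool:
    """Accept a scaling request only if the token it carries matches the
    one the PVIM issued for that node when the resources were reserved."""
    return issued_tokens.get(node_id) == presented_token
```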
[0075] Continuing further, the transceiver unit [302], may trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources.
[0076] In an implementation, the node manager may be a VNF/CNF life cycle manager (VNF-LM [1042]/CNF-LM [1052]). Further, the VNF-LM [1042] may be a microservice responsible for lifecycle management of VNF instances. The VNF-LM [1042] may instantiate, terminate, or scale resources. Furthermore, the CNF-LM [1052] may be responsible for creating a CNF or individual CNFC instances. Also, it may be responsible for healing and scaling out CNFs or individual CNFCs.
[0077] In an implementation, the trigger, by the transceiver unit [302], from the PEEGN [1088], for the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes. The PEEGN [1088], upon successful receipt of the response from the PVIM [1050], may send the one or more tokens, received from the PVIM [1050], to the node manager (i.e., VNF-LM [1042]/CNF-LM [1052]) to automatically scale the one or more nodes. Further, the trigger may be an event, such as a TRIGGER VNF SCALING/TRIGGER VNFC SCALING event. The trigger events comprise the tokens received from the PVIM [1050] in the response. These tokens are passed to the VNF-LM [1042]/CNF-LM [1052] to validate the request.
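For illustration, passing the tokens along with the scaling trigger may be sketched as follows, assuming a hypothetical `send_event` helper and payload layout:

```python
# Hypothetical sketch of the TRIGGER VNF SCALING trigger carrying the
# PVIM-issued tokens; event and field names are illustrative assumptions.
def trigger_scaling(send_event, node_ids: list, tokens: dict,
                    action: str = "scale-out") -> dict:
    payload = {
        "action": action,  # "scale-out" or "scale-in"
        "nodes": node_ids,
        "tokens": {n: tokens[n] for n in node_ids},  # validated by the LM
    }
    return send_event("TRIGGER_VNF_SCALING", payload)
```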
[0078] Continuing further, the transceiver unit [302] may receive at the PEEGN [1088], an acknowledgement response from the node manager, and the transceiver unit [302] may transmit from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
[0079] Referring to FIG. 4, an exemplary flow diagram of a method for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure, is illustrated. In an implementation, the method [400] is performed by the system [300]. Also, as shown in FIG. 4, the method [400] initiates at step [402].

[0080] At step [404], the method [400] comprises receiving, by a transceiver unit [302] at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes. In an implementation, the one or more nodes may comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs). The system [300] provides an intelligent scaling framework which helps to scale the VNFC/CNFC instances as per the traffic requirements. Whenever a breach event is detected related to VNF/CNF instances, the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of the breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis check. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
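The hysteresis evaluation may be sketched, for example, as requiring a number of consecutive breach events before the scaling policy is invoked; the window length and the breach-history store below are assumptions for illustration:

```python
# Hypothetical sketch of a hysteresis check over recent breach evaluations.
from collections import deque

def hysteresis_met(history: deque, required_breaches: int = 3) -> bool:
    """Only request the PEEGN to execute the scaling policy when the last
    `required_breaches` evaluations were all threshold breaches, which
    filters out one-off spikes."""
    return len(history) >= required_breaches and all(
        breached for breached in list(history)[-required_breaches:]
    )
```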
[0081] As would be understood, the PEEGN [1088] is a system that may create, manage, and enforce the policies and rules to regulate the behaviour and the operations of the network function. Further, the PEEGN [1088] may ensure that the network function and its components function and operate as per the predefined policies and rules. The PEEGN [1088] calculates the required resources for any VNF/CNF, performs a quota check, and updates the PVIM [1050] based on affinity/anti-affinity and other policies which are defined at the PEEGN [1088].
[0082] In an implementation, the request received by the transceiver unit [302] at the PEEGN [1088] may be an INVOKE POLICY event. It is to be noted that the events are generated based on the predefined policies and rules. The INVOKE POLICY event may be a trigger that may initiate any predefined policy to be executed for the one or more nodes. Consider an example where the traffic load on a node increases and crosses a predefined threshold. The INVOKE POLICY event may be triggered, and more resources may be allocated to handle the extra load, thereby ensuring the optimal performance of the node.
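A minimal sketch of such a threshold-driven trigger follows; the 80% threshold value and the event helper are illustrative assumptions only:

```python
# Hypothetical sketch of raising an INVOKE POLICY event on a threshold breach.
def check_load_and_invoke(send_event, node_id: str, cpu_load: float,
                          threshold: float = 0.80) -> None:
    if cpu_load > threshold:
        send_event("INVOKE_POLICY", {
            "nodeId": node_id,
            "policy": "automatic-scaling",
            "observedLoad": cpu_load,
        })
```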
[0083] Further, as would be understood, the automatic scaling policy may refer to rules and policies that may automatically and/or dynamically allocate or de-allocate resources based on the demand to ensure optimal performance of the one or more nodes. Further, automatic scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes. As would be understood, scale-in may refer to a process of reducing the number of active instances and the resources allocated to the network function in response to decreased demand and usage of the resources, whereas scale-out may refer to the process where new instances are created to handle the workload on the existing instances as the demand for resources increases.
[0084] Next, at step [406], the method [400] comprises fetching, by the transceiver unit [302] at the PEEGN [1088], a set of data relating to the one or more nodes. Further, to fetch the set of data relating to the one or more nodes, the transceiver unit [302] may transmit from the PEEGN [1088] a request to one or more node catalogues associated with the one or more nodes to fetch the set of data related to the one or more nodes.
[0085] In an implementation, the request to fetch the set of data related to the one or more nodes may be a GET VNF DETAILS event. The PEEGN [1088] may send the GET VNF DETAILS event to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components. In an implementation, the one or more node catalogues are VNF catalogues or CNF catalogues. Also, the set of data may include, but is not limited to, performance status, workload, capacity, and resource consumption. It is to be noted that the above-mentioned set of data is exemplary and in no manner limits the scope of the present disclosure. Further, the set of data may include any other data obvious to a person skilled in the art to implement the solution of the present disclosure. Furthermore, a storage unit [306] may save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
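Fetching the details and persisting them for the later steps may be sketched as below; the catalogue and database interfaces are hypothetical:

```python
# Hypothetical sketch of fetching node details via GET VNF DETAILS and
# saving them to the database for further processing.
def fetch_and_store_details(send_event, database, node_ids: list) -> None:
    for node_id in node_ids:
        details = send_event("GET_VNF_DETAILS", {"nodeId": node_id})
        # e.g. performance status, workload, capacity, resource consumption
        database.save(node_id, details)
```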
[0086] Further, at step [408], the method [400] comprises sending, by the transceiver unit [302], from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050].
[0087] In an implementation, the PEEGN [1088] may send a PROVIDE VIM AZ HA DETAIL request to the PVIM [1050] to get the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and the one or more available resources for the one or more nodes. As would be understood, the one or more current used resources by each of the one or more nodes may refer to the resources currently used by the one or more nodes from the allocated resources. Further, the allocated resource quota for each of the one or more nodes may refer to the predefined limit of the resources allocated to the one or more nodes. After successful instantiation of a VNF or a CNF instance, the PEEGN [1088] allocates resources for the VNF/CNF components and asks the inventory to reserve the resources for the same. The resources reserved at the time of instantiation define the allocated resource quota for each VNF and CNF instance. Furthermore, the one or more available resources for the one or more nodes may represent the amount of available resources in the network for the one or more node components.
[0088] Further, in an implementation, the PEEGN [1088] sends the PROVIDE VIM AZ HA DETAIL request to the PVIM [1050] to provide the available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA), along with the used and free resources in each HA.
[0089] Further, at step [410], the method [400] comprises analysing, by a processing unit [304] at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. The automatic scaling constraints define the total virtual resources that a particular VNF/VNFC/CNF/CNFC can consume, considering its instantiations as well as its automatic scaling requirements. The system [300] ensures that the VNF/VNFC/CNF/CNFC does not, under any circumstances, consume more than the specified resources. This is to ensure that no VNF/VNFC/CNF/CNFC instance is able to hog the complete network resources. Further, in an implementation, the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size for the one or more nodes to scale.
[0090] In an implementation, the processing unit [304], after the analysis at the PEEGN [1088], may use the automatic scaling policy to scale with the required resources based on one or more affinities, one or more anti-affinities, all dependents, and deployment flavors. During the onboarding of a VNF/CNF, policy rules specific to VNF/CNF instantiation, VNFC/CNFC scaling, VNF/CNF healing, VNFC/CNFC dependencies, VNFC/CNFC affinity/anti-affinity, and VNF/CNF termination are created. These rules are persisted by the platform. As would be understood, the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors are essential tools that may manage the placement of resources in a scalable, fault-tolerant, and optimized network environment. Further, the one or more affinities, the one or more anti-affinities, and all dependents may determine, respectively, that one or more resources may run together, that one or more resources may not run together, and that one or more resources are dependent on other resources. The deployment flavor refers to the compute, memory, and storage capacity required by a VNF/CNF instance. Therefore, based on the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors, the automatic scaling policy may be invoked for the optimized performance of all the resources or the one or more nodes in the network.
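As an illustration of the anti-affinity part of these placement rules, a candidate host aggregate may be rejected when it already hosts a group that the new instance must not share hosts with; the rule and data structures below are assumptions. The symmetric affinity check would, conversely, prefer host aggregates that already host the affine peer.

```python
# Hypothetical anti-affinity check; `placed` maps a host aggregate name to
# the set of node groups already running there, and `anti_affinity_pairs`
# lists pairs of groups that must not share a host aggregate.
def anti_affinity_ok(candidate_ha: str, new_group: str,
                     placed: dict, anti_affinity_pairs: list) -> bool:
    here = placed.get(candidate_ha, set())
    for a, b in anti_affinity_pairs:
        if (new_group == a and b in here) or (new_group == b and a in here):
            return False  # candidate already hosts a forbidden peer
    return True
```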
[0091] Further, at step [412], the method [400] comprises transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving the one or more resources. As would be understood, the request for reserving the one or more resources may be a request to allocate the one or more resources to the one or more nodes to ensure the performance and the availability of the one or more nodes. Whereas the request for unreserving the one or more resources may be a request to de-allocate the one or more resources to avoid flapping of the resources, or to migrate the one or more resources from one instance (or site) to another instance. Further, the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes. As would be understood, the token is a header that may be used to validate the one or more nodes. Furthermore, the response may include a token that may be validated, for example, tk55f9a1-19a6-adf3-8514-b0b150asdfd0 (UUID). The trigger events comprise the tokens received from the PVIM [1050] in the response. These tokens are passed to the VNF-LM [1042]/CNF-LM [1052] to validate the request.
[0092] Furthermore, at step [414], the method [400] comprises triggering, by the transceiver unit [302], from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources.
[0093] In an implementation, the node manager may be a VNF/CNF life cycle manager (VNF-LM [1042]/CNF-LM [1052]). Further, the VNF-LM [1042] may be a microservice responsible for lifecycle management of VNF instances. The VNF-LM [1042] may instantiate, terminate, or scale resources. Furthermore, the CNF-LM [1052] may be responsible for creating a CNF or individual CNFC instances. Also, it may be responsible for healing and scaling out CNFs or individual CNFCs.
[0094] In an implementation, the trigger, by the transceiver unit [302], from the PEEGN [1088], for the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes. The PEEGN [1088], upon successful receipt of the response from the PVIM [1050], may send the one or more tokens, received from the PVIM [1050], to the node manager (i.e., VNF-LM [1042]/CNF-LM [1052]) to automatically scale the one or more nodes. Further, the trigger may be an event, such as a TRIGGER VNF SCALING/TRIGGER VNFC SCALING event.
[0095] Continuing further, the transceiver unit [302] may receive at the PEEGN [1088], an acknowledgement response from the node manager and the transceiver unit [302] may transmit from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
[0096] Thereafter, the method [400] may terminate at step [416].
[0097] Referring to FIG. 5, an exemplary session flow diagram for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure, is illustrated. In an implementation, the session [500] is performed at the system [300].
[0098] At step 502, the policy execution engine (PEEGN) [1088] may receive a request for reservation and allocation from an analytics component. In an exemplary implementation, the analytics component may be a Network Function Virtualization Platform Decision and Analytics (NPDA) [1096]. Whenever a breach event is detected related to VNF/CNF instances, the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of the breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis check. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
[0099] Next, at step 504, the PEEGN [1088] may handle events for fetching the VNF/CNF details and resource details from the inventory (PVIM [1050]). In an exemplary implementation, the inventory of the VNF/CNF may maintain the details of the VNF/CNF. Further, the details of the VNF/CNF may include, but are not limited to, a VNF/CNF name, a VNF/CNF version, etc. In an implementation, the request to fetch the set of data related to the one or more nodes may be a GET VNF DETAILS event. The PEEGN [1088] may send the GET VNF DETAILS event to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components. In an implementation, the one or more node catalogues are VNF catalogues or CNF catalogues. Also, the set of data may include, but is not limited to, performance status, workload, capacity, and resource consumption.
[0100] Further, at step 506, after receiving all the information for the VNF/CNF based on events, the PEEGN [1088], may save the response in the database [308] for further processing.
[0101] Further, at step 508, the PEEGN [1088] may send a request to the PVIM [1050] to reserve, allocate, or unreserve resources. The PEEGN [1088] may consult the PVIM [1050] to check the current used resources against the total allocated quota. For this, the PEEGN [1088] may send a PROVIDE VIM AZ HA DETAIL to the PVIM [1050]. Further, in an implementation, the PEEGN [1088] sends the PROVIDE VIM AZ HA DETAIL request to the PVIM [1050] to provide the available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA), and the used and free resources in each HA.
[0102] Furthermore, at step 510, upon receipt of the response of the corresponding event, the PEEGN [1088] may allow the logical handling of the automatic scale constraints. In an example, the automatic scale constraints may include, but are not limited to: totalNoOfCpu: 120 (in cores), vMemorySize: 100 (GB), and diskSize: 23.
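Expressed as a configuration snippet, and using only the example values given above, the constraints may look as follows; the key names and the unit of diskSize are assumptions, since the source gives only the values:

```python
# The example automatic scale constraints as a configuration dictionary;
# key names are illustrative assumptions.
AUTO_SCALE_CONSTRAINTS = {
    "totalNoOfCpu": 120,  # cores
    "vMemorySize": 100,   # GB
    "diskSize": 23,       # unit assumed; the source gives only the value
}
```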
[0103] Thereafter, at step 512, the PEEGN [1088], after processing the scaling request based on the automatic scale constraints, may save the updated details for the VNF/CNF in the database [308] for further processing. Also, the PEEGN [1088] may repeat the entire session until the requests of all the events are served.
[0104] Moreover, at step 514, the PEEGN [1088] may receive and generate a tokenizer response for the automatic scaling response. The tokenizer response may comprise one or more tokens. The token is a header that is used to validate the one or more nodes. All the responses will have a token that will be validated. Finally, after receiving the response, the PEEGN [1088] may send the acknowledgement response to the NPDA [1096].
[0105] The present disclosure may further relate to a non-transitory computer readable storage medium storing one or more instructions for automatic scaling of one or more nodes, the instructions including executable code which, when executed by one or more units of a system [300], causes a transceiver unit [302] of the system [300] to receive at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes. Further, the executable code when executed causes the transceiver unit [302] to fetch at the PEEGN [1088], a set of data relating to the one or more nodes. Further, the executable code when executed causes the transceiver unit [302] to send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050]. The executable code when further executed causes a processing unit [304] of the system [300] to analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. Furthermore, the executable code when executed causes the transceiver unit [302] to transmit from the PEEGN [1088], to the PVIM [1050], a request for reserving the one or more resources. Moreover, the executable code when executed causes the transceiver unit [302] to trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for reserving the one or more resources.
[0106] As is evident from the above, the present disclosure provides a technically advanced solution for automatic scaling of one or more nodes. More particularly, the present solution applies automatic scale constraints based on policies that are applicable to VNF/VNFC/CNF/CNFC for automatic scaling of resources. Further, the present solution leads to zero-data-loss policies while VNF/VNFC/CNF/CNFC resources are scaling up. The present solution also supports event-driven scaling. Additionally, automatic scale constraints address several critical problems in the MANO architecture. Below are some key problems that are solved by automatic scale constraints:
• Excessive provisioning of resources.
• Insufficient provisioning of resources.
• Resource failures.
• Resource Mismanagement.
• Performance degradation.
• Conflict while reservation and allocation of resources.
• Unavailability of Policy Execution Engine Service.
• Time consumed in reservation and allocation of VNF/VNFC/CNFC/CNF resources.
• Increased Cost.

[0107] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
[0108] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably.
While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.

Claims

We Claim:
1. A method for automatic scaling of one or more nodes, the method comprising:
- receiving, by a transceiver unit [302] at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes;
- fetching, by the transceiver unit [302] at the PEEGN [1088], a set of data relating to the one or more nodes;
- sending, by the transceiver unit [302], from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050];
- analysing, by a processing unit [304] at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy;
- transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources;
- triggering, by the transceiver unit [302], from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources.
2. The method as claimed in claim 1, wherein the one or more nodes comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs).
3. The method as claimed in claim 1, wherein fetching, by the transceiver unit [302] at the PEEGN [1088], the set of data relating to the one or more nodes comprises at least one of:
- transmitting, by the transceiver unit [302], from the PEEGN [1088] a request to one or more node catalogues associated with the one or more nodes to fetch the set of data related to the one or more nodes; and
- saving, by a storage unit [306], at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
4. The method as claimed in claim 1, wherein the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size with the one or more nodes.
5. The method as claimed in claim 1, wherein automatic-scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes.
6. The method as claimed in claim 1, wherein the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes.
7. The method as claimed in claim 6, wherein the triggering, by the transceiver unit [302], from the PEEGN [1088], the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes.
8. The method as claimed in claims 1 and 3, wherein prior to transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of the reserving and the unreserving of the one or more resources, the method comprises:
- updating, by the storage unit [306], at the PEEGN [1088], the one or more current used resources by each of the one or more nodes, in the database [308].
9. The method as claimed in claim 1, further comprises:
- receiving, by the transceiver unit [302], at the PEEGN [1088], an acknowledgement response from the node manager; and
- transmitting, by the transceiver unit [302], from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
10. A system for automatic scaling of one or more nodes, the system comprising:
- a transceiver unit [302] configured to receive at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes;
- the transceiver unit [302] configured to fetch at the PEEGN [1088], a set of data relating to the one or more nodes;
- the transceiver unit [302], configured to send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050];
- a processing unit [304], configured to analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy;
- the transceiver unit [302], configured to transmit from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources;
- the transceiver unit [302], configured to trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of reserving and unreserving of the one or more resources.
11. The system as claimed in claim 10, wherein the one or more nodes comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs).
12. The system as claimed in claim 10, wherein fetching, by the transceiver unit [302] at the PEEGN [1088], the set of data relating to the one or more nodes comprises:
- the transceiver unit [302], configured to transmit from the PEEGN [1088] a request to one or more node catalogues associated with the one or more nodes to fetch the set of data related to the one or more nodes; and
- a storage unit [306], configured to save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
13. The system as claimed in claim 10, wherein the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size with the one or more nodes.
14. The system as claimed in claim 10, wherein automatic-scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes.
15. The system as claimed in claim 10, wherein the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes.
16. The system as claimed in claim 15, wherein the triggering, by the transceiver unit [302], from the PEEGN [1088], the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes.
17. The system as claimed in claims 10 and 12, wherein prior to transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of the reserving and the unreserving of the one or more resources, the system comprises:
- the storage unit [306], configured to update at the PEEGN [1088], the one or more current used resources by each of the one or more nodes in the database [308].
18. The system as claimed in claim 10, wherein the system further comprises:
- the transceiver unit [302], configured to receive at the PEEGN [1088], an acknowledgement response from the node manager; and
- the transceiver unit [302], configured to transmit from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
19. A non-transitory computer-readable storage medium storing instructions for automatic scaling of one or more nodes, the storage medium comprising executable code which, when executed by one or more units of a system, causes:
- a transceiver unit [302] to receive at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes;
- the transceiver unit [302] to fetch at the PEEGN [1088], a set of data relating to the one or more nodes;
- the transceiver unit [302] to send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050];
- a processing unit [304] to analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy;
- the transceiver unit [302] to transmit from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources;
- the transceiver unit [302] to trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of reserving and unreserving of the one or more resources.