
WO2025069064A1 - Method and system for managing a host for container network function components - Google Patents

Method and system for managing a host for container network function components

Info

Publication number
WO2025069064A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
cnfcs
host
new host
cnflm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051850
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Ankit Murarka
Rizwan Ahmad
Kapil Gill
Arpit Jain
Shashank Bhushan
Jugal Kishore
Meenakshi Sarohi
Kumar Debashish
Supriya Kaushik DE
Gaurav Kumar
Kishan Sahu
Gaurav Saxena
Vinay Gayki
Mohit Bhanwria
Durgesh KUMAR
Rahul Kumar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025069064A1 publication Critical patent/WO2025069064A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F 2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 41/40 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities

Definitions

  • FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform [100], in accordance with exemplary implementation of the present disclosure.
  • the MANO architecture [100] is developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/ service(s) etc.
  • the MANO architecture [100] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
  • VNF Virtual Network Function
  • CNF Cloud-native/ Container Network Function
  • the system may comprise one or more components of the MANO architecture [100].
  • the MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
  • the MANO architecture [100] comprises a user interface layer, a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
  • the NFV and SDN (NFVSDN) design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
  • the VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network the microservice will be instantiated.
  • the VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user.
  • the platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070].
  • the microservices elastic load balancer [1062] is used for maintaining the load balancing of the request for the services.
  • the identity & access manager [1064] is used for logging purposes.
  • the command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during the run time.
  • the central logging manager [1068] is responsible for keeping the logs of every service. These logs are generated by the MANO platform [100]. These logs are used for debugging purposes.
  • the event routing manager [1070] is responsible for routing the events i.e., the application programming interface (API) hits to the corresponding services.
  • the platform core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF backup & upgrade manager [1102], a micro service auditor [1104], and a platform operations, administration and maintenance manager [1106].
  • the NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs.
  • the assure manager [1084] is responsible for supervising the alarms the vendor is generating.
  • the performance manager [1086] is responsible for managing the performance counters.
  • the policy execution engine (PEEGN) [1088] is responsible for managing all the policies.
  • the capacity monitoring manager (CMM) [1090] is responsible for sending the request to the PEEGN [1088].
  • the release management (mgmt.) repository (RMR) [1092] is responsible for managing the releases and the images of all the vendor network nodes.
  • the configuration manager & GCT [1094] manages the configuration and GCT of all the vendors.
  • the NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It is further noted that the policy execution engine (PEEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together.
  • the platform NoSQL DB [1098] is a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs.
  • the platform schedulers and cron jobs [1100] schedule tasks such as, but not limited to, triggering an event, traversing the network graph, etc.
  • the VNF backup & upgrade manager [1102] takes backup of the images, binaries of the VNFs and the CNFs and produces that backup on demand in case of server failure.
  • the micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances not instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100], thereby assuring that the services only run on the MANO platform [100].
  • the platform operations, administration and maintenance manager [1106] is used for newer instances that are spawning.
  • the platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122], a generic decoder and indexer (XML, CSV, JSON) [1124], a docker service adaptor [1126], an API adapter [1128], and an NFV gateway [1130].
  • the platform external API adaptor and gateway [1122] is responsible for handling the external services (external to the MANO platform [100]) that require the network resources.
  • the generic decoder and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system in the XML, CSV, or JSON format.
  • the docker service adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100] for communication.
  • the API adapter [1128] is used to connect with virtual machines (VMs).
  • the NFV gateway [1130] is responsible for providing the path to each service going to/incoming from the MANO architecture [100].
  • the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information.
  • the hardware processor [204] may be, for example, a general-purpose microprocessor.
  • the computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
  • the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
  • a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
  • the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user.
  • an input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204].
  • a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, may be coupled to the bus [202] for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
  • the input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
  • the computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
  • the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
  • hard-wired circuitry may be used in place of or in combination with software instructions.
  • the computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222].
  • the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • the computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218].
  • a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], a host [224], the local network [222] and the communication interface [218].
  • the received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
  • Referring to FIG. 3, an exemplary block diagram of a system [300] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated.
  • the system [300] may be in communication with other network entities/components known to a person skilled in the art. Such network entities/components have not been depicted in FIG. 3 and have not been explained here for the sake of brevity.
  • Referring to FIG. 4, an exemplary signalling flow diagram [400] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated.
  • FIG. 3 and FIG. 4 have been explained simultaneously and may be read in conjunction with each other.
  • the system [300] comprises at least one processing unit [302] and at least one display unit [304]. Also, all of the components/ units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3, only a few units are shown; however, the system [300] may comprise multiple such units or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may reside in a server or the network entity or the system [300] may be in communication with the network entity to implement the features as disclosed in the present disclosure.
  • the system [300] is configured for managing a host for one or more container network function components (CNFCs) with the help of the interconnection between the components/units of the system [300].
  • the container network function (CNF) may be a network function that may be implemented within a containerized environment using technologies such as, but not limited to, a docker, and the like.
  • the said container network functions that may be implemented within the containerized environment may be cloud-native network functions.
  • the cloud-native network functions may not rely on dedicated hardware or virtual machines for implementation; rather, they may be implemented within a container.
  • the containerization of the network function may make it possible to manage how and when the network function may run across a cluster in the environment.
  • the container network function components (CNFCs) may include components such as, but not limited to, a container runtime, an operating system, lifecycle management component, storage component, etc.
  • the host on which CNF runs may be a physical or a virtual machine that may have all the necessary components such as, but not limited to, networking, storage and security.
  • the host further ensures that CNFs may operate efficiently within a containerized environment or a cloud-native environment.
  • the processing unit [302] may receive, via a user interface (UI), at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. This has been depicted by Step [402] in FIG. 4.
  • the CNFLM node may manage the lifecycle of the container.
  • the management of the lifecycle of the container is a crucial process, where the CNFLM node may oversee the creation, deployment and operation of the container until the container may be eventually decommissioned.
  • a user may receive all details related to a plurality of hosts at the UI. Once the user receives all the details related to the plurality of hosts, the user may select one or more faulty hosts from the plurality of hosts, at the UI, to be replaced with one or more new hosts. Further, the details related to the plurality of hosts may include, but are not limited to, a host name and a host internet protocol (IP) address.
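  • As an illustration of the request in Step [402], the sketch below shows how the UI might submit a host-replacement request to the CNFLM node. This is a hypothetical sketch: the endpoint URL, payload fields, and use of HTTP are assumptions, as the disclosure does not prescribe a transport or API schema.

```python
# Hypothetical sketch of the Step [402] host-replacement request from the UI
# to the CNFLM node. The URL and payload schema are illustrative assumptions.
import requests

payload = {
    "faulty_host": {"host_name": "host-07", "host_ip": "10.0.0.7"},
    "cnfc_ids": ["cnfc-0041", "cnfc-0042"],  # CNFCs running on the faulty host
}
resp = requests.post(
    "http://cnflm.local/api/v1/host-replacement",  # assumed CNFLM endpoint
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. a list of candidate new hosts for the user to pick from
```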
  • There are a variety of issues that could cause hosts or servers to go down or become faulty, including hardware failure, viruses, power outages, as well as natural or physical disasters like fires or floods.
  • a host or server may also go down because of corrupted files, or misconfigurations.
  • the details related to the CNFCs on the one or more faulty hosts may be displayed on the UI.
  • the details related to the CNFCs on the faulty hosts may include a CNF name, a CNF version, a CNF ID, a CNFC Name, a CNFC ID, and a container ID.
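  • A minimal way to model the display fields named above is sketched below; the field names simply mirror that list and are assumptions, not a schema defined by the disclosure.

```python
# Illustrative record types for the details shown on the UI for a faulty host:
# CNF name/version/ID, CNFC name/ID, container ID, plus host name and IP.
from dataclasses import dataclass


@dataclass
class CnfcDetails:
    cnf_name: str
    cnf_version: str
    cnf_id: str
    cnfc_name: str
    cnfc_id: str
    container_id: str  # ID of the container running this CNFC instance


@dataclass
class HostDetails:
    host_name: str
    host_ip: str  # host internet protocol (IP) address
```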
  • the display unit [304] may display at the UI, a plurality of new hosts.
  • the display unit [304] may further display, via the CNFLM node, and at the UI, the set of details relating to the new host.
  • the details related to the new host may comprise one or more new host names and a new host IP address.
  • the processing unit [302] may transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to the new host. This has been depicted by Step [404] in FIG. 4.
  • the DSA is a component of the system [300] that may have been designed to interface between the docker services and the other components of the system [300]. Further, as would be understood, instantiation may be a process to create an instance of the CNFCs and to make the CNFCs operational on the selected new host.
  • the processing unit [302] may re-instantiate, via the DSA node, the one or more CNFCs to the new host. This has been depicted by Step [406] in FIG. 4. Furthermore, the processing unit [302] may transmit from the new host or the server, via the DSA node (shown as Step [408]), a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. This has been depicted by Step [410] of FIG. 4. The success response may indicate that the one or more CNFCs operationalised on the faulty host are now successfully operationalised on the selected new host.
  • the processing unit [302] may transmit, via the CNFLM node, to a physical and virtual resource manager (PVIM) node, a set of details related to the new host. This has been depicted by Step [412] of FIG. 4.
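  • Taken together, Steps [402] to [414] amount to a short orchestration loop on the CNFLM side, sketched below. The two helper functions are illustrative stand-ins for the DSA and PVIM node interfaces; their names, signatures, and transports are assumptions, since the disclosure does not specify them.

```python
# Hypothetical sketch of the CNFLM-side flow across Steps [402]-[414].

def dsa_reinstantiate(cnfcs, target_host):
    """Stand-in for Steps [404]-[410]: ask the DSA node to re-instantiate the
    CNFCs on the new host and return its success response."""
    return {"success": True, "host_ip": "10.0.0.12"}  # placeholder response


def pvim_update(details):
    """Stand-in for Step [412]: send the new host details to the PVIM node."""
    print("inventory update:", details)


def handle_host_replacement(faulty_host, new_host, cnfcs):
    # Step [404]: instruct the DSA node to re-instantiate the CNFCs on the new host.
    result = dsa_reinstantiate(cnfcs, new_host)
    if not result.get("success"):
        return {"status": "failed", "detail": result}
    # Step [412]: forward the new host's name and IP so the inventory stays in sync.
    pvim_update({"old_host": faulty_host,
                 "new_host_name": new_host,
                 "new_host_ip": result["host_ip"]})
    # Step [414]: success response indicating replacement and re-instantiation.
    return {"status": "success", "new_host": new_host}


print(handle_host_replacement("host-07", "host-12", ["cnfc-0041", "cnfc-0042"]))
```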
  • the PVIM service maintains the virtual inventory, such as virtual machines, and limited physical inventory, such as servers. It maintains the relation between physical and virtual resources (w.r.t overlay). Also, it describes physical and virtual resources with respect to different attributes using updates from external micro-services, such as the CNFLM microservice or node.
  • the PVIM node is communicably coupled to a database.
  • the processing unit [302] may update, via the PVIM node, the database with the set of details related to the new host.
  • the database, communicably coupled with the PVIM, may store all the details related to the selected new host for the said one or more CNFCs.
  • the set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.
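  • Since the set of details is just a host name and an IP address, the PVIM-side update can be pictured as a keyed replacement in the inventory. The in-memory dictionary below is only a stand-in for the actual database.

```python
# Illustrative stand-in for the PVIM inventory database: replace the faulty
# host's record with the new host's name and IP to keep the inventory in sync.
inventory = {"host-07": {"host_name": "host-07", "host_ip": "10.0.0.7"}}


def replace_host_record(old_host, new_name, new_ip):
    inventory.pop(old_host, None)  # drop the faulty host's record
    inventory[new_name] = {"host_name": new_name, "host_ip": new_ip}


replace_host_record("host-07", "host-12", "10.0.0.12")
print(inventory)  # {'host-12': {'host_name': 'host-12', 'host_ip': '10.0.0.12'}}
```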
  • the processing unit [302] may transmit, via the CNFLM node, to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host. This has been depicted by Step [414] of FIG. 4. Once the success response is received at the UI, the updated details related to the new host may be displayed, at the UI of the display unit [304], to the user.
  • Referring to FIG. 5, an exemplary flow diagram of a method [500] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated.
  • the method [500] is performed by the system [300]. Also, as shown in FIG. 5, the method [500] initiates at Step [502].
  • further, at Step [508], the method [500] comprises re-instantiating, by the processing unit [302] via the DSA node, the one or more CNFCs to the new host.
  • the method [500] comprises transmitting, by the processing unit [302] via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host.
  • the success response may indicate that the one or more CNFCs operationalised on the faulty host are now successfully operationalised on the selected new host.
  • the method [500] comprises transmitting, by the processing unit [302] via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
  • the PVIM may be responsible for managing both the physical resources such as, but not limited to, servers, storage devices, and other hardware resources, and the virtual resources such as, but not limited to, virtual machines, virtual networks, etc.
  • the UI [602] may display all details related to the plurality of hosts. Once the details related to the plurality of hosts are displayed, the user may select one or more faulty hosts from the plurality of hosts that may be replaced with one or more new hosts. Also, the details related to the one or more new hosts may be displayed on the UI [602]. Once the user selects the one or more new hosts to replace the one or more faulty hosts, the UI may send an instruction to the CNFLM [604] to replace the one or more faulty hosts with the selected one or more new hosts.
  • the CNFLM [604] captures the details of Vendors, CNFs and CNFCs via Create, Read, and Update APIs exposed by the CNFLM [604] service. The captured details are stored in an Elasticsearch database and can be further used by the DSA [606]. The CNFLM [604] is responsible for creating a CNF or individual CNFC instances. Also, it is responsible for healing and scaling out CNFs or individual CNFCs. The CNFLM [604] further transmits the instructions to the DSA [606] to re-instantiate the CNFC instances to the selected one or more new hosts.
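  • Because the passage states that the captured CNF/CNFC details are stored in an Elasticsearch database, a create/read interaction could look like the sketch below, using the official Elasticsearch Python client. The endpoint, index name, and document fields are assumptions for illustration.

```python
# Hedged sketch (elasticsearch-py 8.x style) of storing and reading a CNFC
# record, as the CNFLM's Create/Read/Update APIs might do internally.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed endpoint

doc = {
    "vendor": "example-vendor",  # illustrative values throughout
    "cnf_name": "amf",
    "cnf_id": "cnf-001",
    "cnfc_name": "amf-sbi",
    "cnfc_id": "cnfc-0042",
    "host_name": "host-12",
}
# Create/Update: index the record so it can later be read by the DSA.
es.index(index="cnfc-inventory", id=doc["cnfc_id"], document=doc)

# Read: fetch the record back by its ID.
record = es.get(index="cnfc-inventory", id="cnfc-0042")
print(record["_source"])
```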
  • the DSA [606] may be used for creating the containers on Docker Sites as a swarm service.
  • the CNFLM [604] sends the instruction with CNFC details to the DSA [606]. Every CNFC may be deployed on a different Docker site as per the instructions, with at least one replication.
  • a Docker Agent Manager (DAM) sends a response to the DSA [606] per CNF and then the DSA [606] sends a final response to the CNFLM [604].
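  • The passage above describes the DSA creating each CNFC on a Docker site as a swarm service with at least one replication. A minimal sketch with the Docker SDK for Python follows; the image, service name, and hostname constraint are illustrative, and a real DSA would derive them from the CNFLM's instructions.

```python
# Hedged sketch: create a CNFC as a replicated Docker swarm service pinned to
# the selected new host. Requires a swarm manager; names are illustrative.
import docker
from docker.types import ServiceMode

client = docker.from_env()

service = client.services.create(
    image="registry.local/amf-sbi:2.1.0",        # assumed CNFC image
    name="cnfc-amf-sbi",
    mode=ServiceMode("replicated", replicas=2),  # at least one replication
    constraints=["node.hostname==host-12"],      # pin to the selected new host
)
print(service.id)
```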
  • the CNFLM transmits the instructions to the PVIM [608] to update the inventory based on the re-instantiation of the CNFC instances on the selected one or more new hosts.
  • the PVIM [608] maintains the virtual inventory and limited physical inventory. It maintains relation between physical and virtual resources. Also, it describes physical and virtual resources with respect to different attributes using updates from external micro-services.
  • the PVIM [608] sends a response to the CNFLM [604] about the successful updating of the details related to the selected one or more new hosts.
  • the CNFLM [604] transmits a success response to the UI [602]. The details related to the updated one or more new hosts may be displayed at the UI [602] for the user.
  • the present disclosure further, discloses a non-transitory computer readable storage medium storing one or more instructions for managing a host for one or more container network function components (CNFCs).
  • the instructions include executable code which, when executed by one or more units of a system [300], causes a processing unit [302] of the system [300] to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs.
  • the executable code when executed causes the processing unit [302] to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to a new host.
  • the executable code when further executed causes the processing unit [302] to re-instantiate, via the DSA node, the one or more CNFCs to the new host.
  • the executable code when executed causes the processing unit [302] to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host.
  • the executable code when executed causes the processing unit [302] to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
  • the present disclosure provides a technically advanced solution for managing a host for one or more container network function components (CNFCs). More particularly, the present solution requires no manual intervention at the backend to re-instantiate CNFC instances. Further, the present solution keeps the inventory in sync by updating the inventory. Furthermore, the present solution provides an easy one-click operation for the user to re-instantiate the same CNFCs while keeping the inventory in sync.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a method and a system for managing a host for container network function components (CNFCs). The method comprises receiving, at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. The method further comprises transmitting an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. Further, the method comprises re-instantiating the one or more CNFCs to the new host. Further, the method comprises transmitting a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the method comprises transmitting, via the CNFLM node, to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.

Description

METHOD AND SYSTEM FOR MANAGING A HOST FOR CONTAINER NETWORK FUNCTION COMPONENTS
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of wireless communication. More particularly, the present disclosure relates to a method and a system for managing a host for container network function components.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] The 5G core networks are based on a service-based architecture (SBA) that is centred around network function (NF) services. In the said Service-Based Architecture (SBA), a set of interconnected Network Functions (NFs) deliver the control plane functionality and common data repositories of the 5G network, where each NF is authorized to access services of other NFs. Particularly, each NF can register itself and its supported services to a Network Repository Function (NRF), which is used by other NFs for the discovery of NF instances and their services. Further, the network functions may include, but are not limited to, a containerized network function (CNF) and a virtual network function (VNF).
[0005] The CNFs are a set of small, independent, and loosely coupled services such as microservices. These microservices work independently, which may increase speed and flexibility while reducing deployment risk. In 5G communication, a cloud-native 5G network offers the fully digitized architecture necessary for deploying new cloud services and taking full advantage of cloud-native 5G features such as edge computing, as well as network slicing and other services. The VNFs, in contrast, may run in virtual machines (VMs) on common virtualization infrastructure. The VNFs may be created on top of a network function virtualization infrastructure (NFVI), which may allocate resources like compute, storage, and networking efficiently among the VNFs.
[0006] In a communication network such as a 5G communication network, CNF and Containerized Network Function Component (CNFC) instances run on multiple hosts or servers for providing services in the network. There may be multiple CNF or CNFC instances running or working on a single host. When any host or server becomes faulty, it may be non-operational. Therefore, the CNFs or CNFCs instantiated on that faulty or non-operational host or server may also stop running or working. Further, all CNFCs need to be restarted manually after the commissioning of new hosts. Due to new host activation, there may be an inventory data mismatch, as new host commissioning leads to a change of host IPs, IDs, etc. Traditionally, for CNFs or CNFCs to become operational or active, the operations team has to intervene manually by logging in on the server and bringing up the CNF or CNFC services to resolve the problem. This process is not efficient and is a time-consuming task.
[0007] Hence, in view of these and other existing limitations, there arises an imperative need to provide an efficient solution to overcome the above-mentioned and other limitations which the present disclosure aims to disclose.
SUMMARY
[0008] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0009] An aspect of the present disclosure may relate to a method for managing a host for container network function components (CNFCs). The method comprises receiving, by a processing unit via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. The method further comprises transmitting, by the processing unit via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host. Further, the method comprises re-instantiating, by the processing unit via the DSA node, the one or more CNFCs to the new host. Further, the method comprises transmitting, by the processing unit via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the method comprises transmitting, by the processing unit via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
[0010] In an exemplary aspect of the present disclosure, the method further comprises transmitting, by the processing unit via the CNFLM node, and to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host.
[0011] In an exemplary aspect of the present disclosure, the method further comprises displaying, by the processing unit at the UI, a plurality of new hosts. Further, the method comprises receiving, by the processing unit via the UI, based on an input from a user, a selection of the new host from the plurality of new hosts, for re-instantiating the one or more CNFCs to the new host, wherein the instruction to the DSA node to re-instantiate the one or more CNFCs is based on the selection of the new host.
[0012] In an exemplary aspect of the present disclosure, the PVIM node is communicably coupled to a database, and wherein the method comprises updating, by the processing unit via the PVIM node, the database with the set of details related to the new host.
[0013] In an exemplary aspect of the present disclosure, the set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.
[0014] In an exemplary aspect of the present disclosure, the method further comprises displaying, by the processing unit via the CNFLM node, and at the UI, the set of details related to the new host.
[0015] Another aspect of the present disclosure may relate to a system for managing a host for one or more container network function components (CNFCs). The system comprises a processing unit configured to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. The processing unit is further configured to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to a new host. Further, the processing unit is configured to re-instantiate, via the DSA node, the one or more CNFCs to the new host. Further, the processing unit is configured to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the processing unit is configured to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
[0016] Yet another aspect of the present disclosure relates to a non-transitory computer readable storage medium storing one or more instructions for managing a host for one or more container network function components (CNFCs). The instructions include executable code which, when executed by one or more units of a system, causes a processing unit of the system to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. Further, the executable code when executed causes the processing unit to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to a new host. The executable code when further executed causes the processing unit to re-instantiate, via the DSA node, the one or more CNFCs to the new host. Further, the executable code when executed causes the processing unit to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the executable code when executed causes the processing unit to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
OBJECTS OF THE DISCLOSURE
[0017] Some of the objects of the present disclosure which at least one embodiment disclosed herein satisfies are listed herein below.
[0018] It is an object of the present disclosure to provide a system and method for managing a host for one or more container network function components (CNFCs).
[0019] It is another object of the present disclosure to provide a solution where no manual intervention at the backend is needed to re-instantiate CNFCs.
[0020] It is another object of the present disclosure to provide a solution to keep the inventory in sync by updating the inventory.
[0021] It is yet another object of the present disclosure to provide an optimal solution for the user to re-instantiate the same CNFCs and also keep the inventory in sync.
BRIEF DESCRIPTION OF DRAWINGS
[0022] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0023] FIG. 1 illustrates an exemplary block diagram representation of management and orchestration (MANO) architecture/ platform, in accordance with exemplary implementation of the present disclosure.
[0024] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with exemplary implementation of the present disclosure.
[0025] FIG. 3 illustrates an exemplary block diagram of a system for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure.
[0026] FIG. 4 illustrates an exemplary signalling flow diagram for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure.
[0027] FIG. 5 illustrates an exemplary flow diagram of a method for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure.
[0028] FIG. 6 illustrates an exemplary diagram of a system architecture for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure.
[0029] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0030] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0031] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0032] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0033] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0034] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0035] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0036] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from unit(s) which are required to implement the features of the present disclosure.
[0037] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0038] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, and includes the methods, functions, or procedures that may be called.
[0039] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0040] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0041] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for managing a host for one or more container network function components (CNFCs). More particularly, the present disclosure provides a solution where no manual intervention at the backend is needed to re-instantiate CNFC instances. Further, the present disclosure provides a solution to keep the inventory in sync by updating the inventory. Furthermore, the present disclosure provides an optimal solution for the user to re-instantiate the same CNFCs while keeping the inventory in sync.
[0042] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0043] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform [100], in accordance with exemplary implementation of the present disclosure. The MANO architecture [100] is developed for automatically managing telecom cloud infrastructure, managing design or deployment design, managing instantiation of network node(s)/ service(s), etc. The MANO architecture [100] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF). The system may comprise one or more components of the MANO architecture [100]. The MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendors' CNFs and VNFs to the platform.
[0044] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer, a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing the features of the present disclosure.
[0045] The NFV and SDN (NFVSDN) design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] is responsible for determining which sequence is to be followed for executing a process, for example, the sequence for execution of processes P1 and P2 in an AMF network function of the communication network (such as a 5G network). The VNF catalogue [1044] stores the metadata of all the VNFs (and also CNFs in some cases). The network services catalogue [1046] stores the information about the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/ network functions (NFs)) that must be applied to a specific networked data packet. The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] is used for the lifecycle management of the CNFs.
[0046] The platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070]. The microservices elastic load balancer [1062] is used for maintaining the load balancing of the requests for the services. The identity & access manager [1064] is used for login purposes. The command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] is responsible for keeping the logs of every service. These logs are generated by the MANO platform [100] and are used for debugging purposes. The event routing manager [1070] is responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[0047] The platforms core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF backup & upgrade manager [1102], a micro service auditor [1104], and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, metrics such as CPU utilization by the VNF. The assure manager [1084] is responsible for supervising the alarms the vendor is generating. The performance manager [1086] is responsible for managing the performance counters. The policy execution engine (PEEGN) [1088] is responsible for managing all the policies. The capacity monitoring manager (CMM) [1090] is responsible for sending the requests to the PEEGN [1088]. The release management (mgmt.) repository (RMR) [1092] is responsible for managing the releases and the images of all the vendor network nodes. The configuration manager & GCT [1094] manages the configuration and GCT of all the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It is further noted that the policy execution engine (PEEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] is a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs [1100] schedule tasks such as, but not limited to, triggering an event, traversing the network graph, etc. The VNF backup & upgrade manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances not instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100], thereby assuring that services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] is used for newer instances that are spawning.
[0048] The platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122], a generic decoder and indexer (XML, CSV, JSON) [1124], a docker service adaptor [1126], an API adapter [1128], and an NFV gateway [1130]. The platform external API adaptor and gateway [1122] is responsible for handling the external services (external to the MANO platform [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system in the XML, CSV, or JSON format. The docker service adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100] for communication. The API adapter [1128] is used to connect with virtual machines (VMs). The NFV gateway [1130] is responsible for providing the path to each service going to/ incoming from the MANO architecture [100].
[0049] Referring to FIG. 2, an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented, in accordance with exemplary implementation of the present disclosure, is shown. In an implementation, the computing device [200] may implement a method for managing a host for one or more container network function components (CNFCs) by utilising a system [300]. In another implementation, the computing device [200] itself implements the method for managing the host for the one or more CNFCs using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0050] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0051] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0052] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0053] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0054] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], a host [224], the local network [222] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0055] Referring to FIG. 3, an exemplary block diagram of a system [300] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated. In one example, the system [300] may be in communication with other network entities/ components known to a person skilled in the art. Such network entities/ components have not been depicted in FIG. 3 and have not been explained here for the sake of brevity.
[0056] Referring to FIG. 4, an exemplary signalling flow diagram [400] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated.
[0057] It may be noted that FIG. 3 and FIG. 4 have been explained simultaneously and may be read in conjunction with each other.
[0058] As depicted in FIG. 3, the system [300] comprises at least one processing unit [302] and at least one display unit [304]. Also, all of the components/ units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may reside in a server or the network entity, or the system [300] may be in communication with the network entity to implement the features as disclosed in the present disclosure.
[0059] The system [300] is configured for managing a host for one or more container network function components (CNFCs) with the help of the interconnection between the components/ units of the system [300]. The container network function (CNF) may be a network function that may be implemented within a containerized environment using technologies such as, but not limited to, a docker, and the like. The said container network functions that may be implemented within the containerized environment may be cloud-native network functions. The cloud-native network functions may not rely on dedicated hardware or virtual machines for implementation; they may be implemented within a container. Also, the containerization of the network function may make it possible to manage how and when the network function may run across a cluster in the environment. Furthermore, the container network function components (CNFCs) may include components such as, but not limited to, a container runtime, an operating system, a lifecycle management component, a storage component, etc.
[0060] Further, as would be understood, the host on which a CNF runs may be a physical or a virtual machine that may have all the necessary components such as, but not limited to, networking, storage and security. The host further ensures that CNFs may operate efficiently within a containerized environment or a cloud-native environment.
[0061] In operation, the processing unit [302] may receive, via a user interface (UI), at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. This has been depicted by Step [402] in FIG. 4.
[0062] As would be understood, the CNFLM node may manage the lifecycle of the container. The management of the lifecycle of the container is a crucial process, where the CNFLM node may oversee the creation, deployment and operation of the container until the container may be eventually decommissioned.
[0063] In an example, a user may receive all details related to a plurality of hosts at the UI. Once the user receives all the details related to the plurality of hosts, the user may select one or more faulty hosts from the plurality of hosts, at the UI, to be replaced with one or more new hosts. Further, the details related to the plurality of hosts may include, but are not limited to, a host name and a host internet protocol (IP) address. There are a variety of issues that could cause hosts or servers to go down or become faulty, including hardware failure, viruses, power outages, as well as natural or physical disasters like fires or floods. A host or server may also go down because of corrupted files or misconfigurations.
[0064] Continuing further, once the user selects the one or more faulty hosts from the plurality of hosts, the details related to the CNFCs on the one or more faulty hosts may be displayed on the UI. The details related to the CNFCs on the faulty hosts may include a CNF name, a CNF version, a CNF ID, a CNFC Name, a CNFC ID, and a container ID.
[0065] Continuing further, the display unit [304] may display, at the UI, a plurality of new hosts. The display unit [304] may further display, via the CNFLM node, and at the UI, the set of details relating to the new host. The details related to the new host may comprise one or more new host names and a new host IP address. The user may select a new host from the plurality of new hosts displayed at the UI of the display unit [304]. To select the new host, the user may select the new host name from the one or more new host names displayed at the UI of the display unit [304]. Once the user selects the new host name from the set of new host names, the new host IP address corresponding to the selected new host name may be displayed, at the UI, on the display unit [304].
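By way of illustration only, the host and CNFC details exchanged at the UI may be modelled as simple records. The following is a minimal sketch in Python; the class names and field values are hypothetical and are not part of the disclosed interface.

from dataclasses import dataclass

@dataclass
class HostDetails:
    # Details of a host as displayed at the UI: a host name and a host IP address.
    host_name: str
    host_ip: str

@dataclass
class CnfcDetails:
    # Details of a CNFC on a faulty host, as displayed at the UI.
    cnf_name: str
    cnf_version: str
    cnf_id: str
    cnfc_name: str
    cnfc_id: str
    container_id: str

# Illustrative values only; real values come from the inventory.
faulty_host = HostDetails(host_name="host-07", host_ip="10.20.30.7")
new_host = HostDetails(host_name="host-12", host_ip="10.20.30.12")
affected_cnfc = CnfcDetails(
    cnf_name="AMF", cnf_version="1.2.0", cnf_id="cnf-001",
    cnfc_name="amf-worker", cnfc_id="cnfc-101", container_id="c0ffee",
)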
[0066] Continuing further, the processing unit [302] may transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to the new host. This has been depicted by Step [404] in FIG. 4.
[0067] As would be understood, the DSA is a component of the system [300] that may have been designed to interface between the docker services and the other components of the system [300]. Further, as would be understood, instantiation may be a process to create an instance of the CNFCs and to make the CNFCs operational on the selected new host.
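For illustration, the instruction from the CNFLM node to the DSA node may be sketched as an HTTP request carrying the CNFC identifiers and the host details. The endpoint, field names, and values below are assumptions made for this example and do not reflect the actual interface.

import json
import urllib.request

# Hypothetical payload the CNFLM node might send to the DSA node.
instruction = {
    "operation": "re-instantiate",
    "cnfc_ids": ["cnfc-101"],
    "faulty_host": {"host_name": "host-07", "host_ip": "10.20.30.7"},
    "new_host": {"host_name": "host-12", "host_ip": "10.20.30.12"},
}

# Assumed endpoint; the DSA node would acknowledge with a success/failure response.
request = urllib.request.Request(
    "http://dsa.example.internal/v1/cnfc/re-instantiate",
    data=json.dumps(instruction).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))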
[0068] Continuing further, the processing unit [302] may re-instantiate, via the DSA node, the one or more CNFCs to the new host. This has been depicted by Step [406] in FIG. 4.
[0069] Furthermore, the processing unit [302] may transmit from the new host or the server, via the DSA node (shown as Step [408]), a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. This has been depicted by Step [410] of FIG. 4. The success response may indicate that the one or more CNFCs operationalised on the faulty host are now successfully operationalised on the selected new host.
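As one possible realisation of the re-instantiation step, a sketch using the Docker SDK for Python is given below. It assumes the CNFCs run as Docker Swarm services (consistent with the swarm-service deployment described later for the DSA) and that placement on the selected new host can be forced with a scheduling constraint; the image, service, and host names are illustrative.

import docker

def reinstantiate_cnfc(image: str, service_name: str, new_host_name: str) -> str:
    # Re-create a CNFC as a swarm service pinned to the selected new host.
    # The placement constraint tells the swarm scheduler to use only that host.
    client = docker.from_env()
    service = client.services.create(
        image=image,
        name=service_name,
        constraints=[f"node.hostname == {new_host_name}"],
        mode=docker.types.ServiceMode("replicated", replicas=1),
    )
    return service.id

# Illustrative call; the image and host name are assumptions.
# reinstantiate_cnfc("registry.example.internal/amf-worker:1.2.0",
#                    "amf-worker-1", "host-12")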
[0070] Thereafter, the processing unit [302] may transmit, via the CNFLM node, to a physical and virtual resource manager (PVIM) node, a set of details related to the new host. This has been depicted by Step [412] of FIG. 4.
[0071] As would be understood, the PVIM service maintains the virtual inventory, such as virtual machines, and limited physical inventory, such as servers. It maintains the relation between physical and virtual resources (with respect to the overlay). Also, it describes physical and virtual resources with respect to different attributes using updates from external micro-services, such as the CNFLM microservice or node.
[0072] Continuing further, in an implementation, the PVIM node is communicably coupled to a database. The processing unit [302] may update, via the PVIM node, the database with the set of details related to the new host. The database, communicably coupled with the PVIM, may store all the details related to the selected new host for the said one or more CNFCs. The set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.
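A minimal sketch of the inventory update at the PVIM node follows; the table schema and the use of SQLite are assumptions standing in for whatever database actually backs the PVIM node.

import sqlite3

def update_inventory(db_path: str, cnfc_id: str, host_name: str, host_ip: str) -> None:
    # Persist the new host name and IP address against the re-instantiated CNFC.
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "UPDATE cnfc_inventory SET host_name = ?, host_ip = ? WHERE cnfc_id = ?",
            (host_name, host_ip, cnfc_id),
        )
        conn.commit()
    finally:
        conn.close()

# update_inventory("pvim_inventory.db", "cnfc-101", "host-12", "10.20.30.12")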
[0073] Further, the processing unit [302] may transmit, via the CNFLM node, to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host. This has been depicted by Step [414] of FIG. 4. Once the success response is received at the UI, the updated details related to the new host may be displayed, at the UI of the display unit [304], to the user.
[0074] Referring to FIG. 5, an exemplary flow diagram of a method [500] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated. In an implementation, the method [500] is performed by the system [300]. Also, as shown in FIG. 5, the method [500] initiates at step [502].
[0075] At step [504], the method [500] comprises receiving, by a processing unit [302] via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs.
[0076] As would be understood, the CNFLM node may manage the lifecycle of the container. The management of the lifecycle of the container is a crucial process, where the CNFLM node may oversee the creation, deployment and operation of the container until the container may be eventually decommissioned.
[0077] In an example, a user may receive all details related to a plurality of hosts at the UI. Once the user receives all the details related to the plurality of hosts, the user may select one or more faulty hosts from the plurality of hosts, at the UI, to be replaced with one or more new hosts. There are a variety of issues that could cause hosts or servers to go down or become faulty, including hardware failure, viruses, power outages, as well as natural or physical disasters like fires or floods. A host or server may also go down because of corrupted files or misconfigurations. Further, the details related to the plurality of hosts may include, but are not limited to, a host name and a host internet protocol (IP) address.
[0078] Continuing further, once the user selects the one or more faulty hosts from the plurality of hosts, the details related to the CNFCs on the one or more faulty hosts may be displayed on the UI. The details related to the CNFCs on the faulty hosts may include a CNF name, a CNF version, a CNF ID, a CNFC Name, a CNFC ID, and a container ID.
[0079] Continuing further, the display unit [304] may display, at the UI, a plurality of new hosts. The display unit [304] may further display, via the CNFLM node, and at the UI, the set of details relating to the new host. The details related to the new host may comprise one or more new host names and a new host IP address. The user may select a new host from the plurality of new hosts displayed at the UI of the display unit [304]. To select the new host, the user may select the new host name from the one or more new host names displayed at the UI of the display unit [304]. Once the user selects the new host name from the set of new host names, the new host IP address corresponding to the selected new host name may be displayed, at the UI, on the display unit [304].
[0080] Next, at step [506], the method [500] comprises transmitting, by the processing unit [302] via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host.
[0081] As would be understood, the DSA is a component of the system [300] that may have been designed to interface between the docker services and the other components of the system [300]. Further, as would be understood, instantiation may be a process to create an instance of the CNFCs and to make the CNFCs operational on the selected new host.
[0082] Further, at step [508], the method [500] comprises re-instantiating, by the processing unit [302] via the DSA node, the one or more CNFCs to the new host. The processing unit [302] may receive, via the UI, based on the input from the user, the selection of the new host from the plurality of new hosts, for re-instantiating the one or more CNFCs to the new host. Further, the instruction to the DSA node to re-instantiate the one or more CNFCs is based on the selection of the new host. As would be understood, to re-instantiate the one or more CNFCs, the processing unit [302] may operationalise the said one or more CNFCs, that may have been operationalised on the faulty host, on the selected new host.
[0083] Further, at step [510], the method [500] comprises transmitting, by the processing unit [302] via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. The success response may indicate that the one or more CNFCs operationalised on the faulty host are now successfully operationalised on the selected new host.
[0084] Furthermore, at step [512], the method [500] comprises transmitting, by the processing unit [302] via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host. As would be understood, the PVIM may be responsible for managing both the physical resources such as, but not limited to, servers, storage devices, and other hardware resources, and the virtual resources such as, but not limited to, virtual machines, virtual networks, etc.
[0085] In an implementation, the PVIM node is communicably coupled to a database. The processing unit [302] may update, via the PVIM node, the database with the set of details related to the new host. The database, communicably coupled with the PVIM, may store all the details related to the selected new host for the said one or more CNFCs. The details related to the selected new host may comprise a new host name and a new host internet protocol (IP) address.
[0086] Moreover, the processing unit [302] may transmit, via the CNFLM node, and to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host. Once the success response is received at the UI, the updated details related to the new host may be displayed, at the UI of the display unit [304], to the user.
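Bringing the steps together, the following sketch mirrors steps [504] to [512] of the method [500]. The injected callables stand in for the CNFLM node's interfaces to the DSA node, the PVIM node and the UI; they are hypothetical and used only to show the ordering of the flow.

def replace_host_flow(cnfc_ids, new_host, dsa_reinstantiate, pvim_update, notify_ui):
    # Steps [506]-[508]: instruct the DSA node, which re-instantiates each CNFC.
    results = [dsa_reinstantiate(cnfc_id, new_host) for cnfc_id in cnfc_ids]
    # Step [510]: the DSA node reports success back to the CNFLM node.
    if all(results):
        # Step [512]: forward the new host details to the PVIM node, then the UI.
        pvim_update(cnfc_ids, new_host)
        notify_ui({"status": "success", "new_host": new_host})
    else:
        notify_ui({"status": "failure"})

# Example run with stubbed dependencies.
replace_host_flow(
    ["cnfc-101"],
    {"host_name": "host-12", "host_ip": "10.20.30.12"},
    dsa_reinstantiate=lambda cnfc_id, host: True,  # stubbed DSA call
    pvim_update=lambda ids, host: None,            # stubbed PVIM call
    notify_ui=print,                               # stand-in for the UI response
)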
[0087] Referring to FIG. 6, an exemplary diagram of a system architecture [600] for managing a host for one or more container network function components (CNFCs), in accordance with exemplary implementation of the present disclosure, is illustrated. The system architecture [600] comprises a User Interface (UI) [602], a Container Network Function Lifecycle Manager (CNFLM) [604], a Docker Service Adapter (DSA) [606], and a Physical and Virtual Resource Manager (PVIM) [608]. As shown in FIG. 6, all units of the system architecture [600] should be assumed to be connected to each other. Also, in FIG. 6 only a few units are shown; however, the system architecture [600] may comprise multiple such units or the system architecture [600] may comprise any such number of said units, as required to implement the features of the present disclosure.
[0088] The UI [602] may display all details related to a plurality of hosts. Once the details related to the plurality of hosts are displayed, the user may select one or more faulty hosts from the plurality of hosts that may be replaced with one or more new hosts. Also, the details related to the one or more new hosts may be displayed on the UI [602]. Once the user selects the one or more new hosts to replace the one or more faulty hosts, the UI may send an instruction to the CNFLM [604] to replace the one or more faulty hosts with the selected one or more new hosts.
[0089] The CNFLM [604] captures the details of vendors, CNFs and CNFCs via Create, Read, and Update APIs exposed by the CNFLM [604] service. The captured details are stored in an elastic search database and can be further used by the DSA [606]. The CNFLM [604] is responsible for creating a CNF or individual CNFC instances. Also, it is responsible for healing and scaling out CNFs or individual CNFCs. The CNFLM [604] further transmits the instructions to the DSA [606] to re-instantiate the CNFC instances to the selected one or more new hosts. The DSA [606] may be used for creating the containers on Docker sites as a swarm service.
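As an illustration of how the captured details may be stored, the sketch below indexes a CNFC record into Elasticsearch using the elasticsearch-py 8.x client; the cluster address, index name and document fields are assumptions and not the CNFLM's actual storage schema.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed cluster address

cnfc_record = {
    "vendor": "vendor-a",
    "cnf_name": "AMF",
    "cnf_version": "1.2.0",
    "cnfc_name": "amf-worker",
    "cnfc_id": "cnfc-101",
    "host_name": "host-12",
}
# Index the captured details so the DSA [606] can look them up later.
es.index(index="cnflm-cnfc-inventory", id=cnfc_record["cnfc_id"], document=cnfc_record)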
[0090] Continuing further, the CNFLM [604] sends the instruction with CNFC details to the DSA [606]. Every CNFC may be deployed on a different Docker site as per the instructions, with at least one replication. When a container runs successfully, a Docker Agent Manager (DAM) sends a response to the DSA [606] per CNF, and then the DSA [606] sends a final response to the CNFLM [604]. Once the CNFLM [604] receives the response from the DSA [606], the CNFLM transmits the instructions, to the PVIM [608], to update the inventory based on the re-instantiation of the CNFC instances on the selected one or more new hosts.
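The collapsing of the per-CNF responses from the DAM into a single final response at the DSA [606] may be sketched as follows; the response format is an assumption made for the example.

def aggregate_dam_responses(dam_responses):
    # The DSA [606] reports success to the CNFLM [604] only if every CNF
    # reported a successful container start; the "status" field is assumed.
    ok = all(r.get("status") == "success" for r in dam_responses)
    return {"status": "success" if ok else "failure", "per_cnf": dam_responses}

# Example: two CNFs, both re-instantiated successfully.
print(aggregate_dam_responses([
    {"cnf_id": "cnf-001", "status": "success"},
    {"cnf_id": "cnf-002", "status": "success"},
]))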
[0091] The PVIM [608] maintains the virtual inventory and limited physical inventory. It maintains the relation between physical and virtual resources. Also, it describes physical and virtual resources with respect to different attributes using updates from external micro-services. Once the inventory is updated, the PVIM [608] sends a response to the CNFLM [604] about the successful updating of the details related to the selected one or more new hosts.
[0092] Finally, the CNFLM [604] transmits a success response to the UI [602]. The details related to the updated one or more new hosts may be displayed at the UI [602] for the user.
[0093] The present disclosure further discloses a non-transitory computer readable storage medium storing one or more instructions for managing a host for one or more container network function components (CNFCs). The instructions include executable code which, when executed by one or more units of a system [300], causes a processing unit [302] of the system [300] to receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs. Further, the executable code when executed causes the processing unit [302] to transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to a new host. The executable code when further executed causes the processing unit [302] to re-instantiate, via the DSA node, the one or more CNFCs to the new host. Further, the executable code when executed causes the processing unit [302] to transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host. Furthermore, the executable code when executed causes the processing unit [302] to transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
[0094] As is evident from the above, the present disclosure provides a technically advanced solution for managing a host for one or more container network function components (CNFCs). More particularly, in the present solution, no manual intervention at the backend is needed to re-instantiate CNFC instances. Further, the present solution keeps the inventory in sync by updating the inventory. Furthermore, the present solution provides an easy one-click operation for the user to re-instantiate the same CNFCs while keeping the inventory in sync.
[0095] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
[0096] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.

Claims

We Claim:
1. A method for managing a host for one or more container network function components (CNFCs), the method comprising:
receiving, by a processing unit [302] via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs;
transmitting, by the processing unit [302] via the CNFLM node, an instruction to a docker service adapter (DSA) node to re-instantiate the one or more CNFCs to a new host;
re-instantiating, by the processing unit [302] via the DSA node, the one or more CNFCs to the new host;
transmitting, by the processing unit [302] via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host; and
transmitting, by the processing unit [302] via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
2. The method as claimed in claim 1, wherein the method comprises transmitting, by the processing unit [302] via the CNFLM node, and to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host.
3. The method as claimed in claim 1, wherein the method comprises:
displaying, by a display unit [304] at the UI, a plurality of new hosts; and
receiving, by the processing unit [302] via the UI, based on an input from a user, a selection of the new host from the plurality of new hosts, for re-instantiating the one or more CNFCs to the new host, wherein the instruction to the DSA node to re-instantiate the one or more CNFCs is based on the selection of the new host.
4. The method as claimed in claim 1, wherein the PVIM node is communicably coupled to a database, and wherein the method comprises updating, by the processing unit via the PVIM node, the database with the set of details related to the new host.
5. The method as claimed in claim 1, wherein the set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.
6. The method as claimed in claim 1, the method further comprising: displaying, by the display unit [304] via the CNFLM node, and at the UI, the set of details related to the new host.
7. A system for managing a host for one or more container network function components (CNFCs), the system comprising:
a processing unit [302] configured to:
o receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs;
o transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to a new host;
o re-instantiate, via the DSA node, the one or more CNFCs to the new host;
o transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host; and
o transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.
8. The system as claimed in claim 7, wherein the processing unit [302] is configured to transmit, via the CNFLM node, and to the UI, a success response indicative of replacement of the host and re-instantiation of the one or more CNFCs to the new host.
9. The system as claimed in claim 7, wherein the system further comprises:
a display unit [304] connected to at least the processing unit [302], the display unit [304] is configured to display, at the UI, a plurality of new hosts; and
the processing unit [302] configured to receive, via the UI, based on an input from a user, a selection of the new host from the plurality of new hosts, for re-instantiating the one or more CNFCs to the new host, wherein the instruction to the DSA node to re-instantiate the one or more CNFCs is based on the selection of the new host.
10. The system as claimed in claim 7, wherein the PVIM node is communicably coupled to a database, and wherein the processing unit [302] is configured to update, via the PVIM node, the database with the set of details related to the new host.
11. The system as claimed in claim 7, wherein the set of details related to the new host comprises a new host name and a new host internet protocol (IP) address.
12. The system as claimed in claim 7, wherein the display unit [304] is further configured to: display, via the CNFLM node, and at the UI, the set of details relating to the new host.
13. A non-transitory computer readable storage medium storing one or more instructions for managing a host for one or more container network function components (CNFCs), the storage medium comprising executable code which, when executed by one or more units of a system [300], causes a processing unit [302] of the system [300] to:
o receive, via a user interface (UI), and at a container network function lifecycle manager (CNFLM) node, a request for replacement of a host of the one or more CNFCs;
o transmit, via the CNFLM node, an instruction to a docker service adapter (DSA) node to instantiate the one or more CNFCs to a new host;
o re-instantiate, via the DSA node, the one or more CNFCs to the new host;
o transmit, via the DSA node, a success response to the CNFLM node, in response to the re-instantiation of the one or more CNFCs to the new host;
o transmit, via the CNFLM node, and to a physical and virtual resource manager (PVIM) node, a set of details related to the new host.