WO2025013048A1 - Method and system for updating parameters for one or more network nodes - Google Patents
- Publication number
- WO2025013048A1 (PCT/IN2024/051108)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- requests
- work order
- update
- network nodes
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0889—Techniques to speed-up the configuration process
Definitions
- Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to method and system for updating parameters for one or more network nodes.
- Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements.
- The first generation (1G) of wireless communication technology was based on analog technology and offered only voice services.
- 2G second-generation
- 3G third generation
- 3G marked the introduction of high-speed internet access, mobile video calling, and location-based services.
- 4G The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security.
- 5G fifth generation
- wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
- NMS network management system
- NBI northbound interface
- All the changes are made in separate work orders.
- the user might need to access a user interface separately and make changes for each particular node. This process may consume significant time and effort for the user. Further, raising separate work orders for each change in the node(s) may also lead to high consumption of network resources.
- An aspect of the present disclosure may relate to a method for updating parameters for one or more network nodes.
- the method includes receiving, by a transceiver unit at a network management system (NMS), from an interface, a set of requests comprising one or more update parameters for the one or more network nodes.
- the method further includes validating, by a validation unit at the NMS, each request from the set of requests.
- the method includes adding, by the validation unit at the NMS, the validated set of requests in a queue maintained in an input-output (IO) cache.
- the method further encompasses running, by a scheduler unit at the NMS, a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
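- By way of a non-limiting illustration only (the following code does not form part of the disclosure), the receive-validate-enqueue portion of the method described above may be sketched in Python as follows, where all names such as `IOCache`, `validate`, and `receive_and_enqueue` are hypothetical:

```python
# Hypothetical sketch: a transceiver path receives requests, a
# validation step keeps only well-formed ones, and validated requests
# are queued in an IO cache grouped by work order identity.
from collections import defaultdict, deque


class IOCache:
    """Queue of validated requests, grouped by work order identity."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def add(self, request):
        self.queues[request["work_order_id"]].append(request)

    def pop_work_order(self, work_order_id):
        # Remove and return all queued requests for one work order.
        return list(self.queues.pop(work_order_id, []))


def validate(request):
    # Minimal schema check: every required field must be present.
    required = {"work_order_id", "node_id", "parameter", "value"}
    return required.issubset(request)


def receive_and_enqueue(requests, cache):
    """Enqueue only the requests that pass validation."""
    accepted = []
    for req in requests:
        if validate(req):
            cache.add(req)
            accepted.append(req)
    return accepted
```

A scheduler job run at a configured interval would then drain the cache per work order identity via `pop_work_order`.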
- the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified NF instances of the one or more network nodes.
- the all-update requests are configured to update each NF instance of the one or more network nodes.
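- The distinction between the two request types above may be illustrated, purely as a non-limiting sketch outside the disclosure, by a hypothetical target-resolution helper (`resolve_targets` and the request fields shown are illustrative names):

```python
# Hypothetical sketch: a multi-update request names specific NF
# instances, while an all-update request targets every NF instance
# of the one or more network nodes.
def resolve_targets(request, all_nf_instances):
    """Return the NF instances the request should update."""
    if request["type"] == "multi-update":
        wanted = set(request["nf_instances"])
        # Preserve the node's instance ordering.
        return [nf for nf in all_nf_instances if nf in wanted]
    if request["type"] == "all-update":
        return list(all_nf_instances)
    raise ValueError("unknown request type: %s" % request["type"])
```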
- the set of requests is received from the interface in response to a polling by the transceiver unit at the NMS.
- each request from the set of requests is associated with a work order identity.
- validating each request from the set of requests comprises validating a schema of configuration data associated with each request.
- each request from the validated set of requests added in the queue is grouped based on the work order identity.
- updating the one or more network nodes with the one or more update parameters, by the scheduler unit comprises checking, by the scheduler unit, one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache.
- the method further comprises sending, by the scheduler unit, the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests.
- the method comprises sending, by the scheduler unit, a response to the interface.
- the method further comprises removing, by an analysis unit, the first work order identity associated with the first subset of requests from the queue maintained in the IO cache after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
- the method further comprises sending, by the scheduler unit, an update response for each of the one or more network nodes, to a database, wherein the database stores status associated with each of the one or more network nodes. Furthermore, the method includes updating, by a processing unit, at the NMS, the status associated with each of the one or more network nodes in the database, with the update response for each of the one or more network nodes.
- the method further includes receiving, by the transceiver unit, an abort request for a second work order identity associated with a second subset of requests from the set of requests. Furthermore, the method includes checking, by the processing unit, one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache. The method further encompasses removing, by the processing unit, the second work order identity associated with the second subset of requests from the IO cache, in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache. The method further includes sending, by the transceiver unit, an aborted response to the interface.
- the method further includes sending, by the transceiver unit, at the NMS, to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache.
- the abort request for the second work order identity associated with the second subset of requests is received in response to a polling, by the transceiver unit, at the NMS.
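- The abort handling described above (remove the work order identity if it is still queued, otherwise report failure) may be sketched, as a non-limiting illustration outside the disclosure, with a hypothetical `handle_abort` helper:

```python
# Hypothetical sketch of the abort-request flow: if the work order
# identity is still present in the IO cache its queued requests are
# removed and an aborted response is returned; if it is absent, a
# failure response is returned instead.
def handle_abort(io_cache, work_order_id):
    """io_cache: dict mapping work order identity -> queued requests."""
    if work_order_id in io_cache:
        del io_cache[work_order_id]
        return "aborted"
    return "failure"
```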
- the network management system includes a transceiver unit configured to receive a set of requests comprising one or more update parameters for the one or more network nodes.
- the network management system further includes a validation unit connected to at least the transceiver unit.
- the validation unit is configured to validate, each request from the set of requests.
- the validation unit is further configured to add, the validated set of requests in a queue maintained in an input-output (IO) cache.
- the network management system further includes a scheduler unit, connected to at least the analysis unit, the scheduler unit is configured to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
- Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for updating parameters for one or more network nodes, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit of the system to receive a set of requests comprising one or more update parameters for the one or more network nodes.
- the instructions include executable code which, when executed, causes a validation unit of the system to validate each request from the set of requests and the validation unit to add the validated set of requests in a queue maintained in an input-output (IO) cache.
- the instructions include executable code which, when executed, causes a scheduler unit to run a scheduler job at a configured interval for updating the one or more network nodes with the one or more update parameters.
- FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
- 5GC 5th generation core
- FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
- FIG. 3 illustrates an exemplary block diagram of a system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
- FIG. 4 illustrates a method flow diagram for updating parameters for one or more network nodes in accordance with exemplary implementations of the present disclosure.
- FIG. 5 illustrates an exemplary implementation of the system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
- FIG. 6 illustrates an exemplary representation of the process for updating parameters for one or more network nodes, in accordance with exemplary embodiments of the present disclosure.
- the word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples.
- any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
- a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
- a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
- the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
- a user equipment may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure.
- the user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure.
- the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
- “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
- a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
- the storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
- interface refers to a shared boundary across which two or more separate components of a system exchange information or data.
- the interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
- All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
- DSP digital signal processor
- ASIC Application Specific Integrated Circuits
- FPGA Field Programmable Gate Array circuits
- the transceiver unit include at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
- the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for updating parameters for one or more network nodes.
- FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture [100], in accordance with exemplary implementation of the present disclosure.
- the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other unless otherwise indicated.
- the Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
- the Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
- the Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
- UPF User Plane Function
- the Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
- the Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
- the Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
- NSSAAF Network Slice Specific Authentication and Authorization Function
- the Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
- the Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
- the Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
- the Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
- the Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
- the Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
- the User Plane Function [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
- the Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system.
- the data services may include but are not limited to Internet services, private data network related services.
- FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
- the computing device [200] may also implement a method for updating parameters for one or more network nodes utilising the system.
- the computing device [200] itself implements the method for updating parameters for one or more network nodes using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
- the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information.
- the hardware processor [204] may be, for example, a general-purpose microprocessor.
- the computing device [200] may also include a main memory [206], such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
- the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
- the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
- ROM read only memory
- a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
- the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user.
- An input device [214], including alphanumeric and other keys, touch screen input means, etc., is coupled to the bus [202] for communicating information and command selections to the processor [204].
- Another type of user input device is a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
- the input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
- the computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
- the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
- hard-wired circuitry may be used in place of or in combination with software instructions.
- the computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222].
- the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
- the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
- LAN local area network
- Wireless links may also be implemented.
- the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- the computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218].
- a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the host [224], the local network [222] and the communication interface [218].
- the received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
- the present disclosure is implemented by a system [300] (as shown in FIG. 3).
- the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4).
- Referring to FIG. 3, an exemplary block diagram of a system [300] for updating parameters for one or more network nodes is shown, in accordance with the exemplary implementations of the present disclosure.
- the system [300] comprises at least one transceiver unit [302], at least one validation unit [304], at least one scheduler unit [306], at least one analysis unit [308], at least one processing unit [310], and at least one database [312]. All the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any number of said units, as required to implement the features of the present disclosure.
- the system [300] may be present in a user device to implement the features of the present disclosure.
- the system [300] may be a part of the user device, or may be independent of but in communication with the user device (which may also be referred to herein as a UE).
- the system [300] may reside in a server or a network entity.
- the system [300] may reside partly in the server/ network entity and partly in the user device.
- the system [300] is configured for updating parameters for one or more network nodes, with the help of the interconnection between the components/units of the system [300].
- the system [300] includes a network management system (NMS) [320].
- the NMS [320] includes the transceiver unit [302].
- the transceiver unit [302] is configured to receive a set of requests comprising one or more update parameters for the one or more network nodes.
- the one or more update parameters may include an internet protocol address, a Quality of Service (QoS) setting, a timer, a host or port, a log level, a context, auto synchronize, throttle, refresh, default paging DRX, a slice parameter, a download data split primary path, a threshold, and the like.
- the one or more update parameters may further include a value associated with each of the one or more update parameters.
- the value may be of any type such as a Boolean type, a string type, an integer type, a float type, and the like.
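- As a non-limiting illustration only (not part of the disclosure), checking that a parameter value matches one of the declared types above could look as follows; `TYPE_MAP` and `value_matches_type` are hypothetical names:

```python
# Hypothetical sketch: verify an update parameter's value against its
# declared type (Boolean, string, integer, or float).
TYPE_MAP = {
    "boolean": bool,
    "string": str,
    "integer": int,
    "float": float,
}


def value_matches_type(value, declared_type):
    expected = TYPE_MAP.get(declared_type)
    if expected is None:
        return False
    # bool is a subclass of int in Python, so reject it for "integer".
    if expected is int and isinstance(value, bool):
        return False
    return isinstance(value, expected)
```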
- the set of requests received at the NMS [320] by the transceiver unit [302] may include a SAP identifier, a node identifier, a parameter name, the value of the one or more update parameters, and the like.
- the set of requests comprises at least multi-update requests and all-update requests.
- the multi-update requests may be configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes.
- the NF instances of the one or more network nodes refer to instances of the one or more nodes.
- the NF instances are configured to perform a specific operation in the one or more network nodes.
- the all-update requests may be configured to update each NF instance of the one or more network nodes in a circle.
- the circle refers to a predefined geographical area, a pre-defined location, a tracking area code (TAC), a cell identity, and the like.
- TAC tracking area code
- the set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320].
- Each request of the set of requests is associated with a work order identity.
- the work order identity is a unique identifier which may be allotted to a request for a work order or task in the telecommunication network. The work order identity may help in managing and tracking the request for the work order or task.
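- The disclosure does not prescribe any particular format for the work order identity; purely as a hypothetical illustration, a unique identifier could be allotted per request like this:

```python
# Hypothetical sketch: allot a unique work order identity to each
# request. The "WO" prefix and UUID format are illustrative only.
import uuid


def allot_work_order_id(prefix="WO"):
    return "%s-%s" % (prefix, uuid.uuid4().hex)
```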
- the NMS [320] may support at least two types of requests for updating the one or more network nodes: the multi-update request and the all-update request.
- the multi-update request may update at least one network node parameter from the list of specified network nodes.
- the all-update request may update every parameter from the list of network nodes.
- the transceiver unit [302] is further configured to perform the polling at the NMS [320]. The polling refers to a communication mechanism where the transceiver unit [302] may repeatedly send requests to the NMS [320] at fixed intervals to check for updates.
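- The fixed-interval polling described above may be sketched, as a non-limiting illustration outside the disclosure, with the clock injected so the loop is testable (`poll` and `fetch_updates` are hypothetical names):

```python
# Hypothetical sketch: repeatedly fetch updates at a fixed interval.
# The sleep function is injectable so the loop can run without delay
# in tests.
import time


def poll(fetch_updates, interval_seconds, max_polls, sleep=time.sleep):
    """Call fetch_updates() max_polls times, pausing between calls."""
    collected = []
    for i in range(max_polls):
        collected.extend(fetch_updates())
        if i < max_polls - 1:
            sleep(interval_seconds)
    return collected
```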
- the NMS [320] further includes the validation unit [304] connected to at least the transceiver unit [302]. The validation unit [304] is configured to validate each request from the set of requests.
- the validation unit [304] is further configured to add the validated set of requests to a queue maintained in an input-output (IO) cache [504].
- the validation unit [304] is further configured to validate a format associated with each request from the set of requests. Each request from the validated set of requests added in the queue is grouped based on the work order identity.
- the validation unit [304] checks whether the set of requests is valid. If the set of requests is not valid, the transceiver unit [302] sends a failure response to the user. If the set of requests is valid, the set of requests is inserted in the IO cache [504]. The set of requests is maintained in a queue in the IO cache [504] with the work order identity.
- the IO cache [504] refers to a customized cache that stores data temporarily, enhancing the performance of the NMS [320]. The IO cache [504] may reduce latency by storing the set of requests temporarily.
- the NMS [320] further includes the scheduler unit [306], connected to at least the analysis unit [308]. The scheduler unit [306] is configured to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
- updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306], comprises checking, by the scheduler unit [306], one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504].
- updating the one or more network nodes with the one or more update parameters further includes sending, by the scheduler unit [306], the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests.
- updating the one or more network nodes with the one or more update parameters further includes sending, by the scheduler unit [306], a response to the interface.
- the configured interval may be determined by a user or the NMS [320]. In an embodiment of the present disclosure, the configured interval may be changed in every session.
- the scheduler unit [306] of the NMS [320] may run the scheduler job at the configured intervals. For instance, the scheduler unit [306] may run the scheduler job every 5 minutes, as defined by the user.
- the scheduler unit [306] may check for the first work order identities present in the IO cache [504]. If the scheduler unit [306] does not find any queued work order identity, the scheduler unit [306] may assume that no set of requests was initiated and may not initiate any action.
- the scheduler unit [306] may send the one or more parameter update requests to the node [506] in batches.
- the batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the NMS [320] can handle sending 100 updates at a time, it may group the updates into batches of 100.
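- The batching example above (groups of 100) may be sketched, as a non-limiting illustration outside the disclosure, with a hypothetical `into_batches` helper:

```python
# Hypothetical sketch: split a work order's requests into fixed-size
# batches, e.g. 100 updates per batch.
def into_batches(requests, batch_size=100):
    return [requests[i:i + batch_size]
            for i in range(0, len(requests), batch_size)]
```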
- the analysis unit [308] is further configured to remove, the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
- the analysis unit [308] may receive an acknowledgment from the one or more network nodes to confirm that the first subset of requests is updated.
- the analysis unit [308] may access the queue maintained in the IO cache [504], search it for the work order identity that must be removed, and remove that work order identity from the queue.
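The removal step described above can be sketched as follows, assuming the queue is modelled as a mapping from work order identities to their pending requests; the helper name is hypothetical.

```python
def on_acknowledgment(queue, work_order_id):
    """After the nodes acknowledge that a work order's requests were applied,
    remove its work order identity from the queue.

    `queue` stands in for the queue maintained in the IO cache, keyed by
    work order identity. Returns True if an entry was removed.
    """
    if work_order_id in queue:
        del queue[work_order_id]
        return True
    return False

queue = {"WO-5": ["req1", "req2"], "WO-6": ["req3"]}
print(on_acknowledgment(queue, "WO-5"))  # -> True
print(sorted(queue))                     # -> ['WO-6']
```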
- the system [300] further includes the scheduler unit [306], to send an update response for each of the one or more network nodes, to the database [312].
- the database [312] stores status associated with each of the one or more network nodes.
- the system [300] further includes the processing unit [310], at the NMS [320], to update the status associated with each of the one or more network nodes in the database [312], with the update response for each of the one or more network nodes. For instance, suppose there is an update in the QoS parameters of the one or more network nodes.
- the scheduler unit [306] may send the update to the database [312] to store the updated QoS parameters of the one or more network nodes.
- the database [312] may update the QoS parameters of the one or more network nodes accordingly.
- the system [300] is configured to receive, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests.
- the abort request may be sent to the transceiver unit [302] if the set of requests comprising one or more update parameters may disrupt the services of the one or more network nodes. For instance, the value in the one or more update parameters is very high and the one or more network nodes may not be able to handle the value of the update parameter.
- the system is further configured to check, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504].
- the processing unit [310] is further configured to remove the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504].
- the transceiver unit [302] is further configured to send an aborted response to the interface.
- the abort request for the second work order identity associated with the second subset of requests is received in response to a polling, by the transceiver unit [302], at the NMS [320].
- the system [300] further includes the transceiver unit [302], at the NMS [320], to send to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
- the NMS [320] may check for the second work order identity in the IO cache [504], which may be received in the abort request.
- the processing unit [310] may further insert the set of requests in the IO cache [504]. If the second work order identity is found in the IO Cache [504], the second work order identity is removed from the IO queue and the 'aborted response' is sent by the transceiver unit [302]. In case the work order identity is missing in the IO Cache [504], it is assumed that the work order identity has already been executed, and a failure response is sent by the transceiver unit [302] to the interface.
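The abort flow described above (remove the entry and report an 'aborted response' when the work order identity is present; report a failure response when it is absent) can be sketched as a minimal helper. The IO cache is modelled here as a plain dictionary, which is a simplifying assumption for illustration only.

```python
def handle_abort(io_cache, work_order_id):
    """Sketch of abort handling: if the work order identity is present in
    the cache, remove it and report 'aborted'; if absent, it is assumed to
    have already executed and a failure response is reported instead."""
    if work_order_id in io_cache:
        del io_cache[work_order_id]
        return "aborted response"
    return "failure response"

io_cache = {"WO-9": ["pending update"]}
print(handle_abort(io_cache, "WO-9"))  # -> aborted response
print(handle_abort(io_cache, "WO-9"))  # -> failure response
```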
- Referring to FIG. 4, an exemplary method flow diagram [400] for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure, is shown.
- the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
- the method [400] comprises receiving, by a transceiver unit [302] at a network management system (NMS) [320], from an interface, a set of requests comprising one or more update parameters for the one or more network nodes.
- the one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host or port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like.
- the one or more update parameters may further include a value associated with the one or more associated update parameters. The value may be in a Boolean type, a string type, an integer type, a float type, and the like.
- the set of requests sent at the NMS [320] by the transceiver unit [302] may include an SAP identifier, a node identifier, a parameter name, and the value of the one or more update parameters.
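The request fields listed above (SAP identifier, node identifier, parameter name, and value) can be sketched as a simple data structure. The field names used here are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class UpdateRequest:
    """Hypothetical representation of a single parameter update request;
    field names are illustrative, not from the disclosure."""
    work_order_id: str                   # unique identity for the work order
    sap_id: str                          # SAP identifier
    node_id: str                         # target network node
    parameter_name: str                  # e.g. "log_level", "timer"
    value: Union[bool, str, int, float]  # Boolean, string, integer, or float

req = UpdateRequest("WO-001", "SAP-9", "NODE-42", "log_level", "DEBUG")
print(req.parameter_name)  # -> log_level
```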
- the set of requests comprises at least multi-update requests and all-update requests.
- the at least multi-update requests may be configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes.
- the NF instances of the one or more network nodes refer to instances of the one or more nodes.
- the NF instances are configured to perform a specific operation in the one or more network nodes.
- the at least all-update requests may be configured to update each NF instance of the one or more network nodes in a circle.
- the circle refers to a pre-defined geographical area, a pre-defined location, a tracking area code (TAC), a cell identity, and the like.
- the set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320].
- Each of the request from the set of requests is associated with a work order identity.
- the work order identity is a unique identifier which may be allotted to a request for a work order or task in the telecommunication network. The work order identity may help in managing and tracking the request for the work order or task.
- the NMS [320] may support at least two types of requests for updating the at least one or more network nodes: the multi-update request and the all-update request.
- the multi-update request may update the at least one network node parameter from the list of specified network nodes.
- the all-update request may update every parameter from the list of network nodes.
- the polling refers to a communication where the transceiver unit [302] may repeatedly send requests to the NMS [320] at fixed intervals to check for updates.
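A minimal sketch of such fixed-interval polling is shown below, with a stand-in fetch function in place of the actual interface; all names here are hypothetical.

```python
import time

def poll_for_requests(fetch, interval_s=1.0, max_polls=3):
    """Repeatedly call `fetch` at a fixed interval to check for updates,
    collecting any requests returned. `fetch` stands in for the transceiver
    unit's poll of the interface and returns a (possibly empty) list."""
    collected = []
    for _ in range(max_polls):
        collected.extend(fetch())
        time.sleep(interval_s)
    return collected

# Toy fetch function: yields one request on the second poll only.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return [{"work_order_id": "WO-7"}] if calls["n"] == 2 else []

result = poll_for_requests(fake_fetch, interval_s=0.01)
print(result)  # -> [{'work_order_id': 'WO-7'}]
```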
- the method [400] comprises validating, by a validation unit [304] at the NMS [320], each of the request from the set of requests.
- the validation unit [304] is further configured to validate a format associated with each of the request from the set of requests.
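One way the format validation above could look, assuming a request carries the fields mentioned earlier (work order identity, node identifier, parameter name, and value) and a value of one of the supported types (Boolean, string, integer, or float); the field names are illustrative assumptions.

```python
def validate_request_format(request):
    """Minimal illustrative format check for a single request: required
    fields present and the value of a supported type. Field names are
    assumptions, not taken from the disclosure."""
    required = ("work_order_id", "node_id", "parameter_name", "value")
    if not all(key in request for key in required):
        return False
    # bool is a subclass of int in Python, but it is a supported type anyway.
    return isinstance(request["value"], (bool, str, int, float))

ok = validate_request_format(
    {"work_order_id": "WO-2", "node_id": "NODE-1",
     "parameter_name": "timer", "value": 30})
bad = validate_request_format({"node_id": "NODE-1"})
print(ok, bad)  # -> True False
```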
- Each of the request from the validated set of requests added in the queue, is grouped based on the work order identity.
- the method [400] encompasses adding, by the validation unit [304] at the NMS [320], the validated set of requests in a queue maintained in an input-output (IO) cache [504].
- the set of requests are checked by the validation unit [304] to determine whether they are valid or not. If the set of requests are not valid requests, a failure response is sent by the transceiver unit [302]. If the set of requests are valid requests, the set of requests are inserted in the IO cache [504]. The set of requests are maintained in a queue in the IO Cache [504] with the work order identity.
- the IO cache [504] refers to a customized cache that stores data temporarily, enhancing the performance of the NMS [320]. The IO cache [504] may reduce latency by storing the set of requests temporarily.
- the method [400] encompasses running, by a scheduler unit [306], at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
- the updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306], includes checking, by the scheduler unit [306], one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504].
- the updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306] further includes sending, by the scheduler unit [306], the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests.
- the updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306] includes sending, by the scheduler unit [306], a response to the interface.
- the method further includes removing, by an analysis unit [308], the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
- the configured interval may be determined by a user or the NMS [320]. In an embodiment of the present disclosure, the configured interval may be changed in every session.
- the scheduler job at the configured intervals may be run by the scheduler unit [306] of the NMS [320]. For instance, the scheduler unit [306] may run the scheduler job after every 5 minutes, as defined by the user.
- the presence of the first work order identities in the IO Cache [504] may be checked by the scheduler unit [306]. If any queued work order identity is not found by the scheduler unit [306], the scheduler unit [306] may assume that no set of requests was initiated and may not initiate any action.
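A single run of the scheduler job described above can be sketched as follows: if no queued work order identity is found, no action is taken; otherwise each work order's requests are sent in batches. Modelling the cache as a dictionary and the node dispatch as a callback are simplifying assumptions for illustration.

```python
def run_scheduler_job(cache, send_batch, batch_size=100):
    """One scheduler run. `cache` maps work order identities to lists of
    pending requests; `send_batch(work_order_id, batch)` stands in for
    dispatch to the nodes. Returns the number of batches sent."""
    if not cache:
        return 0  # no queued work order identity: take no action
    sent = 0
    for wo_id in list(cache):
        requests = cache.pop(wo_id)  # remove the identity once dispatched
        for i in range(0, len(requests), batch_size):
            send_batch(wo_id, requests[i:i + batch_size])
            sent += 1
    return sent

sent_batches = []
cache = {"WO-3": [{"node": n} for n in range(150)]}
n = run_scheduler_job(cache, lambda wo, b: sent_batches.append((wo, len(b))))
print(n, sent_batches)  # -> 2 [('WO-3', 100), ('WO-3', 50)]
```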
- the scheduler unit [306] may send the one or more parameter update requests to the node [506] in batches.
- the batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the NMS [320] can handle sending 100 updates at a time, it may group the updates into batches of 100.
- the method [400] further comprises sending, by the scheduler unit [306], an update response for each of the one or more network nodes, to a database [312].
- the database [312] stores status associated with each of the one or more network nodes.
- the method [400] includes updating, by a processing unit [310], at the NMS [320], the status associated with each of the one or more network nodes in the database [312], with the update response for each of the one or more network nodes. For instance, suppose there is an update in the QoS parameters of the one or more network nodes.
- the scheduler unit [306] may send the update to the database [312] to store the updated QoS parameters of the one or more network nodes.
- the database [312] may update the QoS parameters of the one or more network nodes accordingly.
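The status update described above can be sketched as a simple merge of update responses into the stored per-node status; modelling the database [312] as a dictionary is an assumption for illustration only.

```python
def apply_update_responses(db_status, responses):
    """Update the stored status of each network node with the update
    response received for it. `db_status` stands in for the database [312];
    nodes without a response keep their previous status."""
    for node_id, response in responses.items():
        db_status[node_id] = response
    return db_status

db = {"NODE-1": "pending", "NODE-2": "pending"}
apply_update_responses(db, {"NODE-1": "updated"})
print(db["NODE-1"])  # -> updated
```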
- an acknowledgment from the one or more network nodes to confirm that the first subset of requests is updated may be received by the analysis unit [308].
- the analysis unit [308] may access the queue maintained in the IO cache [504], search it for the work order identity that must be removed, and remove that work order identity from the queue.
- the method [400] further comprises receiving, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests.
- the abort request may be sent to the transceiver unit [302] if the set of requests comprising one or more update parameters may disrupt the services of the one or more network nodes. For instance, the value in the one or more update parameters is very high and the one or more network nodes may not be able to handle the value of the update parameter.
- the method [400] encompasses checking, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. Furthermore, the method encompasses removing, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]. The method [400] further includes sending, by the transceiver unit [302], the aborted response to the interface. The abort request for the second work order identity associated with the second subset of requests is received in response to a polling, by the transceiver unit [302], at the NMS [320].
- on receiving the abort request for a second work order identity in a running request, the NMS [320] may check for the second work order identity in the IO cache [504].
- the processing unit [310] may further insert the set of requests in the IO cache [504]. If the second work order identity is found in the IO Cache [504], the second work order identity is removed from the IO queue and the 'aborted response' is sent by the transceiver unit [302].
- the method [400] further comprises sending, by the transceiver unit [302], at the NMS [320], to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
- in case the work order identity is missing in the IO Cache [504], it is assumed that the work order identity has already been executed, and a failure response is sent by the transceiver unit [302].
- FIG. 5 illustrates an exemplary implementation of the system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
- the system [500] comprises at least one Configuration Management System (CMS) [502], at least one IO Cache [504], at least one north bound interface (NBI) [508], at least one Node [506], and the database [312].
- the system [500] is configured for updating parameters for the one or more nodes.
- the CMS [502] may send the set of requests for updating parameters for the one or more network nodes at the node [506].
- the one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host or port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like.
- the node [506] may further send the update response for each of the one or more network nodes to the database [312].
- the database [312] may store the status associated with each of the one or more network nodes.
- the database [312] may further send the update response to a Northbound Interface (NBI) [508].
- the NBI [508] is an output-oriented interface which may be configured to send outputs to the user.
- the NBI [508] may poll for the set of requests for updating parameters to the IO Cache [504]. Polling refers to a communication where the CMS [502] may repeatedly send requests to the IO Cache [504].
- the NMS [320] supports two types of requests for updating parameters: a) Multi type - updates requested parameters on the list of specified network function (NF) instances. b) All type - updates requested parameters for all the NF instances in the circle.
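The two request types above can be sketched as a small target-resolution helper: 'multi' updates only the specified NF instances, while 'all' updates every NF instance in the circle. The names and signature used here are illustrative assumptions.

```python
def resolve_targets(request_type, nf_instances_in_circle, specified_instances=None):
    """Pick the NF instances to update for a request.

    'multi' targets only the specified list; 'all' targets every NF
    instance in the circle. Anything else is rejected.
    """
    if request_type == "multi":
        wanted = set(specified_instances or [])
        return [nf for nf in nf_instances_in_circle if nf in wanted]
    if request_type == "all":
        return list(nf_instances_in_circle)
    raise ValueError(f"unsupported request type: {request_type}")

circle = ["NF-1", "NF-2", "NF-3"]
print(resolve_targets("multi", circle, ["NF-2"]))  # -> ['NF-2']
print(resolve_targets("all", circle))              # -> ['NF-1', 'NF-2', 'NF-3']
```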
- the circle refers to a pre-defined geographical area, a pre-defined location, a pre-defined tracking area code (TAC), a pre-defined cell identity, and the like.
- the IO cache [504] may validate each request from the set of requests. Each request from the validated set of requests added in the queue, is grouped based on the work order identity.
- the CMS [502] may check for one of the presence and the absence of the first work order identity associated with the first subset of requests from the set of requests.
- the IO Cache [504] may further insert the validated set of requests in the queue maintained in the IO cache [504].
- the updating parameters in the one or more network nodes further includes the CMS [502] to send the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests.
- the CMS [502] may further send a response to the NBI [508] regarding the updating of the one or more nodes.
- if no queued work order identity is found by the CMS [502], the CMS [502] may assume that no set of requests was initiated and may not initiate any action. If the queued work order identity is found by the CMS [502], the CMS [502] may send the one or more parameter update requests to the node [506] in batches.
- the batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the node [506] can handle 100 updates at a time, the updates may be grouped into batches of 100.
- the NBI [508] may send the abort request for the second work order identity associated with the second subset of requests from the set of requests to the CMS [502].
- the abort request may be sent if the set of requests comprising one or more update parameters may disrupt the services of the one or more network nodes.
- the CMS [502] may check one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504].
- the CMS [502] may remove the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504].
- the CMS [502] may send the aborted response to the NBI [508] in response to the poll for the abort request by the NBI [508],
- FIG. 6 illustrates an exemplary representation of the process of updating parameters for nodes, in accordance with exemplary embodiments of the present disclosure. As shown in FIG. 6, the method begins at step [602].
- a set of poll request(s) is received.
- the set of poll requests comprises parameters for network nodes.
- Each of the poll request from the set of poll requests is associated with a work order identity.
- the validation unit [304] checks whether the set of poll requests are valid or not.
- if the set of poll requests are not valid, the method proceeds to step [608]. At step [608], the transceiver unit [302] sends a failure response to the NBI [508].
- if the request is a valid request, the method proceeds to step [610]. At step [610], the set of poll requests are inserted in the IO cache [504], which maintains a queue with a unique work order identity based on the request received.
- the NMS [320] supports two types of requests for updating configuration parameters: a) Multi type - updates requested parameters on the list of specified NF instances; and b) All type - updates requested parameters for all the NF instances in the circle.
- the scheduler unit [306] of the NMS [320] runs the scheduler job at the configured intervals, where the configured intervals may be determined by the user or the NMS [320].
- the scheduler unit [306] checks for queued work order identities present in the IO Cache [504],
- if the scheduler unit [306] does not find any queued work order identity, the method may proceed to step [616]. At step [616], the scheduler unit [306] may assume that no set of poll requests was initiated by the NBI [508] and may not initiate any action.
- if the queued work order identity is found by the scheduler unit [306], the method may proceed to step [618].
- at step [618], the method starts sending parameter update requests to the node [506] in batches.
- the scheduler unit [306] sends the response to the NBI [508].
- the NMS [320] maintains a count of all NF instances and the number of responses received from the node [506] for that work order identity.
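The per-work-order bookkeeping described above (the number of targeted NF instances versus the number of node responses received) can be sketched as follows; the class and method names are hypothetical.

```python
class WorkOrderTracker:
    """Tracks, per work order identity, how many NF instances were targeted
    and how many node responses have arrived. A work order is complete once
    every targeted instance has responded."""
    def __init__(self):
        self._expected = {}
        self._received = {}

    def start(self, work_order_id, nf_instance_count):
        self._expected[work_order_id] = nf_instance_count
        self._received[work_order_id] = 0

    def record_response(self, work_order_id):
        # Returns True once all expected responses have been received.
        self._received[work_order_id] += 1
        return self._received[work_order_id] >= self._expected[work_order_id]

tracker = WorkOrderTracker()
tracker.start("WO-11", 2)
print(tracker.record_response("WO-11"))  # -> False
print(tracker.record_response("WO-11"))  # -> True
```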
- the scheduler unit [306] sends the update response for each of the one or more network nodes to the database [312].
- the database [312] stores status associated with each of the one or more network nodes.
- the processing unit [310] may update at the NMS [320], the status associated with each of the one or more network nodes in the database [312], with the update response for each of the one or more network nodes.
- the NMS [320] also supports aborting a currently running task, which starts at step [626]. At step [626], the CMS [502] polls for an abort request for a work order identity from the NBI [508].
- the NMS [320] checks for the work order identity in the IO cache [504], which was received in the abort request. The method may then proceed to step [610] for inserting the set of poll requests in the IO cache [504]. If the work order identity is found in the IO Cache [504], the work order identity is removed from the IO queue and the 'aborted response' is sent to the NBI [508]. In case the work order identity is missing in the IO Cache [504], it is assumed that the work order identity has already been executed, and a failure response is sent to the NBI [508].
- the analysis unit [308] removes the work order identity associated with the set of requests from the queue maintained in the IO cache [504] after sending the set of requests associated with the work order identity for updating the one or more network nodes.
- at step [630], the method comes to an end.
- the present disclosure further discloses a non-transitory computer readable storage medium storing instructions for updating parameters for one or more network nodes, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit [302] of the system [300] to receive a set of requests comprising one or more update parameters for the one or more network nodes. Also, the instructions include executable code which, when executed, causes a validation unit [304] of the system [300] to validate each request from the set of requests.
- the instructions include executable code which, when executed, causes the validation unit [304] of the system [300] to add, the validated set of requests in a queue maintained in an input-output (IO) cache [504] and a scheduler unit [306] of the system [300] to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
- the present disclosure provides a technically advanced solution for updating parameters for nodes.
- the solution of the present invention provides a system and a method for updating parameters for nodes that consumes less time and effort of the user. Further, implementing the features of the present invention enables one to save network resources. Also, the solution for updating parameters for nodes, as disclosed, supports an abort request functionality, allowing cancellation of ongoing parameter update requests, which minimizes potential system disruptions.
Abstract
The present disclosure relates to a method and a system for updating parameters for one or more network nodes. The method includes receiving, by a transceiver unit [302] at an NMS [320], a set of requests comprising one or more update parameters for the one or more network nodes. The method further includes validating, by a validation unit [304] at the NMS [320], each request from the set of requests. Further, the method includes adding, by the validation unit [304] at the NMS [320], the validated set of requests in a queue maintained in an IO cache [504]. The method further includes running, by a scheduler unit [306], at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
Description
METHOD AND SYSTEM FOR UPDATING PARAMETERS FOR ONE OR MORE NETWORK NODES
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to a method and a system for updating parameters for one or more network nodes.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] For improving the performance of nodes, various parameters need to be changed for the nodes. In the process, one parameter may have to be changed on one node, while another parameter may have to be changed on some other node. For making the changes, the network management system (NMS) serves as an intermediary: it receives requests through the northbound interface (NBI), sends them to the appropriate nodes inside the network, and sends back the response after receiving it from the nodes. All the changes are made in separate work orders. Also, for making any change on a node, the user might need to access the user interface separately and make changes for the particular node. This process may consume a lot of time and effort for the user. Further, raising separate work orders for each change in the node(s) may also lead to high consumption of network resources.
[0005] Thus, there exists an imperative need in the art to provide a method and a system for updating parameters for nodes that consumes less time and effort of the user, and consumes a lesser amount of network resources, which the present disclosure aims to address.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method for updating parameters for one or more network nodes. The method includes receiving, by a transceiver unit at a network management system (NMS), from an interface, a set of requests comprising one or more update parameters for the one or more network nodes. The method further includes validating, by a validation unit at the NMS, each of the request from the set of requests. Furthermore, the method includes adding, by the validation unit at the NMS, the validated set of requests in a queue maintained in an input-output (IO) cache. The method further encompasses running, by a scheduler unit at the NMS, a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters. In an exemplary aspect of the present disclosure, the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified NF instances of the one or more network nodes. The all-update requests are configured to update each NF instance of the one or more network nodes.
[0008] In an exemplary aspect of the present disclosure, the set of requests is received from the interface in response to a polling by the transceiver unit at the NMS.
[0009] In an exemplary aspect of the present disclosure, each of the request from the set of requests is associated with a work order identity.
[0010] In an exemplary aspect of the present disclosure, the validating each of the request from the set of requests comprises validating a schema of a configuration data associated with each of the request from the set of requests.
[0011] In an exemplary aspect of the present disclosure, each of the request from the validated set of requests added in the queue, is grouped based on the work order identity.
[0012] In an exemplary aspect of the present disclosure, updating the one or more network nodes with the one or more update parameters, by the scheduler unit, comprises checking, by the scheduler unit, one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache. The method further comprises sending, by the scheduler unit, the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. Furthermore, the method comprises sending, by the scheduler unit, a response to the interface.
[0013] In an exemplary aspect of the present disclosure, the method further comprises removing, by an analysis unit, the first work order identity associated with the first subset of requests from the queue maintained in the IO cache after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
[0014] In an exemplary aspect of the present disclosure, the method further comprises sending, by the scheduler unit, an update response for each of the one or more network nodes, to a database, wherein the database stores status associated with each of the one or more network nodes. Furthermore, the method includes updating, by a processing unit, at the NMS, the status associated with each of the one or more network nodes in the database, with the update response for each of the one or more network nodes.
[0015] In an exemplary aspect of the present disclosure, the method further includes receiving, by the transceiver unit, an abort request for a second work order identity associated with a second subset of requests from the set of requests. Furthermore, the method includes checking, by the processing unit, one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache. The method further encompasses removing, by the processing unit, the second work order identity associated with the second subset of requests from the IO cache, in an event of the presence of the second work order identity associated with
the second subset of requests in the IO cache. The method further includes sending, by the transceiver unit, an aborted response to the interface.
[0016] In an exemplary aspect of the present disclosure, the method further includes sending, by the transceiver unit, at the NMS, to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache.
[0017] In an exemplary aspect of the present disclosure, the abort request for the second work order identity associated with the second subset of requests, is received in response to a polling, by the transceiver unit, at the NMS.
[0018] Another aspect of the present disclosure may relate to a network management system for updating parameters for one or more network nodes. The network management system includes a transceiver unit configured to receive a set of requests comprising one or more update parameters for the one or more network nodes. The network management system further includes a validation unit connected to at least the transceiver unit. The validation unit is configured to validate each request from the set of requests. The validation unit is further configured to add the validated set of requests to a queue maintained in an input-output (IO) cache. The network management system further includes a scheduler unit connected to at least the analysis unit. The scheduler unit is configured to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for updating parameters for one or more network nodes, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit of the system to receive a set of requests comprising one or more update parameters for the one or more network nodes. The instructions include executable code which, when executed, causes a validation unit of the system to validate each request from the set of requests and the validation unit to add the validated set of requests to a queue maintained in an input-output (IO) cache. The instructions include executable code which, when executed, causes a scheduler unit to run a scheduler job at a configured interval for updating the one or more network nodes with the one or more update parameters.
OBJECTS OF THE INVENTION
[0020] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0021] It is an object of the present disclosure to provide a system and a method for updating parameters for nodes that consumes less time and effort of the user.
[0022] It is another object of the present disclosure to provide a solution for updating parameters for nodes that consumes fewer network resources.
[0023] It is another object of the present disclosure to provide a solution for updating parameters for nodes that supports an abort request functionality, allowing cancellation of ongoing parameter update requests, which minimizes potential system disruptions.
[0024] It is another object of the invention to address the limitations of the existing NMS workflow by introducing a bidirectional data flow.
[0025] It is another object of the invention to enhance the NBI interface by allowing it to update node parameters through the NMS.
[0026] It is another object of the invention to allow for quick updates to parameter requests through the NBI interface.
[0027] It is another object of the invention to support an abort request functionality, thereby allowing cancellation of ongoing parameter update requests, which minimizes potential system disruptions.
DESCRIPTION OF THE DRAWINGS
[0028] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be
appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0029] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0030] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0031] FIG. 3 illustrates an exemplary block diagram of a system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0032] FIG. 4 illustrates a method flow diagram for updating parameters for one or more network nodes in accordance with exemplary implementations of the present disclosure.
[0033] FIG. 5 illustrates an exemplary implementation of the system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0034] FIG. 6 illustrates an exemplary representation of the process for updating parameters for one or more network nodes, in accordance with exemplary embodiments of the present disclosure.
[0035] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0036] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0037] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0038] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0039] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0040] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0041] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0042] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0043] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0044] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0045] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a
microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0046] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0047] As discussed in the background section, the currently known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for updating parameters for one or more network nodes.
[0048] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture [100], in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0049] The Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0050] The Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0051] The Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0052] The Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0053] The Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0054] The Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0055] The Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0056] The Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0057] The Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0058] The Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0059] The Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0060] The Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0061] The User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0062] The Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include but are not limited to Internet services, private data network related services.
[0063] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for updating parameters for one or more network nodes utilising the system. In another implementation, the computing device [200] itself implements the method for updating parameters for one or more network nodes using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0064] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0065] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0066] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0067] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible
LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0068] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the host [224], the local network [222] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0069] The present disclosure is implemented by a system [300] (as shown in FIG. 3). In an implementation, the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4).
[0070] Referring to FIG. 3, an exemplary block diagram of a system [300] for updating parameters for one or more network nodes is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], at least one validation unit [304], at least one scheduler unit [306], at least one analysis unit [308], at least one processing unit [310], and at least one database [312]. Also, all the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device, or may be independent of, but in communication with, the user device (which may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0071] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is
recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0072] The system [300] is configured for updating parameters for one or more network nodes, with the help of the interconnection between the components/units of the system [300].
[0073] The system [300] includes a network management system (NMS) [320]. The NMS [320] includes the transceiver unit [302]. The transceiver unit [302] is configured to receive a set of requests comprising one or more update parameters for the one or more network nodes. The one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host or port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold and the like. The one or more update parameters may further include a value associated with the one or more update parameters. The value may be of any type, such as a Boolean type, a string type, an integer type, a float type, and the like. For instance, the set of requests sent to the NMS [320] by the transceiver unit [302] may include a SAP identifier, a node identifier, a parameter name, the value of the one or more update parameters, and the like. The set of requests comprises at least multi-update requests and all-update requests. The multi-update requests may be configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes. An NF instance of the one or more network nodes refers to an instance of the one or more nodes and is configured to perform a specific operation in the one or more network nodes. The all-update requests may be configured to update each NF instance of the one or more network nodes in a circle. The circle refers to a pre-defined geographical area, a pre-defined location, a tracking area code (TAC), a cell identity, and the like. The set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320]. Each request of the set of requests is associated with a work order identity. The work order identity is a unique identifier which may be allotted to a request for a work order or task in the telecommunication network. The work order identity may help in managing and tracking the request for the work order or task.
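The association of each request with a work order identity, and the grouping of queued requests by that identity, can be sketched as follows. This is a minimal illustration only; the dictionary-based request format and the field names (`work_order_id`, `node_id`, `param`, `value`) are assumptions for the example, not structures defined by the disclosure.

```python
from collections import defaultdict

def group_by_work_order(requests):
    """Group update requests by their work order identity.

    Each request is assumed to be a dict carrying a 'work_order_id'
    key (a hypothetical field name) alongside the node identifier,
    parameter name, and value described in the disclosure.
    """
    grouped = defaultdict(list)
    for request in requests:
        grouped[request["work_order_id"]].append(request)
    return dict(grouped)

# Example: two requests share the work order identity "WO-1".
requests = [
    {"work_order_id": "WO-1", "node_id": "NF-A", "param": "log_level", "value": "DEBUG"},
    {"work_order_id": "WO-1", "node_id": "NF-B", "param": "log_level", "value": "DEBUG"},
    {"work_order_id": "WO-2", "node_id": "NF-C", "param": "timer", "value": 30},
]
grouped = group_by_work_order(requests)
```

Grouping this way lets each work order be dispatched, tracked, or aborted as one unit, which is the role the disclosure assigns to the work order identity.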
[0074] In an implementation of the present disclosure, the NMS [320] may support at least two types of requests for updating the one or more network nodes: the multi-update request and the all-update request. The multi-update request may update at least one network node parameter from the list of specified network nodes. The all-update request may update every parameter from the list of network nodes. The transceiver unit [302] is further configured to perform the polling at the NMS [320]. The polling refers to a communication where the transceiver unit [302] may repeatedly send requests to the NMS [320] at fixed intervals to check for updates.
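The fixed-interval polling described above can be sketched as a simple loop. The callables `fetch_pending` and `handle` are hypothetical stand-ins for the NMS-facing calls; neither name comes from the disclosure, and a real transceiver unit would use the NMS interface instead.

```python
import time

def poll_for_requests(fetch_pending, handle, interval_seconds=5.0, max_polls=None):
    """Repeatedly check for pending update requests at a fixed interval.

    `fetch_pending` returns the requests found on this poll (possibly
    none); `handle` processes one request. `max_polls` bounds the loop
    for demonstration; a real poller would run indefinitely.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        for request in fetch_pending():
            handle(request)
        polls += 1
        time.sleep(interval_seconds)

# Demonstration with a zero interval and two simulated polls.
handled = []
batches = iter([[{"id": 1}], [{"id": 2}]])
poll_for_requests(lambda: next(batches, []), handled.append,
                  interval_seconds=0.0, max_polls=2)
```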
[0075] The NMS [320] further includes the validation unit [304] connected to at least the transceiver unit [302]. The validation unit [304] is configured to validate each request from the set of requests. The validation unit [304] is further configured to add the validated set of requests to a queue maintained in an input-output (IO) cache [504]. The validation unit [304] is further configured to validate a format associated with each request from the set of requests. Each request from the validated set of requests added to the queue is grouped based on the work order identity.
[0076] In an implementation of the present disclosure, the validation unit [304] checks whether the set of requests is valid. If the set of requests is not valid, the transceiver unit [302] sends a failure response to the user. If the set of requests is valid, the set of requests is inserted into the IO cache [504]. The set of requests is maintained in a queue in the IO cache [504] with the work order identity. The IO cache [504] refers to a customized cache that stores data temporarily and enhances the performance of the NMS [320]. The IO cache [504] may reduce latency by storing the set of requests temporarily.
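The validate-then-enqueue behaviour of the validation unit can be sketched as follows, using an in-memory `deque` to stand in for the queue maintained in the IO cache. The format check (a required-fields test) and the field names are illustrative assumptions; the disclosure does not specify the validation rules.

```python
from collections import deque

# Assumed request format; the disclosure does not fix these field names.
REQUIRED_FIELDS = {"work_order_id", "node_id", "param", "value"}

def validate_and_enqueue(requests, io_cache_queue):
    """Validate the format of each request; enqueue valid requests into
    the IO-cache queue and collect failure responses for invalid ones.
    """
    failures = []
    for request in requests:
        if REQUIRED_FIELDS.issubset(request):  # checks the dict's keys
            io_cache_queue.append(request)
        else:
            failures.append({"request": request, "status": "failure"})
    return failures

queue = deque()
failures = validate_and_enqueue(
    [
        {"work_order_id": "WO-1", "node_id": "NF-A", "param": "qos", "value": 5},
        {"node_id": "NF-B"},  # missing fields, so it is rejected
    ],
    queue,
)
```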
[0077] The NMS [320] further includes the scheduler unit [306], connected to at least the analysis unit [308]. The scheduler unit [306] is configured to run a scheduler job at a configured interval for updating the one or more network nodes with the one or more update parameters. Updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306], comprises the scheduler unit [306] checking one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests in the IO cache [504]. Updating the one or more network nodes with the one or more update parameters further includes the scheduler unit [306] sending the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. Updating the one or more network nodes with the one or more update parameters further includes the scheduler unit [306] sending a response to the interface.
[0078] In an implementation of the present disclosure, the configured interval may be determined by a user or the NMS [320]. In an embodiment of the present disclosure, the configured interval may be changed in every session. The scheduler unit [306] of the NMS [320] may run the scheduler job at the configured intervals. For instance, the scheduler unit [306] may run the scheduler job after every 5 minutes, as defined by the user. The scheduler unit [306] may check for the first work order identities present in the IO cache [504]. If the scheduler unit [306] does not find any queued work order identity, the scheduler unit [306] may assume that no set of requests was initiated, and the scheduler unit [306] may not initiate any action. If a queued work order identity is found by the scheduler unit [306], the scheduler unit [306] may send the one or more parameter update requests to the node [506] in batches. A batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the NMS [320] can handle sending 100 updates at a time, it may group the updates into batches of 100.
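One pass of the scheduler job described above (check the cache, dispatch queued work orders in batches, then clear the dispatched identity) can be sketched as follows. The mapping-based `io_cache`, the `send_batch` callable, and the batch size of 100 are assumptions for illustration, not implementation details fixed by the disclosure.

```python
def run_scheduler_job(io_cache, send_batch, batch_size=100):
    """One scheduler pass over the IO cache.

    `io_cache` maps each queued work order identity to its list of
    requests; `send_batch(work_order_id, batch)` stands in for
    dispatching one batch to the network nodes. Returns the number of
    batches sent (0 when no work order identity is queued).
    """
    if not io_cache:
        return 0  # no queued work order identity: take no action
    batches_sent = 0
    for work_order_id, requests in list(io_cache.items()):
        # Split the work order's requests into fixed-size batches.
        for start in range(0, len(requests), batch_size):
            send_batch(work_order_id, requests[start:start + batch_size])
            batches_sent += 1
        # Remove the dispatched work order identity from the cache.
        del io_cache[work_order_id]
    return batches_sent

# Demonstration: 250 queued requests yield batches of 100, 100, and 50.
sent = []
cache = {"WO-1": [{"param": "qos"}] * 250}
count = run_scheduler_job(cache, lambda wo, batch: sent.append(len(batch)))
```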
[0079] The analysis unit [308] is further configured to remove the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
[0080] In an implementation of the present disclosure, the analysis unit [308] may receive an acknowledgment from the one or more network nodes to confirm that the first subset of requests is updated. The analysis unit [308] may access the queue maintained in the IO cache [504] and locate the work order identity that must be removed. Further, the analysis unit [308] may search the queue maintained in the IO cache [504] to find the work order identity. The analysis unit [308] may remove the work order identity from the queue.
[0081] The system [300] further includes the scheduler unit [306], configured to send an update response for each of the one or more network nodes to the database [312]. The database [312] stores a status associated with each of the one or more network nodes. The system [300] further includes the processing unit [310], at the NMS [320], configured to update the status associated with each of the one or more network nodes in the database [312] with the update response for each of the one or more network nodes. For instance, when there is an update in the QoS parameters of the one or more network nodes, the scheduler unit [306] may send the update to the database [312] to store the updated QoS parameters, and the database [312] may update the QoS parameters of the one or more network nodes accordingly.
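As a non-limiting sketch of paragraph [0081], the database may be modelled as a dictionary from node identity to status, updated with the per-node update responses; the function name and dictionary shape are illustrative assumptions.

```python
def update_statuses(database, update_responses):
    """Store the update response for each network node as that node's
    status in the database (modelled here as a plain dict)."""
    for node_id, status in update_responses.items():
        database[node_id] = status
    return database
```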
[0082] The system [300] is configured to receive, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests. The abort request may be sent to the transceiver unit [302] if the set of requests
comprising the one or more update parameters may disrupt the services of the one or more network nodes. For instance, the value in the one or more update parameters may be very high and the one or more network nodes may not be able to handle the value of the update parameter. The system is further configured to check, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. The processing unit [310] is further configured to remove the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]. The transceiver unit [302] is further configured to send an aborted response to the interface. The abort request for the second work order identity associated with the second subset of requests is received in response to a polling, by the transceiver unit [302], at the NMS [320].
[0083] The system [300] further includes the transceiver unit [302], at the NMS [320], to send to the interface a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
[0084] In an implementation of the present disclosure, the NMS [320] may check the IO cache [504] for the second work order identity, which may be received in the abort request. The processing unit [310] may further insert the set of requests in the IO cache [504]. If the second work order identity is found in the IO cache [504], the second work order identity is removed from the IO queue and the 'aborted response' is sent by the transceiver unit [302]. In case the work order identity is missing in the IO cache [504], it is assumed that the work order identity has already been executed, and a failure response is sent by the transceiver unit [302] to the interface.
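The abort flow of paragraphs [0082]-[0084] may be illustrated by the following non-limiting sketch, using the same dictionary model of the IO cache; `handle_abort` and the string responses are illustrative assumptions.

```python
def handle_abort(io_cache, work_order_id):
    """If the work order identity is still queued in the IO cache, remove
    it and report 'aborted'; otherwise assume the work order has already
    been executed and report 'failure'."""
    if work_order_id in io_cache:
        del io_cache[work_order_id]
        return "aborted"
    return "failure"
```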
[0085] Referring to FIG. 4, an exemplary method flow diagram [400] for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0086] At step [404], the method [400] comprises receiving, by a transceiver unit [302] at a network management system (NMS) [320], from an interface, a set of requests comprising one or more update parameters for the one or more network nodes. The one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host, port, log level,
context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like. The one or more update parameters may further include a value associated with the one or more update parameters. The value may be of a Boolean type, a string type, an integer type, a float type, and the like. For instance, the set of requests sent to the NMS [320] by the transceiver unit [302] may include an SAP identifier, a node identifier, a parameter name, and the value of the one or more update parameters. The set of requests comprises at least multi-update requests and all-update requests. The multi-update requests may be configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes. An NF instance of the one or more network nodes refers to an instance of the one or more nodes. The NF instances are configured to perform a specific operation in the one or more network nodes. The all-update requests may be configured to update each NF instance of the one or more network nodes in a circle. The circle refers to a pre-defined geographical area, a pre-defined location, a tracking area code (TAC), a cell identity, and the like. The set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320]. Each request from the set of requests is associated with a work order identity. The work order identity is a unique identifier which may be allotted to a request for a work order or task in the telecommunication network. The work order identity may help in managing and tracking the request for the work order or task.
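As a non-limiting illustration of paragraph [0086], a single request may be represented by a record carrying the fields mentioned above (SAP identifier, node identifier, parameter name, value, and request type); the class and field names below are illustrative assumptions, not a normative schema.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class UpdateRequest:
    """One parameter update request, as described in paragraph [0086]."""
    work_order_id: str                   # unique work order identity
    sap_id: str                          # SAP identifier
    node_id: str                         # target node / NF instance
    parameter: str                       # e.g. "qos", "log_level", "timer"
    value: Union[bool, str, int, float]  # Boolean, string, integer or float
    request_type: str = "multi"          # "multi" or "all"
```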
[0087] In an implementation of the present disclosure, the NMS [320] may support at least two types of requests for updating the one or more network nodes: the multi-update request and the all-update request. The multi-update request may update at least one network node parameter on the list of specified network nodes. The all-update request may update every parameter on the list of network nodes. The polling refers to a communication where the transceiver unit [302] may repeatedly send requests to the NMS [320] at fixed intervals to check for updates.
[0088] Next, at step [406], the method [400] comprises validating, by a validation unit [304] at the NMS [320], each request from the set of requests. The validation unit [304] is further configured to validate a format associated with each request from the set of requests. Each request from the validated set of requests added to the queue is grouped based on the work order identity.
[0089] Next, at step [408], the method [400] encompasses adding, by the validation unit [304] at the NMS [320], the validated set of requests in a queue maintained in an input-output (IO) cache [504].
[0090] In an implementation of the present disclosure, the set of requests is checked by the validation unit [304] to determine whether the requests are valid or not. If the set of requests is not valid, a failure response is sent by the transceiver unit [302]. If the set of requests is valid, the set of requests is inserted in the IO cache [504]. The set of requests is maintained in a queue in the IO cache [504] with the work order identity. The IO cache [504] refers to a customized cache to store data temporarily, enhancing the performance of the NMS [320]. The IO cache [504] may reduce latency by storing the set of requests temporarily.
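The validation and queueing of paragraphs [0088]-[0090] may be sketched as follows, with requests as dictionaries grouped in the IO cache by work order identity. The required-field check stands in for the format validation; the field set and response strings are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative stand-in for the format validation of paragraph [0088].
REQUIRED_FIELDS = {"work_order_id", "node_id", "parameter", "value"}

def validate_and_enqueue(requests, io_cache):
    """Validate each request and add valid requests to the queue in the
    IO cache, grouped by work order identity; invalid requests yield a
    failure response."""
    responses = []
    for req in requests:
        if REQUIRED_FIELDS <= req.keys():
            io_cache[req["work_order_id"]].append(req)
            responses.append("queued")
        else:
            responses.append("failure")
    return responses
```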
[0091] Next, at step [410], the method [400] encompasses running, by a scheduler unit [306] at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters. The updating of the one or more network nodes with the one or more update parameters, by the scheduler unit [306], includes checking, by the scheduler unit [306], one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504]. The updating further includes sending, by the scheduler unit [306], the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. Furthermore, the updating includes sending, by the scheduler unit [306], a response to the interface. The method further includes removing, by an analysis unit [308], the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
[0092] In an implementation of the present disclosure, the configured interval may be determined by a user or by the NMS [320]. In an embodiment of the present disclosure, the configured interval may be changed in every session. The scheduler job may be run at the configured intervals by the scheduler unit [306] of the NMS [320]. For instance, the scheduler unit [306] may run the scheduler job every 5 minutes, as defined by the user. The presence of first work order identities in the IO cache [504] may be checked by the scheduler unit [306]. If no queued work order identity is found, the scheduler unit [306] may assume that no set of requests was initiated and may not initiate any action. If a queued
work order identity is found by the scheduler unit [306], the scheduler unit [306] may send the one or more parameter update requests to the node [506] in batches. A batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the NMS [320] can handle sending 100 updates at a time, the NMS [320] may group the updates into batches of 100.
[0093] The method [400] further comprises sending, by the scheduler unit [306], an update response for each of the one or more network nodes to a database [312]. The database [312] stores a status associated with each of the one or more network nodes. Furthermore, the method [400] includes updating, by a processing unit [310] at the NMS [320], the status associated with each of the one or more network nodes in the database [312] with the update response for each of the one or more network nodes. For instance, when there is an update in the QoS parameters of the one or more network nodes, the scheduler unit [306] may send the update to the database [312] to store the updated QoS parameters, and the database [312] may update the QoS parameters of the one or more network nodes accordingly.
[0094] In an implementation of the present disclosure, an acknowledgment from the one or more network nodes confirming that the first subset of requests has been applied may be received by the analysis unit [308]. The analysis unit [308] may then access the queue maintained in the IO cache [504], search the queue to locate the work order identity that must be removed, and remove the work order identity from the queue.
[0095] The method [400] further comprises receiving, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests. The abort request may be sent to the transceiver unit [302] if the set of requests comprising the one or more update parameters may disrupt the services of the one or more network nodes. For instance, the value in the one or more update parameters may be very high and the one or more network nodes may not be able to handle the value of the update parameter. Further, the method [400] encompasses checking, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. Furthermore, the method encompasses removing, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]. The method [400] further includes sending, by the transceiver unit [302], the aborted response to the interface. The abort request for the second work order identity
associated with the second subset of requests is received in response to a polling, by the transceiver unit [302], at the NMS [320].
[0096] In an implementation of the present disclosure, the abort request may be received for a second work order identity in a running request. The NMS [320] may check the IO cache [504] for the second work order identity, which may be received in the abort request. The processing unit [310] may further insert the set of requests in the IO cache [504]. If the second work order identity is found in the IO cache [504], the second work order identity is removed from the IO queue and the 'aborted response' is sent by the transceiver unit [302].
[0097] The method [400] further comprises sending, by the transceiver unit [302], at the NMS [320], to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
[0098] In an implementation of the present disclosure, in case the work order identity is missing in the IO cache [504], it is assumed that the work order identity has already been executed, and a failure response is sent by the transceiver unit [302].
[0099] The system [300] and the method [400] will be explained in detail with reference to an exemplary implementation of the system as shown in FIG. 5. FIG. 5 illustrates an exemplary implementation of the system for updating parameters for one or more network nodes, in accordance with exemplary implementations of the present disclosure.
[0100] The system [500] comprises at least one Configuration Management System (CMS) [502], at least one IO cache [504], at least one northbound interface (NBI) [508], at least one node [506], and the database [312].
[0101] The system [500] is configured for updating parameters for the one or more nodes.
[0102] At step 1, the CMS [502] may send the set of requests for updating parameters for the one or more network nodes to the node [506]. The one or more update parameters may include an internet protocol address, Quality of Service (QoS), timer, host, port, log level, context, auto synchronize, throttle, refresh, default paging DRX, slice parameter, download data split primary path, threshold, and the like.
[0103] At step 2, the node [506] may further send the update response for each of the one or more network nodes to the database [312]. The database [312] may store the status associated with each of the one or more network nodes.
[0104] At step 3, the database [312] may further send the update response to a Northbound Interface (NBI) [508]. The NBI [508] is an output-oriented interface which may be configured to send outputs to the user.
[0105] At step 4, the NBI [508] may poll for the set of requests for updating parameters to the IO cache [504]. Polling refers to a communication where the CMS [502] may repeatedly send requests to the IO cache [504]. It is to be noted that the NMS [320] supports two types of requests for updating parameters: a) Multi type - updates requested parameters on the list of specified network function (NF) instances; b) All type - updates requested parameters for all the NF instances in the circle. The circle refers to a pre-defined geographical area, a pre-defined location, a pre-defined tracking area code (TAC), a pre-defined cell identity, and the like.
[0106] At step 5, the IO cache [504] may validate each request from the set of requests. Each request from the validated set of requests added to the queue is grouped based on the work order identity.
[0107] At step 6, the CMS [502] may check for one of the presence and the absence of the first work order identity associated with the first subset of requests from the set of requests. The IO cache [504] may further insert the validated set of requests in the queue maintained in the IO cache [504]. The updating of parameters in the one or more network nodes further includes the CMS [502] sending the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests. The CMS [502] may further send a response to the NBI [508] regarding the updating of the one or more nodes.
[0108] If the CMS [502] does not find any queued work order identity, the CMS [502] may assume that no set of requests was initiated and may not initiate any action. If a queued work order identity is found by the CMS [502], the CMS [502] may send the one or more parameter update requests to the node [506] in batches. A batch refers to sending the first subset of requests associated with the first work order identity together. For instance, if the node [506] can handle 100 updates at a time, the CMS [502] may group the updates into batches of 100.
[0109] At step 7, the NBI [508] may send the abort request for the second work order identity associated with the second subset of requests from the set of requests to the CMS [502]. The abort request may be sent if the set of requests comprising the one or more update parameters may disrupt the services of the one or more network nodes. The CMS [502] may check one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]. The CMS [502] may remove the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504].
[0110] At step 8, the CMS [502] may send the aborted response to the NBI [508] in response to the poll for the abort request by the NBI [508].
[0111] The exemplary implementation of the system [500] will be explained in detail with reference to a method flow for the exemplary implementation as shown in FIG. 6. FIG. 6 illustrates an exemplary representation of the process of updating parameters for nodes, in accordance with exemplary embodiments of the present disclosure. As shown in FIG. 6, the method begins at step [602].
[0112] At step [604], a set of poll request(s) is received. The set of poll requests comprises parameters for network nodes. Each poll request from the set of poll requests is associated with a work order identity.
[0113] At step [606], the validation unit [304] checks whether the set of poll requests is valid or not.
[0114] If the set of poll requests is not valid, the method proceeds to step [608]. At step [608], the transceiver unit [302] sends a failure response to the NBI [508].
[0115] If the request is valid, the method proceeds to step [610]. At step [610], the set of poll requests is inserted in the IO cache [504]. A queue is maintained in the IO cache [504] with a unique work order identity based on the request received.
[0116] The NMS [320] supports two types of requests for updating configuration parameters:
1. Multi request - updates requested parameters on the list of specified NF instances.
2. All type request - updates requested parameters for all the NF instances.
[0117] At step [612], the scheduler unit [306] of the NMS [320] runs the scheduler job at the configured intervals, where the configured intervals may be determined by the user or the NMS [320].
[0118] At step [614], the scheduler unit [306] checks for queued work order identities present in the IO cache [504].
[0119] If the scheduler unit [306] does not find any queued work order identity, the method may proceed to step [616]. At step [616], the scheduler unit [306] may assume that no set of poll requests was initiated by the NBI [508], and the scheduler unit [306] may not initiate any action.
[0120] If the queued work order identity is found by the scheduler unit [306], the method may proceed to step [618].
[0121] At step [618], the method starts sending parameter update requests to the node [506] in batches.
[0122] At step [620], once the response is received from the node [506], the scheduler unit [306] sends the response to the NBI [508]. In case the request type is an ALL type, the NMS [320] maintains a count of all NF instances and of the number of responses received from the node for that work order identity.
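The counting described for ALL-type requests may be sketched, in a non-limiting manner, as follows; the class name, method names, and completion rule below are illustrative assumptions derived from the description above.

```python
class AllTypeTracker:
    """Tracks, per work order identity, the number of NF instances to be
    updated and the number of node responses received, as described for
    ALL-type requests."""

    def __init__(self):
        self.expected = {}  # work order identity -> NF instance count
        self.received = {}  # work order identity -> responses received

    def start(self, work_order_id, nf_instance_count):
        self.expected[work_order_id] = nf_instance_count
        self.received[work_order_id] = 0

    def on_response(self, work_order_id):
        """Record one node response; the work order is complete once every
        NF instance has responded."""
        self.received[work_order_id] += 1
        return self.received[work_order_id] >= self.expected[work_order_id]
```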
[0123] At step [622], the scheduler unit [306] sends the update response for each of the one or more network nodes to the database [312]. The database [312] stores the status associated with each of the one or more network nodes. The processing unit [310] may update, at the NMS [320], the status associated with each of the one or more network nodes in the database [312] with the update response for each of the one or more network nodes.
[0124] The NMS [320] also supports aborting a currently running task, which starts at step [626]. At step [626], the CMS [502] polls for an abort request for a work order identity from the NBI [508].
[0125] Further, at step [628], the NMS [320] checks for the work order identity in the IO cache [504], which was received in the abort request.
[0126] The method may then proceed to step [610] for inserting the set of poll requests in the IO cache [504]. If the work order identity is found in the IO cache [504], the work order identity is removed from the IO queue and the 'aborted response' is sent to the NBI [508]. In case the work order identity is missing in the IO cache [504], it is assumed that the work order identity has already been executed, and a failure response is sent to the NBI [508].
[0127] At step [624], the analysis unit [308] removes the work order identity associated with the set of requests from the queue maintained in the IO cache [504] after sending the set of requests associated with the work order identity for updating the one or more network nodes.
[0128] At step [630], the method comes to an end.
[0129] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for updating parameters for one or more network nodes. The instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit [302] of the system [300] to receive a set of requests comprising one or more update parameters for the one or more network nodes. Also, the instructions include executable code which, when executed, causes a validation unit [304] of the system [300] to validate each request from the set of requests. The instructions further include executable code which, when executed, causes the validation unit [304] of the system [300] to add the validated set of requests in a queue maintained in an input-output (IO) cache [504], and a scheduler unit [306] of the system [300] to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
[0130] As is evident from the above, the present disclosure provides a technically advanced solution for updating parameters for nodes. The solution of the present disclosure provides a system and a method for updating parameters for nodes that consume less time and effort of the user. Further, implementing the features of the present disclosure enables saving of network resources. Also, the disclosed solution for updating parameters for nodes supports an abort request functionality, allowing cancellation of ongoing parameter update requests, which minimizes potential system disruptions.
[0131] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and
other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
Claims
1. A method for updating parameters for one or more network nodes, the method comprising: receiving, by a transceiver unit [302] at a network management system (NMS) [320], from an interface, a set of requests comprising one or more update parameters for the one or more network nodes; validating, by a validation unit [304] at the NMS [320], each request from the set of requests; adding, by the validation unit [304] at the NMS [320], the validated set of requests in a queue maintained in an input-output (IO) cache [504]; and running, by a scheduler unit [306], at the NMS [320], a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
2. The method as claimed in claim 1, wherein, the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified Network Function (NF) instances of the one or more network nodes, and wherein the all-update requests are configured to update each of the NF instances of the one or more network nodes.
3. The method as claimed in claim 1, wherein, the set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320].
4. The method as claimed in claim 1, wherein, each request from the set of requests is associated with a work order identity.
5. The method as claimed in claim 1, wherein, the validating each request from the set of requests comprises validating a schema of a configuration data associated with each request from the set of requests.
6. The method as claimed in claim 4, wherein, each request from the validated set of requests added to the queue is grouped based on the work order identity.
7. The method as claimed in claim 1, wherein, updating the one or more network nodes with the one or more update parameters, by the scheduler unit [306], further comprises:
checking, by the scheduler unit [306], one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504]; sending, by the scheduler unit [306], the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests; and sending, by the scheduler unit [306], a response to the interface.
8. The method as claimed in claim 7, the method further comprising: removing, by an analysis unit [308], the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
9. The method as claimed in claim 1, further comprising: sending, by the scheduler unit [306], an update response for each of the one or more network nodes, to a database [312], wherein the database [312] stores status associated with each of the one or more network nodes; and updating, by a processing unit [310], at the NMS [320], the status associated with each of the one or more network nodes in the database, with the update response for each of the one or more network nodes.
10. The method as claimed in claim 9, the method further comprising: receiving, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests; checking, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504]; removing, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]; and sending, by the transceiver unit [302], an aborted response to the interface.
11. The method as claimed in claim 10, further comprising:
sending, by the transceiver unit [302], at the NMS [320], to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
12. The method as claimed in claim 10, wherein the abort request for the second work order identity associated with the second subset of requests, is received in response to a polling, by the transceiver unit [302], at the NMS [320].
13. A system for updating parameters for one or more network nodes, the system comprising a network management system (NMS) [320], the NMS [320] further comprising: a transceiver unit [302], configured to receive from an interface, a set of requests comprising one or more update parameters for the one or more network nodes; a validation unit [304] connected to at least the transceiver unit [302], the validation unit [304] configured to: o validate, each request from the set of requests, and o add, the validated set of requests in a queue maintained in an input-output (IO) cache [504]; and a scheduler unit [306], connected to at least an analysis unit [308], the scheduler unit [306] configured to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
14. The system as claimed in claim 13, wherein, the set of requests comprises at least multi-update requests and all-update requests, wherein the multi-update requests are configured to update each of the one or more update parameters on a list of specified NF instances of the one or more network nodes, and wherein the all-update requests are configured to update each of the NF instances of the one or more network nodes.
15. The system as claimed in claim 13, wherein, the set of requests is received from the interface in response to a polling by the transceiver unit [302] at the NMS [320].
16. The system as claimed in claim 13, wherein, each request of the set of requests is associated with a work order identity.
17. The system as claimed in claim 13, wherein, the validation unit [304] is configured to validate a format associated with each request from the set of requests.
18. The system as claimed in claim 16, wherein, each request from the validated set of requests added in the queue, is grouped based on the work order identity.
19. The system as claimed in claim 13, wherein, the scheduler unit [306], is further configured to: check one of a presence and an absence of a first work order identity associated with a first subset of requests from the set of requests, in the IO cache [504]; send the first subset of requests associated with the first work order identity, in a batch, to update the one or more nodes, in an event of the presence of the first work order identity associated with the first subset of requests; and send a response to the interface.
20. The system as claimed in claim 19, wherein the analysis unit [308] is further configured to: remove, the first work order identity associated with the first subset of requests from the queue maintained in the IO cache [504] after sending the first subset of requests associated with the first work order identity for updating the one or more network nodes.
21. The system as claimed in claim 19, further comprising:
the scheduler unit [306], configured to send, an update response for each of the one or more network nodes, to a database [312], wherein the database [312] stores status associated with each of the one or more network nodes; and
a processing unit [310], configured to update, at the NMS [320], the status associated with each of the one or more network nodes in the database [312], with the update response for each of the one or more network nodes.
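The status bookkeeping of claim 21 amounts to writing each node's update response into a per-node status record. A minimal, purely illustrative sketch (a dict stands in for the database [312], and all field names are hypothetical):

```python
def update_node_statuses(database, update_responses):
    """Processing unit (cf. claim 21): write the status derived from each
    node's update response into the database, keyed by node identity."""
    for node_id, response in update_responses.items():
        database[node_id] = {
            "status": "updated" if response.get("ok") else "failed",
            "detail": response,  # keep the raw update response for audit
        }
    return database
```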
22. The system as claimed in claim 21, wherein the system is further configured to:
receive, by the transceiver unit [302], an abort request for a second work order identity associated with a second subset of requests from the set of requests;
check, by the processing unit [310], one of a presence and an absence of the second work order identity associated with the second subset of requests in the IO cache [504];
remove, by the processing unit [310], the second work order identity associated with the second subset of requests from the IO cache [504], in an event of the presence of the second work order identity associated with the second subset of requests in the IO cache [504]; and
send, by the transceiver unit [302], an aborted response to the interface.
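The abort path of claims 22–23 reduces to a presence check followed by either removal-plus-aborted-response or a failure response. A hedged sketch under the same hypothetical dict-as-cache assumption as above:

```python
def handle_abort(cache_groups, work_order_id):
    """Abort flow (cf. claims 22-23): if the work order identity is
    present in the IO cache, remove its pending requests and answer
    'aborted'; if absent, answer with a failure response."""
    if work_order_id in cache_groups:
        del cache_groups[work_order_id]
        return {"work_order_id": work_order_id, "response": "aborted"}
    return {"work_order_id": work_order_id, "response": "failure"}
```

Per claim 24, the abort request itself would arrive via the same polling mechanism as the original requests; the function above only models what happens once it has arrived.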
23. The system as claimed in claim 22, further comprising: the transceiver unit [302], configured to send, at the NMS [320], to the interface, a failure response, in an event of the absence of the second work order identity associated with the second subset of requests in the IO cache [504].
24. The system as claimed in claim 22, wherein the abort request for the second work order identity associated with the second subset of requests, is received in response to a polling, by the transceiver unit [302], at the NMS [320].
25. A non-transitory computer-readable storage medium storing instructions for updating parameters for one or more network nodes, the storage medium comprising executable code which, when executed by one or more units of a system, causes:
a transceiver unit [302] to receive a set of requests comprising one or more update parameters for the one or more network nodes;
a validation unit [304] to:
- validate, each request from the set of requests, and
- add, the validated set of requests in a queue maintained in an input-output (IO) cache [504]; and
a scheduler unit [306] to run a scheduler job at a configured interval, for updating the one or more network nodes with the one or more update parameters.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202321047028 | 2023-07-12 | ||
| IN202321047028 | 2023-07-12 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025013048A1 (en) | 2025-01-16 |
Family
ID=94215170
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2024/051108 Pending WO2025013048A1 (en) | 2023-07-12 | 2024-07-08 | Method and system for updating parameters for one or more network nodes |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025013048A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090240791A1 (en) * | 2008-03-19 | 2009-09-24 | Fujitsu Limited | Update management method and update management unit |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2025013048A1 (en) | Method and system for updating parameters for one or more network nodes | |
| WO2025013047A1 (en) | Method and system for performing bulk configuration changes of network nodes | |
| WO2025069062A1 (en) | Method and system for service continuity in a communication network | |
| WO2025052474A1 (en) | Method and system for managing subscription of network functions | |
| WO2025008893A1 (en) | Method and system for suppressing repetitive log entries | |
| WO2025012956A2 (en) | Method and system for providing multimedia priority service in a communication network | |
| WO2025008863A1 (en) | Method and system for granting a data traffic access associated with a target network | |
| WO2025017730A1 (en) | Method and system for handling non-ip data delivery (nidd) configuration data | |
| WO2025012979A1 (en) | Method and system for a configuration-based management of a procedure request | |
| WO2025052402A1 (en) | Method and system for managing stale subscriptions | |
| WO2025052436A1 (en) | Method and system for discovery of one or more peer network functions | |
| WO2025052421A1 (en) | Method and system for handling location requests in a wireless communication network | |
| WO2025069097A1 (en) | Method and system for managing one or more session policies in a network | |
| WO2025062443A1 (en) | Method and system for handling a race condition in a wireless communication network | |
| WO2025057186A1 (en) | Method and system for managing registration of a network function | |
| WO2025008936A1 (en) | Method and system for establishing pdu session with upf | |
| WO2025008978A1 (en) | Method and system for listing devices for optimizing allocation of ipv4 and ipv6 addresses | |
| WO2025062454A1 (en) | Method and system for increasing throughput of data transactions by implementing a storage system | |
| WO2025008879A1 (en) | Method and system for optimization of device triggering procedure for iot devices | |
| WO2025052433A1 (en) | Method and system for automatically fetching a slice authorisation data at amf of a network | |
| WO2025012980A1 (en) | Method and system for performing a barring procedure in a pre-defined presence reporting area (pra) | |
| WO2025008867A1 (en) | Method and system for managing network slice selection in a telecommunications network | |
| WO2025052478A1 (en) | Method and system for generating a pcf response in a telecommunications network | |
| WO2025012936A1 (en) | Method and system for reporting slice-specific load information | |
| WO2025012922A1 (en) | Method and system for overwriting network requests based on a priority of timestamps |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | ||
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24837530 Country of ref document: EP Kind code of ref document: A1 |