WO2025017691A1 - Method and system for executing requests in network - Google Patents
Method and system for executing requests in network
- Publication number
- WO2025017691A1 (PCT/IN2024/051260)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network nodes
- requests
- workflow
- network
- processors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0805—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
- H04L43/0817—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
Definitions
- the present invention relates to the field of data communication in networks, and more particularly, the invention pertains to a method and a system for handling an interface (e.g., southbound interface or the like) in the networks.
- One or more embodiments of the present disclosure provide a system and a method for executing requests in a network.
- a method of executing requests in a network includes receiving, by one or more processors, one or more requests related to an order for execution by one or more network nodes. Further, the method includes retrieving, by the one or more processors, a plurality of details of a workflow from a cache data store. Further, the method includes determining, by the one or more processors, availability of the one or more network nodes using a monitoring unit. Further, the method includes routing, by the one or more processors, the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the method includes pausing, by the one or more processors, the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
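The claimed steps can be pictured with a short sketch. All names below (`RequestRouter`, `handle`, the dictionary fields) are hypothetical illustrations, not terminology from the specification: a request is received, the workflow is retrieved from a cache data store, a monitoring unit is consulted for node availability, and the request is either routed per the workflow or paused.

```python
from collections import deque

class RequestRouter:
    """Illustrative sketch of the claimed method (names are hypothetical):
    receive a request, retrieve the workflow from a cache data store,
    check node availability via a monitoring unit, then route or pause."""

    def __init__(self, workflow_cache, is_available):
        self.workflow_cache = workflow_cache  # cache data store of workflows
        self.is_available = is_available      # monitoring unit (availability check)
        self.request_cache = deque()          # holds paused requests

    def handle(self, request, node_id):
        # retrieve the plurality of details of the workflow from the cache
        workflow = self.workflow_cache[request["order_id"]]
        if self.is_available(node_id):
            # node available: route the request according to the workflow
            return {"routed_to": node_id, "sequence": workflow["sequence"]}
        # node unavailable (e.g., maintenance mode): pause the request
        self.request_cache.append((request, node_id))
        return {"paused": True}

# Minimal usage with a stubbed monitoring unit reporting node "SB-2" as down.
cache = {"ORD-1": {"sequence": ["validate", "provision"]}}
router = RequestRouter(cache, is_available=lambda node: node != "SB-2")
print(router.handle({"order_id": "ORD-1"}, "SB-1"))  # routed per workflow
print(router.handle({"order_id": "ORD-1"}, "SB-2"))  # paused, held in cache
```

The deque stands in for the request cache of the claims: paused requests accumulate there until the monitoring unit reports the node available again.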
- upon pausing the one or more requests from reaching the one or more network nodes, the method includes the step of checking, by the one or more processors, if the one or more network nodes are determined to be available.
- the one or more requests that are paused are stored in a request cache until the one or more network nodes are available, wherein the one or more requests are routed towards the one or more network nodes for execution in accordance with the workflow when the one or more network nodes are available.
- the one or more network nodes are determined to be unavailable when the one or more network nodes go into a maintenance mode.
- the method includes facilitating, by the one or more processors, creating the workflow via an interface (e.g., user interface, command line interface (CLI) or the like), wherein the plurality of details comprises a sequence of execution of the one or more requests, and data required for the execution. Further, the method includes storing, by the one or more processors, the workflow in the cache data store.
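One possible shape of the "plurality of details" kept in the cache data store is an execution sequence plus the data each step needs. The field names below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical structure of a workflow entry in the cache data store:
# the sequence of execution of the requests, and the data required
# for the execution, as the summary above describes.
workflow = {
    "workflow_id": "wf-activation",
    "sequence": ["validate_order", "reserve_resources", "activate_service"],
    "data": {"service_type": "broadband", "bandwidth_mbps": 100},
}

cache_data_store = {}                                 # stand-in cache data store
cache_data_store[workflow["workflow_id"]] = workflow  # store the workflow

retrieved = cache_data_store["wf-activation"]         # later retrieval by the system
print(retrieved["sequence"])
```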
- a system for executing requests in a network includes an input interface (e.g., command line interface or the like) configured to receive one or more requests related to an order for execution by one or more network nodes. Further, the system includes a dynamic activator configured to retrieve a plurality of details of a workflow from a cache data store. Further, the system includes a checking module configured to determine availability of the one or more network nodes using a monitoring unit. Further, the system includes a dynamic routing manager configured to route the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the dynamic routing manager is configured to pause the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
- the system further includes the dynamic activator configured to execute the workflow based on the plurality of details.
- the system further includes a request cache configured to store the one or more requests that are paused due to unavailability of the one or more network nodes, and wherein the one or more requests are stored until the one or more network nodes are available.
- the dynamic routing manager is further configured to route the one or more requests towards the one or more network nodes for execution in accordance with the workflow when the one or more network nodes are available.
- a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: receive one or more requests related to an order for execution by one or more network nodes; retrieve a plurality of details of a workflow from a cache data store; determine availability of the one or more network nodes using a monitoring unit; route the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available; and pause the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
- FIG. 1 is an exemplary block diagram of an environment for executing requests in a network, according to various embodiments of the present disclosure.
- FIG. 2 is an example workflow diagram illustrating handling of a southbound interface via the system illustrated in FIG. 1, according to an embodiment of the present invention.
- FIG. 3 is an example workflow diagram illustrating a method of handling the southbound interface when the system is paused for a network node, according to an embodiment of the present invention.
- FIG. 4 is an example workflow diagram illustrating a method of resuming the system that is paused upon receiving a southbound request, according to an embodiment of the present invention.
- FIG. 5 is a block diagram of the system of FIG. 1, according to various embodiments of the present disclosure.
- FIG. 6 illustrates a block diagram of a processor included in the system for handling southbound interface, according to one or more embodiments of the present invention.
- FIG. 7 is an example schematic representation of the system of FIG. 1 in which various entities' operations are explained, according to various embodiments of the present disclosure.
- FIG. 8 shows a sequence flow diagram illustrating a method for executing requests in the network, according to various embodiments of the present disclosure.
- Various embodiments of the invention provide a method of executing requests in a network.
- the method includes receiving, by one or more processors, one or more requests related to an order for execution by one or more network nodes. Further, the method includes retrieving, by the one or more processors, a plurality of details of a workflow from a cache data store. Further, the method includes determining, by the one or more processors, availability of the one or more network nodes using a monitoring unit. Further, the method includes routing, by the one or more processors, the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the method includes pausing, by the one or more processors, the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
- the system, also known as a fulfilment management system (FMS), identifies maintenance downtime for a southbound interface in the network node and pauses activities to that network node, while continuing to take requests from the northbound interface and keeping them in a queue for that southbound interface.
- An Artificial Intelligence / Machine Learning (AI/ML) module (or monitoring unit) is configured to detect downtime of the network node and automatically pause the requests to such a node.
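The downtime decision can be sketched with a deliberately simple rule. The disclosure contemplates AI/ML models (regression, ARIMA, anomaly detection); the consecutive-failure threshold below is only an illustrative stand-in for the monitoring unit's decision interface, and all names are hypothetical:

```python
def is_node_down(health_checks, failure_threshold=3):
    """Minimal stand-in for the monitoring unit's downtime detection:
    declare the node down after N consecutive failed health checks.
    This threshold rule is an assumption, not the patented AI/ML model."""
    streak = 0
    for ok in health_checks:
        streak = 0 if ok else streak + 1   # count consecutive failures
        if streak >= failure_threshold:
            return True
    return False

print(is_node_down([True, True, False, False, False]))  # three straight failures
print(is_node_down([True, False, True, False, True]))   # no sustained failure
```

A real deployment would replace `is_node_down` with the trained model's prediction while keeping the same boolean contract toward the routing logic.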
- FIG. 1 illustrates an exemplary block diagram of an environment (100) for executing requests in a network (106), according to various embodiments of the present disclosure.
- the environment (100) comprises a plurality of user equipments (UEs) (102-1, 102-2, ..., 102-n).
- the at least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via the network (106).
- the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108).
- the wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities.
- the plurality of UEs (102) may comprise a memory such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory.
- the memory might be configured or designed to store data.
- the data may pertain to attributes and access rights specifically defined for the plurality of UEs (102).
- the UE (102) may be accessed by the user, to receive the requests related to an order determined by the system (108).
- the network (106) may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (RFID), infrared, laser, Near Field Magnetics, etc.
- the system (108) is communicatively coupled to a server (104) via the network (106).
- the server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like.
- the server (104) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
- the network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
- the network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
- the network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
- the network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, VoIP, or some combination thereof.
- the one or more network nodes (106a-106n) can be, for example, but not limited to a base station that is located in the fixed or stationary part of the network (106).
- the base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, or a metro cell.
- the base station enables transmission of radio signals to the UE or mobile transceiver.
- a radio signal may comply with radio signals as standardized, for example, by 3GPP or, generally, be in line with one or more of the above listed systems.
- a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
- the system (108) may include one or more processors (502) coupled with a memory (504), wherein the memory (504) may store instructions which when executed by the one or more processors (502) may cause the system (108) to execute requests in the network (106) or the server (104).
- An exemplary representation of the system (108) for such purpose, in accordance with embodiments of the present disclosure, is shown in FIG. 2.
- the system (108) may include one or more processor(s) (502).
- the one or more processor(s) (502) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
- the one or more processor(s) (502) may be configured to fetch and execute computer-readable instructions stored in the memory (504) of the system (108).
- the memory (504) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service.
- the environment (100) further includes the system (108) communicably coupled to the remote server (104) and each UE of the plurality of UEs (102) via the network (106).
- the remote server (104) is configured to execute the requests in the network (106).
- the system (108) is adapted to be embedded within the remote server (104) or to operate as an individual entity.
- the system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations.
- the system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for the workflow, which is reflected in real time independent of the complexity of the network.
- the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104).
- the enterprise provisioning server provides flexibility for enterprise, e-commerce, and finance users to update/create/delete information related to the requests for the workflow in real time as per their business needs.
- a user with administrator rights can access and retrieve the requests for the workflow and perform real-time analysis in the system (108).
- the system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
- the system (108) may operate at various entities or a single entity (for example including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defence facility side, or any other facility) that provides service.
- the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
- the AI/ML unit (202) can be, for example, but is not limited to, a linear regression module, a decision trees module, a random forests module, an AutoRegressive Integrated Moving Average (ARIMA) model, anomaly detection models, or the like.
- when downtime is detected for a southbound interface (e.g., the second network node (106b)), the system (108) will be automatically paused for that interface.
- the request (step 1) from the NB Interface (206) will not be forwarded from the system (108) to the second network node (106b); the request to this node will be paused.
- the operations to a first network node (106a) shall continue at (step 2).
- the paused requests shall be stored in the queue of the NB Interface (206). The requests emitted for that interface (e.g., second network node (106b)) will be maintained in the queue.
- the system (108) will keep taking requests from the NB Interface (206) and send them to the first network node (106a) and the second network node (106b) (step 3). Once the maintenance activity for the second network node (106b) is complete, the paused requests will then be sent to the second network node (106b) (step 3). The response is shared with the NB Interface (206) at step 4.
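The drain step described above can be sketched as follows. The function and variable names are hypothetical; the point is only that, once maintenance completes, the paused requests for that node are delivered in FIFO order while requests queued for other nodes stay put:

```python
from collections import deque

def flush_paused_requests(queue, node, send):
    """Hypothetical drain step: once maintenance on `node` completes,
    deliver its paused requests in arrival order; requests queued for
    other nodes remain in the queue untouched."""
    remaining, delivered = deque(), []
    while queue:
        request, target = queue.popleft()
        if target == node:
            send(request)                      # forward to the now-available node
            delivered.append(request)
        else:
            remaining.append((request, target))  # keep for other nodes
    queue.extend(remaining)
    return delivered

# Requests paused for node "106b" are flushed; the "106a" request stays queued.
queue = deque([("req-1", "106b"), ("req-2", "106a"), ("req-3", "106b")])
sent = []
flush_paused_requests(queue, "106b", sent.append)
print(sent)         # requests for 106b, in FIFO order
print(list(queue))  # the request for 106a remains
```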
- FIG. 3 is an example workflow diagram illustrating a method of handling the southbound interface when the system (108) is paused for the network node, according to an embodiment of the present invention. As shown in FIG. 3, at (step 1) the request for any order is received by the system (108).
- the workflow details are received by the system (108) from the cache data store (304).
- the workflow execution starts.
- availability of the first network node (106a) is checked. If the first network node (106a) is available then at (step 5) the availability status is given to the monitoring unit (e.g., AI/ML unit) (202) using metrics.
- the metrics can be, for example, but not limited to, a latency, a throughput, packet loss, signal strength, load, bandwidth, health, or the like.
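Turning those metrics into an availability verdict might look like the sketch below. The thresholds are illustrative assumptions only; the patent names the metric kinds but specifies no values:

```python
def availability_status(metrics):
    """Sketch of deriving an availability verdict from the listed metric
    kinds (latency, packet loss, load). Thresholds are assumed for
    illustration, not taken from the disclosure."""
    healthy = (
        metrics["latency_ms"] < 200        # assumed latency ceiling
        and metrics["packet_loss_pct"] < 1.0  # assumed loss ceiling
        and metrics["load_pct"] < 90          # assumed load ceiling
    )
    return "available" if healthy else "unavailable"

print(availability_status({"latency_ms": 40, "packet_loss_pct": 0.1, "load_pct": 55}))
print(availability_status({"latency_ms": 900, "packet_loss_pct": 4.2, "load_pct": 97}))
```

In the described flow, this verdict is what the monitoring unit (202) reports back at step 5 so that routing can proceed or pause.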
- the request is sent to the first network node (106a) from the system (108).
- the monitoring unit (e.g., AI/ML unit) (202) then checks for the availability of the second network node (106b) at (step 7). If the second network node (106b) is not available, it responds at (step 8) that it is not available.
- at (step 9), all the requests to the second network node (106b) are paused.
- at (step 10), the requests to the second network node (106b) are then stored in a request cache unit (302).
- the resume operations are executed as shown in FIG. 4.
- FIG. 4 is an example workflow diagram illustrating a method of resuming the system that is paused upon receiving the southbound request, according to an embodiment of the present invention.
- at (step 1), the request for any order is received by the system (108).
- at (step 2), the workflow details are fetched from the cache data store (304).
- at (step 3), the workflow starts execution.
- at (step 4), availability of the second network node (106b) is checked.
- at (step 5), if the second network node (106b) is available, it responds with an ‘OK’.
- at (step 6), the monitoring unit (e.g., AI/ML unit) (202) then sends a signal to the system (108) to resume operations for the second network node (106b).
- the system (108) accesses the request queue (302) to access the queue of requests for the second network node (106b).
- the requests for the second network node (106b) are then executed.
- FIG. 5 illustrates a block diagram of the system (108) provided for executing requests in the network (106), according to one or more embodiments of the present invention.
- the system (108) includes the one or more processors (502), the memory (504), an input/output interface unit (506), a display (508), an input device (510), a graph database (512), and a centralized database (514).
- the one or more processors (502), hereinafter referred to as the processor (502) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
- the system (108) includes one processor.
- the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
- the information related to the request may be provided or stored in the memory (504) of the system (108).
- the processor (502) is configured to fetch and execute computer-readable instructions stored in the memory (504).
- the memory (504) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
- the memory (504) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
- the memory (504) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and the like.
- the system (108) may include an interface(s).
- the interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like.
- the interface(s) may facilitate communication for the system.
- the interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database.
- the processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
- the information related to the requests may further be configured to render on the user interface (506).
- the user interface (506) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art.
- the user interface (506) may be rendered on the display (508), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology.
- the display (508) may be integrated within the system (108) or connected externally.
- the input device(s) (510) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
- the centralized database (514) may be communicably connected to the processor (502) and the memory (504).
- the centralized database (514) may be configured to store and retrieve the requests pertaining to features, services, or workflows of the enterprise, e-commerce, and finance domains, as well as access rights, attributes, approved lists, and authentication data provided by an administrator.
- the remote server (104) may allow the system (108) to update/create/delete one or more parameters of their information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs.
- the centralized database (514) may be outside the system (108) and communicated through a wired medium and wireless medium.
- the processor (502), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (502).
- programming for the processor (502) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (502) may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the memory (504) may store instructions that, when executed by the processing resource, implement the processor (502).
- system (108) may comprise the memory (504) storing the instructions and the processing resource to execute the instructions, or the memory (504) may be separate but accessible to the system (108) and the processing resource.
- the processor (502) may be implemented by an electronic circuitry.
- the processor (502) includes the request cache unit (302), a dynamic routing manager (516), a dynamic activator (518), a checking module (520), and a command line interface (604) (explained in detail in FIG. 6).
- the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), the checking module (520) and the command line interface (604) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (502).
- programming for the processor (502) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (502) may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the memory (504) may store instructions that, when executed by the processing resource, implement the processor.
- system (108) may comprise the memory (504) storing the instructions and the processing resource to execute the instructions, or the memory (504) may be separate but accessible to the system (108) and the processing resource.
- the processor (502) may be implemented by the electronic circuitry.
- the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), the checking module (520) and the command line interface (604) are communicably coupled to each other.
- the input interface (626) receives the one or more requests related to the order for execution by the one or more network nodes (106a-106n).
- the dynamic activator (518) retrieves the plurality of details of the workflow from the cache data store (304). The plurality of details includes the sequence of execution of the one or more requests, and data required for the execution.
- the checking module (520) determines availability of the one or more network nodes (106a-106n) using the monitoring unit (202). Further, the dynamic routing manager (516) routes the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available.
- the dynamic routing manager (516) pauses the one or more requests from reaching the one or more network nodes (106a-106n), when the one or more network nodes are determined to be unavailable.
- the user interface includes the checking module (520), the dynamic routing manager (516), and the command line interface (604). The user interface determines the availability of the one or more network nodes (106a-106n).
- the user interface routes the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available.
- the checking module (520) checks if the one or more network nodes (106a-106n) are determined to be available upon pausing the one or more requests from reaching the one or more network nodes (106a-106n).
- the user interface (506) facilitates creation of the workflow by a user (e.g., technician, service provider or the like).
- the request cache unit (302) stores the one or more requests that are paused due to unavailability of the one or more network nodes (106a-106n), wherein the one or more requests are stored until the one or more network nodes (106a-106n) are available.
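A minimal sketch of the request cache unit (302) described above, holding paused requests per node until the node becomes available again; the class name and the FIFO-queue design are assumptions for illustration only.

```python
from collections import defaultdict, deque


class RequestCache:
    """Illustrative sketch of the request cache unit (302): stores requests
    paused due to node unavailability until the node is available again."""

    def __init__(self):
        self.queues = defaultdict(deque)  # node id -> FIFO of paused requests

    def store(self, node_id, request):
        # The request is kept until the target node is back in operation.
        self.queues[node_id].append(request)

    def drain(self, node_id):
        """Return the paused requests for a node, in arrival order, so the
        routing manager can re-route them once the node is available."""
        return list(self.queues.pop(node_id, deque()))


# Usage: two requests paused while node 106b is in maintenance, then replayed.
cache = RequestCache()
cache.store("106b", "order-request-1")
cache.store("106b", "order-request-2")
print(cache.drain("106b"))  # ['order-request-1', 'order-request-2']
```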
- the dynamic activator (518) executes the workflow based on the plurality of details. Also, the dynamic routing manager (516) routes the one or more requests towards the one or more network nodes (106a-106n) for execution in accordance with the workflow when the one or more network nodes (106a-106n) are available.
- FIG. 6 illustrates a block diagram of the processor (502) included in the system (108) for handling the southbound interface, according to one or more embodiments of the present invention.
- the processor (502) facilitates supporting creation of dynamic uniform resource locator (URL) context for the interface in the network (106).
- the processor (502) includes the cache data store (304), the dynamic routing manager (516), the command line interface (604), an execution engine (606) having the dynamic activator (518), a workflow manager (610), a queuing engine (612) having a message broker (614), an operation and management module (618), a distributed database (620) having a distributed data lake (622), and a load balancer (624).
- the processor (502) is coupled with the graph database (512) within the system (108) or outside the system (108).
- the graph database (512) is coupled with the workflow manager (610), where the workflow manager (610) is coupled with the execution engine (606), the operation and management module (618) and the load balancer (624).
- the execution engine (606) is coupled with the distributed database (620).
- any change in context is possible via the user interface (506).
- the user can create a state in the workflow via the user interface (506).
- the user can create the workflow using the user interface (506).
- the workflow includes or defines what needs to be sent in the API context and the workflow gets stored in the distributed data lake (622) and the cache data store (304).
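As an illustration of the point above, a workflow record of this kind might be written to both stores; the field names (`steps`, `endpoint`, `params`) and values are hypothetical, assumed only for the sketch.

```python
# Hypothetical workflow record based on the details named in the disclosure:
# a sequence of execution, node endpoints, parameters, and request data.
workflow = {
    "name": "provision-order",
    "steps": [
        {"node": "106a", "endpoint": "/activate", "params": {"plan": "basic"}},
        {"node": "106b", "endpoint": "/configure", "params": {"qos": "gold"}},
    ],
}

# The record is stored in both the cache data store (304) and the
# distributed data lake (622), so the execution engine can read it quickly
# from the cache while the data lake keeps the durable copy.
cache_data_store = {}
distributed_data_lake = {}
for store in (cache_data_store, distributed_data_lake):
    store[workflow["name"]] = workflow

print(cache_data_store["provision-order"]["steps"][0]["endpoint"])  # /activate
```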
- FIG. 7 is an example schematic representation of the system (700) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present system.
- FIG. 7 describes the system (700) for executing requests in the network (106). It is to be noted that the embodiment with respect to FIG. 7 will be explained with respect to the first network node (106a) and the system (108) for the purpose of description and illustration, and should not be construed as limiting the scope of the present disclosure.
- the first network node (106a) includes one or more primary processors (705) communicably coupled to the one or more processors (502) of the system (108).
- the one or more primary processors (705) are coupled with a memory (710) storing instructions which are executed by the one or more primary processors (705). Execution of the stored instructions by the one or more primary processors (705) enables the first network node (106a) to execute the requests in the network (106).
- the one or more processors (502) is configured to transmit the response content related to the workflow request to the first network node (106a). More specifically, the one or more processors (502) of the system (108) is configured to transmit the response content from a kernel (715) to at least the first network node (106a).
- the kernel (715) is a core component serving as the primary interface between hardware components of the first network node (106a) and the system (108).
- the kernel (715) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the network (106).
- the resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
- the system (108) includes the one or more processors (502), the memory (504), the input/output interface unit (506), the display (508), and the input device (510).
- the operations and functions of the one or more processors (502), the memory (504), the input/output interface unit (506), the display (508), and the input device (510) are already explained in FIG. 5.
- the processor (502) includes the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), and the command line interface (604).
- the operations and functions of the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), and the command line interface (604) are already explained in FIG. 6. For the sake of brevity, the same operations (or repeated information) are not explained again in this disclosure.
- FIG. 8 is a flow chart (800) illustrating a method for executing requests in the network (106), according to various embodiments of the present system.
- the method includes receiving the one or more requests related to the order for execution by the one or more network nodes (106a-106n).
- the method allows the input interface (626) (e.g., command line interface (604) or the like) to receive the one or more requests related to the order for execution by the one or more network nodes (106a-106n).
- the method includes retrieving the plurality of details of the workflow from the cache data store (304).
- the method allows the dynamic activator (518) to retrieve the plurality of details of the workflow from the cache data store (304).
- the method includes determining the availability of the one or more network nodes (106a-106n) using the monitoring unit (202). In an embodiment, the method allows the checking module (520) to determine availability of the one or more network nodes (106a-106n) using the monitoring unit (202).
- the method includes routing the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes (106a-106n) are determined to be available.
- the method allows the dynamic routing manager (516) to route the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes are determined to be available.
- the method includes pausing the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable. In an embodiment, the method allows the dynamic routing manager (516) to pause the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
- at 812, the method includes facilitating creation of the workflow via the user interface (506), wherein the plurality of details comprises a sequence of execution of the one or more requests, node details such as endpoints, parameters, and signatures, and data required for the execution. In an embodiment, the method allows the user interface (506) to facilitate creation of the workflow, where the plurality of details includes the sequence of execution of the one or more requests, and the data required for the execution.
- the method includes storing the workflow in the cache data store (304).
- the method allows the one or more processors (502) to store the workflow in the cache data store (304).
- the method can be used to send dynamic attributes in the context of the API, without having to undergo any change in code, so as to facilitate changes in the dynamic attributes of the API.
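The receive, retrieve, check, route, and pause steps listed above can be condensed into one loop. The sketch below is a hedged illustration: the function signature, the shape of `workflow_details`, and the availability callback are assumptions, not the claimed method.

```python
def execute_requests(requests, workflow_details, is_available, request_cache):
    """Sketch of the method of the flow chart: route each request to the node
    the workflow assigns it when that node is available, otherwise pause the
    request in the cache until the node is back in operation."""
    routed = []
    for request in requests:
        node = workflow_details[request]  # node assigned by the workflow
        if is_available(node):
            routed.append((request, node))
        else:
            request_cache.setdefault(node, []).append(request)
    return routed


# Usage: node 106b is in maintenance, so its request is paused in the cache.
cache = {}
routed = execute_requests(
    ["r1", "r2"],
    {"r1": "106a", "r2": "106b"},
    lambda node: node == "106a",
    cache,
)
print(routed)  # [('r1', '106a')]
print(cache)   # {'106b': ['r2']}
```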
- Monitoring unit, i.e., AI/ML
- AI/ML Monitoring unit
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present disclosure relates to a method of executing requests in a network (106) by one or more processors (502). The method includes receiving one or more requests related to an order for execution by one or more network nodes (106a-106n). Further, the method includes retrieving a plurality of details of a workflow from a cache data store (304). Further, the method includes determining availability of the one or more network nodes using a monitoring unit (e.g., AI/ML unit or the like) (202). Further, the method includes routing the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the method includes pausing the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
Description
METHOD AND SYSTEM FOR EXECUTING REQUESTS IN NETWORK
FIELD OF THE INVENTION
[0001] The present invention relates to the field of data communication in networks, and more particularly, the invention pertains to a method and a system for handling an interface (e.g., southbound interface or the like) in the networks.
BACKGROUND OF THE INVENTION
[0002] In data communication, multiple network nodes are present to execute workflow activities. In an example network where there exist five network nodes, there is a possibility that one or more network nodes go into a downtime mode or a maintenance mode. Currently, when a network node goes into the maintenance mode, a request that comes to a fulfilment management system (FMS) fails, as one of the network nodes is not working properly.
[0003] In order to ensure serving of requests that are targeted towards other network nodes and to provision other requests, there is a need for a method and a system that can receive requests from a Northbound Interface targeted towards the network nodes that are in operation, while pausing only those requests that are targeted towards the affected network node.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a system and a method for executing requests in a network.
[0005] In one aspect of the present invention, a method of executing requests in a network is described. The method includes receiving, by one or more processors, one or more requests related to an order for execution by one or more network nodes. Further, the method includes retrieving, by the one or more processors, a plurality of
details of a workflow from a cache data store. Further, the method includes determining, by the one or more processors, availability of the one or more network nodes using a monitoring unit. Further, the method includes routing, by the one or more processors, the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the method includes pausing, by the one or more processors, the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
[0006] In an embodiment, pausing the one or more requests from reaching the one or more network nodes further includes the step of checking, by the one or more processors, if the one or more network nodes are determined to be available.
[0007] In an embodiment, the one or more requests that are paused are stored in a request cache until the one or more network nodes are available, wherein the one or more requests are routed towards the one or more network nodes for execution in accordance with the workflow when the one or more network nodes are available.
[0008] In an embodiment, the one or more network nodes are determined to be unavailable when the one or more network nodes go into a maintenance mode.
[0009] In an embodiment, further, the method includes facilitating, by the one or more processors, creating the workflow via an interface (e.g., user interface, command line interface (CLI) or the like), wherein the plurality of details comprises a sequence of execution of the one or more requests, and data required for the execution. Further, the method includes storing, by the one or more processors, the workflow in the cache data store.
[0010] In another aspect of the present invention, a system for executing requests in a network is described. The system includes an input interface (e.g., command line interface or the like) configured to receive one or more requests related to an order for execution by one or more network nodes. Further, the system includes a dynamic
activator configured to retrieve a plurality of details of a workflow from a cache data store. Further, the system includes a checking module configured to determine availability of the one or more network nodes using a monitoring unit. Further, the system includes a dynamic routing manager configured to route the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the dynamic routing manager is configured to pause the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
[0011] In an embodiment, the system further includes a dynamic activator configured to execute the workflow based on the plurality of details.
[0012] In an embodiment, the system further includes a request cache configured to store the one or more requests that are paused due to unavailability of the one or more network nodes, and wherein the one or more requests are stored until the one or more network nodes are available.
[0013] In an embodiment, the system further includes the dynamic routing manager further configured to route the one or more requests towards the one or more network nodes for execution in accordance with the workflow when the one or more network nodes are available.
[0014] In one aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: receive one or more requests related to an order for execution by one or more network nodes; retrieve a plurality of details of a workflow from a cache data store; determine availability of the one or more network nodes using a monitoring unit; route the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available; and pause the one or more requests from reaching the one
or more network nodes, when the one or more network nodes are determined to be unavailable.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of an environment for executing requests in a network, according to various embodiments of the present disclosure;
[0018] FIG. 2 is an example workflow diagram illustrating handling of a southbound interface via the system illustrated in FIG. 1, according to an embodiment of the present invention;
[0019] FIG. 3 is an example workflow diagram illustrating a method of handling the southbound interface when the system is paused for a network node, according to an embodiment of the present invention;
[0020] FIG. 4 is an example workflow diagram illustrating a method of resuming the system that is paused upon receiving a southbound request, according to an embodiment of the present invention;
[0021] FIG. 5 is a block diagram of the system of FIG. 1, according to various embodiments of the present disclosure;
[0022] FIG. 6 illustrates a block diagram of a processor included in the system for handling southbound interface, according to one or more embodiments of the present invention;
[0023] FIG. 7 is an example schematic representation of the system of FIG. 1 in which various entities operations are explained, according to various embodiments of the present system; and
[0024] FIG. 8 shows a sequence flow diagram illustrating a method for executing requests in the network, according to various embodiments of the present disclosure.
[0025] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding
the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
[0026] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s)
based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] Various embodiments of the invention provide a method of executing requests in a network. The method includes receiving, by one or more processors, one or more requests related to an order for execution by one or more network nodes. Further, the method includes retrieving, by the one or more processors, a plurality of details of a workflow from a cache data store. Further, the method includes determining, by the one or more processors, availability of the one or more network nodes using a monitoring unit. Further, the method includes routing, by the one or more processors, the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes are determined to be available. Further, the method includes pausing, by the one or more processors, the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
[0031] The system, also known as a fulfilment management system (FMS), identifies downtime maintenance for a southbound interface in the network node and pauses activities to that network node, while continuing to take requests from the northbound interface and keeping them in a queue for such southbound interface. When the downtime network node is back in operation, the FMS resumes sending requests to such network node. An Artificial Intelligence / Machine Learning (AI/ML) module (or monitoring unit) is configured to detect downtime of the network node and automatically pause the requests to such node.
[0032] FIG. 1 illustrates an exemplary block diagram of an environment (100) for executing requests in a network (106), according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) (102-1, 102-2, ..., 102-n). The at least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via the network (106).
[0033] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, or a landline with an assigned extension within the network.
[0034] The plurality of UEs (102) may comprise a memory such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory. In one implementation, the memory might be configured or designed to store data. The data may pertain to attributes and access rights specifically defined for the plurality of UEs (102). The UE (102) may be accessed by the user, to receive the requests related to an order determined by the system (108). The network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0035] The system (108) is communicatively coupled to a server (104) via the network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various
entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
[0036] The network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0037] The network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, VoIP, or some combination thereof.
[0038] The one or more network nodes (106a-106n) can be, for example, but not limited to a base station that is located in the fixed or stationary part of the network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or mobile
transceiver. Such a radio signal may comply with radio signals as, for example, standardized by 3GPP or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
[0039] The system (108) may include one or more processors (502) coupled with a memory (504), wherein the memory (504) may store instructions which when executed by the one or more processors (502) may cause the system (108) to execute requests in the network (106) or the server (104). An exemplary representation of the system (108) for such purpose, in accordance with embodiments of the present disclosure, is shown in FIG. 2 as system (108). In an embodiment, the system (108) may include one or more processor(s) (502). The one or more processor(s) (502) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (502) may be configured to fetch and execute computer-readable instructions stored in the memory (504) of the system (108). The memory (504) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service.
[0040] The environment (100) further includes the system (108) communicably coupled to the remote server (104) and each UE of the plurality of UEs (102) via the network (106). The remote server (104) is configured to execute the requests in the network (106).
[0041] The system (108) is adapted to be embedded within the remote server (104) or is embedded as the individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The
system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for the workflow, which gets reflected in real time, independent of the complexity of the network.
[0042] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprises, ecommerce, finance to update/create/delete information related to the requests for the workflow in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the workflow and perform real-time analysis in the system (108).
[0043] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an implementation, system (108) may operate at various entities or single entity (for example include, but is not limited to, a vendor side, service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, ecommerce side, finance side, a defence facility side, or any other facility) that provides service.
[0044] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0045] In FIG. 2, for any request coming at (step 1) from the NB Interface (206) to the system (108), for a particular process, before executing the request, a monitoring
unit (e.g., AI/ML unit) (202) will check for southbound interface availability. The AI/ML unit (202) can be, for example, but not limited to, a linear regression module, a decision trees module, a random forests module, an AutoRegressive Integrated Moving Average (ARIMA) module, anomaly detection models, or the like.
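The availability decision could, for instance, be made by a simple anomaly-detection model of the kind listed above. The sketch below flags a node as unavailable when its latest latency reading is a statistical outlier relative to recent history; the function name, the z-score approach, and the threshold are illustrative assumptions, not the disclosed AI/ML unit.

```python
def is_node_available(latency_history, new_latency, threshold=3.0):
    """Hypothetical sketch: treat a node as unavailable when its latest
    latency reading is a z-score outlier against its recent history."""
    n = len(latency_history)
    mean = sum(latency_history) / n
    variance = sum((x - mean) ** 2 for x in latency_history) / n
    std = variance ** 0.5
    if std == 0:
        # No variation in history: only an identical reading looks normal.
        return new_latency == mean
    return abs(new_latency - mean) / std <= threshold


# Usage: a latency spike suggests the node has gone into downtime.
history = [10.0, 11.0, 9.0, 10.0, 10.0]  # recent latency samples (ms)
print(is_node_available(history, 10.5))   # True
print(is_node_available(history, 500.0))  # False
```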
[0046] If a southbound interface (e.g., the second network node (106b)) is not available, then the system (108) will be automatically paused for that interface. If a maintenance activity is going on in the second network node (106b), the monitoring unit (e.g., AI/ML unit) (202) will identify that the second network node (106b) is not available, and then the request (step 1) from the NB Interface (206) will not be forwarded from the system (108) to the second network node (106b); the request to this node will be paused. However, the operations to the first network node (106a) shall continue at (step 2). The paused requests shall be stored in the queue of the NB Interface (206). The requests emitted for that interface (e.g., the second network node (106b)) will be maintained in the queue.
[0047] The system (108) keeps taking requests from the NB Interface (206) and sending them to the first network node (106a) while the second network node (106b) remains unavailable. Once the maintenance activity for the second network node (106b) is complete, the paused requests are sent to the second network node (106b) at (step 3). The response is shared with the NB Interface (206) at (step 4).
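The resume step, that is, draining the paused requests once maintenance completes, could look like the following sketch; the function and argument names are hypothetical illustrations, not names from the disclosure.

```python
from collections import deque

def resume_node(node_id, paused, send):
    """Sketch: once a node is available again, drain its queue of paused
    requests in FIFO order so the original workflow sequence is preserved."""
    queue = paused.get(node_id, deque())
    count = 0
    while queue:
        send(node_id, queue.popleft())  # replay in original arrival order
        count += 1
    return count                        # number of requests replayed
```

Draining in FIFO order is what allows the system to "cover up for the time lost" without reordering the workflow.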
[0048] In an embodiment, if there are more orders for one workflow and fewer orders for another workflow, the monitoring unit (e.g., the AI/ML unit) (202) identifies this and prioritizes the workflows having more workload, so that they are executed at a faster rate. Identifying when a network node is in downtime and pausing the flow of requests, and identifying when the network node comes back up, resuming the workflow execution, and making up for the time lost, is an inventive step of this invention.
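The "more workload, higher priority" rule could be approximated with a simple max-heap over pending order counts. This is a hedged sketch of the prioritization outcome only, not of the AI/ML model itself; the function name and input shape are assumptions.

```python
import heapq

def prioritize_workflows(backlogs):
    """Sketch: order workflow names so those with more pending orders are
    executed first (ties broken alphabetically by workflow name)."""
    heap = [(-count, name) for name, count in backlogs.items()]
    heapq.heapify(heap)                    # min-heap on negated counts = max-heap
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```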
[0049] FIG. 3 is an example workflow diagram illustrating a method of handling the southbound interface when the system (108) is paused for a network node, according to an embodiment of the present invention. As shown in FIG. 3, at (step 1) the request for an order is received by the system (108). At (step 2), the workflow details are received by the system (108) from the cache data store (304). At (step 3), the workflow execution starts. At (step 4), the availability of the first network node (106a) is checked. If the first network node (106a) is available, then at (step 5) the availability status is given to the monitoring unit (e.g., the AI/ML unit) (202) using metrics. The metrics can be, for example, but not limited to, a latency, a throughput, a packet loss, a signal strength, a load, a bandwidth, a health status, or the like. At (step 6), if the monitoring unit (e.g., the AI/ML unit) (202) determines that the first network node (106a) is available, the request is sent to the first network node (106a) from the system (108).
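An availability verdict derived from such metrics might be computed as below. The metric keys and threshold values are illustrative assumptions, not values taken from the disclosure.

```python
def node_available(metrics, max_latency_ms=200.0, max_packet_loss=0.01):
    """Sketch: declare a node available only when its reported latency and
    packet loss are within (hypothetical) acceptable bounds. Missing metrics
    are treated pessimistically, i.e. as unavailable."""
    latency_ok = metrics.get("latency_ms", float("inf")) <= max_latency_ms
    loss_ok = metrics.get("packet_loss", 1.0) <= max_packet_loss
    return latency_ok and loss_ok
```

Treating missing metrics as failure errs on the side of pausing, which matches the design intent of not sending requests to a node whose state is unknown.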
[0050] The monitoring unit (e.g., AI/ML unit) (202) then checks for the availability of the second network node (106b) at (step 7). If the second network node (106b) is not available, it responds at (step 8) that it is not available.
[0051] At (step 9), all the requests to the second network node (106b) are paused. At (step 10), the requests to the second network node (106b) are then stored in a request cache unit (302). When the second network node (106b) becomes available, the resume operations are executed as shown in FIG. 4.
[0052] FIG. 4 is an example workflow diagram illustrating a method of resuming the system that is paused upon receiving the southbound request, according to an embodiment of the present invention.
[0053] At (step 1), the request for any order is received by the system (108). At (step 2), the workflow details are fetched from the cache data store (304). At (step 3), the workflow starts execution.
[0054] At (step 4), the availability of the second network node (106b) is checked. At (step 5), if the second network node (106b) is available, it responds with an 'OK'. At (step 6), the monitoring unit (e.g., the AI/ML unit) (202) then sends a signal to the system (108) to resume operations for the second network node (106b).
[0055] At (step 7), the system (108) accesses the request cache unit (302) to retrieve the queue of requests for the second network node (106b). At (step 8), the requests for the second network node (106b) are then executed.
[0056] FIG. 5 illustrates a block diagram of the system (108) provided for executing requests in the network (106), according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes the one or more processors (502), the memory (504), an input/output interface unit (506), a display (508), an input device (510), a graph database (512), and a centralized database (514). The one or more processors (502), hereinafter referred to as the processor (502), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement, without deviating from the scope of the present disclosure.
[0057] The information related to the request may be provided or stored in the memory (504) of the system (108). Among other capabilities, the processor (502) is configured to fetch and execute computer-readable instructions stored in the memory (504). The memory (504) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (504) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0058] The memory (504) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0059] The information related to the requests may further be configured to render on the user interface (506). The user interface (506) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (506) may be rendered on the display (508), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (508) may be integrated within the system (108) or connected externally. Further, the input device(s) (510) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0060] The centralized database (514) may be communicably connected to the processor (502) and the memory (504). The centralized database (514) may be configured to store and retrieve requests pertaining to features, services, or workflows of an enterprise, an ecommerce entity, or a finance entity, as well as access rights, attributes, an approved list, and authentication data provided by an administrator. Further, the remote server (104) may allow the system (108) to update, create, or delete one or more parameters of the information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the centralized database (514) may be outside the system (108) and communicate through a wired medium or a wireless medium.
[0061] Further, the processor (502), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (502). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (502) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor (502) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory (504) may store instructions that, when executed by the processing resource, implement the processor (502). In such examples, the system (108) may comprise the memory (504) storing the instructions and the processing resource to execute the instructions, or the memory (504) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (502) may be implemented by electronic circuitry.
[0062] In order for the system (108) to execute the requests in the network (106), the processor (502) includes the request cache unit (302), a dynamic routing manager (516), a dynamic activator (518), a checking module (520) and a command line interface (604) (explained in detailed in FIG. 6).
[0063] The request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), the checking module (520), and the command line interface (604) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (502). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (502) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor (502) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory (504) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (504) storing the instructions and the processing resource to execute the instructions, or the memory (504) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (502) may be implemented by electronic circuitry.
[0064] In order for the system (108) to execute the requests in the network (106), the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), the checking module (520) and the command line interface (604) are communicably coupled to each other.
[0065] In an embodiment, the input interface (626) (e.g., the command line interface (604) or the like) receives the one or more requests related to the order for execution by the one or more network nodes (106a-106n). The dynamic activator (518) retrieves the plurality of details of the workflow from the cache data store (304). The plurality of details includes the sequence of execution of the one or more requests and the data required for the execution. The checking module (520) determines the availability of the one or more network nodes (106a-106n) using the monitoring unit (202). Further, the dynamic routing manager (516) routes the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available. Alternatively, the dynamic routing manager (516) pauses the one or more requests from reaching the one or more network nodes (106a-106n), when the one or more network nodes are determined to be unavailable. In an implementation, the user interface includes the checking module (520), the dynamic routing manager (516), and the command line interface (604). The user interface determines the availability of the one or more network nodes (106a-106n) using the monitoring unit (202). Further, the user interface routes the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available.
[0066] In an embodiment, the checking module (520) checks whether the one or more network nodes (106a-106n) are determined to be available upon pausing the one or more requests from reaching the one or more network nodes (106a-106n).
[0067] Further, the user interface (506) facilitates creation of the workflow by a user (e.g., technician, service provider or the like). Further, the request cache unit (302) stores the one or more requests that are paused due to unavailability of the one or more network nodes (106a-106n), wherein the one or more requests are stored until the one or more network nodes (106a-106n) are available.
[0068] Further, the dynamic activator (518) executes the workflow based on the plurality of details. Also, the dynamic routing manager (516) routes the one or more requests towards the one or more network nodes (106a-106n) for execution in accordance with the workflow when the one or more network nodes (106a-106n) are available.
[0069] FIG. 6 illustrates a block diagram of the processor (502) included in the system (108) for handling the southbound interface, according to one or more embodiments of the present invention. The processor (502) facilitates supporting creation of dynamic uniform resource locator (URL) context for the interface in the network (106). The processor (502) includes the cache data store (304), the dynamic routing manager (516), the command line interface (604), an execution engine (606) having the dynamic activator (518), a workflow manager (610), a queuing engine (612) having a message broker (614), an operation and management module (618), a distributed database (620) having a distributed data lake (622), and a load balancer (624). The processor (502) is coupled with the graph database (512) within the system
(108) or outside the system (108). The graph database (512) is coupled with the workflow manager (610), where the workflow manager (610) is coupled with the execution engine (606), the operation and management module (618) and the load balancer (624). The execution engine (606) is coupled with the distributed database (620).
[0070] In case the API is a southbound interface, any change in context is possible via the user interface (506). The user can create a state in the workflow via the user interface (506), and can create the workflow using the user interface (506). The workflow includes or defines what needs to be sent in the API context, and the workflow is stored in the distributed data lake (622) and the cache data store (304).
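A workflow record of the kind persisted to the distributed data lake (622) and cache data store (304) might be serialized as below. The field names, endpoints, and parameter values are hypothetical; the record merely captures what the text requires, namely the execution sequence and the per-step API context.

```python
import json

# Hypothetical workflow record: an ordered list of steps, each naming the
# target node and the API context (endpoint and parameters) to send.
workflow = {
    "workflow_id": "provision-order-001",
    "steps": [
        {"node": "106a", "endpoint": "/activate", "params": {"plan": "basic"}},
        {"node": "106b", "endpoint": "/configure", "params": {"vlan": 42}},
    ],
}

serialized = json.dumps(workflow)   # form in which it could be persisted
restored = json.loads(serialized)   # form fetched back at execution time
```

Keeping the API context as data rather than code is what lets context changes be made via the user interface (506) without a code change.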
[0071] When an order comes to the workflow manager (610), the workflow details are fetched and the dynamic activator (518) is asked to execute it. To execute it, the dynamic activator (518) checks in the context which parameters need to be sent. The detailed workflow is explained with reference to FIG. 2 and FIG. 3.
[0072] FIG. 7 is an example schematic representation of the system (700) of FIG. 1, in which the operations of various entities are explained, according to various embodiments of the present system. Referring to FIG. 7, FIG. 7 describes the system (700) for executing requests in the network (106). It is to be noted that the embodiment with respect to FIG. 7 will be explained with respect to the first network node (106a) and the system (108) for the purpose of description and illustration, and should not be construed as limiting the scope of the present disclosure.
[0073] As mentioned earlier, the first network node (106a) includes one or more primary processors (705) communicably coupled to the one or more processors (502) of the system (108). The one or more primary processors (705) are coupled with a memory (710) storing instructions which are executed by the one or more primary processors (705). Execution of the stored instructions by the one or more primary processors (705) enables the first network node (106a) to execute the requests in the network (106).
[0074] As mentioned earlier, the one or more processors (502) are configured to transmit the response content related to the workflow request to the first network node (106a). More specifically, the one or more processors (502) of the system (108) are configured to transmit the response content from a kernel (715) to at least the first network node (106a). The kernel (715) is a core component serving as the primary interface between hardware components of the first network node (106a) and the system (108). The kernel (715) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the network (106). The resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0075] As per the illustrated embodiment, the system (108) includes the one or more processors (502), the memory (504), the input/output interface unit (506), the display (508), and the input device (510). The operations and functions of the one or more processors (502), the memory (504), the input/output interface unit (506), the display (508), and the input device (510) are already explained with reference to FIG. 5. For the sake of brevity, the same operations are not repeated in the present disclosure.
[0076] Further, the processor (502) includes the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), and the command line interface (604). The operations and functions of the request cache unit (302), the dynamic routing manager (516), the dynamic activator (518), and the command line interface (604) are already explained with reference to FIG. 6. For the sake of brevity, the same operations are not repeated in the present disclosure.
[0077] FIG. 8 is a flow chart (800) illustrating a method for executing requests in the network (106), according to various embodiments of the present system.
[0078] At 802, the method includes receiving the one or more requests related to the order for execution by the one or more network nodes (106a-106n). In an embodiment, the method allows the input interface (626) (e.g., command line interface (604) or the like) to receive the one or more requests related to the order for execution by the one or more network nodes (106a-106n).
[0079] At 804, the method includes retrieving the plurality of details of the workflow from the cache data store (304). In an embodiment, the method allows the dynamic activator (518) to retrieve the plurality of details of the workflow from the cache data store (304).
[0080] At 806, the method includes determining the availability of the one or more network nodes (106a-106n) using the monitoring unit (202). In an embodiment, the method allows the checking module (520) to determine availability of the one or more network nodes (106a-106n) using the monitoring unit (202).
[0081] At 808, the method includes routing the one or more requests towards the one or more network nodes according to the workflow, when the one or more network nodes (106a-106n) are determined to be available. In an embodiment, the method allows the dynamic routing manager (516) to route the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes are determined to be available.
[0082] At 810, the method includes pausing the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable. In an embodiment, the method allows the dynamic routing manager (516) to pause the one or more requests from reaching the one or more network nodes, when the one or more network nodes are determined to be unavailable.
[0083] At 812, the method includes facilitating creating the workflow via the user interface (506), wherein the plurality of details comprises a sequence of execution of the one or more requests, node details such as endpoints, parameters, and signatures, and data required for the execution. In an embodiment, the method allows the user interface (506) to facilitate creating the workflow, where the plurality of details includes a sequence of execution of the one or more requests and data required for the execution.
[0084] At 814, the method includes storing the workflow in the cache data store (304). In an embodiment, the method allows the one or more processors (502) to store the workflow in the cache data store (304).
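The control flow of steps 802-814 can be condensed into the following sketch. All helper names and data shapes are hypothetical; the intent is only to show the fetch / check / route-or-pause sequence the flow chart describes.

```python
def execute_order(order, cache_store, is_available, paused):
    """Sketch of FIG. 8: fetch the workflow details (steps 802-804), check
    node availability (step 806), then route (step 808) or pause (step 810)
    each step of the workflow."""
    workflow = cache_store[order["workflow_id"]]       # steps 802-804
    outcome = []
    for step in workflow["steps"]:
        node = step["node"]
        if is_available(node):                         # step 806
            outcome.append(("routed", node))           # step 808
        else:
            paused.setdefault(node, []).append(step)   # step 810: hold for later
            outcome.append(("paused", node))
    return outcome
```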
[0085] The method can be used to send dynamic attributes in the context of the API, without having to undergo any change in code, so as to facilitate changes in the dynamic attributes of the API.
[0086] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-8) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0087] The present invention offers multiple advantages over the prior art, and the above are a few examples emphasizing some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0088] Environment - 100
[0089] UEs - 102, 102-1 - 102-n
[0090] Server - 104
[0091] Network - 106
[0092] Network node - 106a-106n
[0093] System - 108
[0094] Monitoring unit (i.e., AI/ML unit) - 202
[0095] Queue of NB Interface requests - 204
[0096] NB Interface - 206
[0097] Request cache unit - 302
[0098] Cache data store - 304
[0099] Processor - 502
[00100] Memory - 504
[00101] User Interface - 506
[00102] Display - 508
[00103] Input device - 510
[00104] Graph Database - 512
[00105] Centralized Database - 514
[00106] Dynamic routing manager - 516
[00107] Dynamic activator - 518
[00108] Checking module - 520
[00109] Command line interface - 604
[00110] Execution engine - 606
[00111] Workflow manager - 610
[00112] Queuing engine - 612
[00113] Message broker - 614
[00114] Operation and management module - 618
[00115] Distributed database - 620
[00116] Distributed data lake - 622
[00117] Load balancer - 624
[00118] Input interface - 626
[00119] System - 700
[00120] Primary processors -705
[00121] Memory - 710
[00122] Kernel - 715
Claims
1. A method of executing requests in a network (106), the method comprising the steps of: receiving, by one or more processors (502), one or more requests related to an order for execution by one or more network nodes (106a-106n); retrieving, by the one or more processors (502), a plurality of details of a workflow from a cache data store (304); determining, by the one or more processors (502), availability of the one or more network nodes (106a-106n) using a monitoring unit (202); routing, by the one or more processors (502), the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available; and pausing, by the one or more processors (502), the one or more requests from reaching the one or more network nodes (106a-106n), when the one or more network nodes (106a-106n) are determined to be unavailable.
2. The method as claimed in claim 1, wherein, upon pausing the one or more requests from reaching the one or more network nodes (106a-106n), the method comprises the step of checking, by the one or more processors (502), whether the one or more network nodes (106a-106n) are determined to be available.
3. The method as claimed in claim 1, wherein the one or more requests that are paused are stored in a request cache unit (302) until the one or more network nodes (106a-106n) are available, and wherein the one or more requests are routed towards the one or more network nodes (106a-106n) for execution in accordance with the workflow when the one or more network nodes (106a-106n) are available.
4. The method as claimed in claim 1, wherein the one or more network nodes (106a-106n) are determined to be unavailable when the one or more network nodes (106a-106n) go into a maintenance mode.
5. The method as claimed in claim 1, further comprising: facilitating, by the one or more processors (502), creating the workflow via an input interface (626), wherein the plurality of details comprises a sequence of execution of the one or more requests, and data required for the execution; and storing, by the one or more processors (502), the workflow in the cache data store (304).
6. A system (108) for executing requests in a network, the system (108) comprising: an input interface (626) configured to: receive one or more requests related to an order for execution by one or more network nodes (106a-106n); a dynamic activator (518) configured to: retrieve a plurality of details of a workflow from a cache data store (304); a checking module (520) configured to: determine availability of the one or more network nodes (106a-106n) using a monitoring unit (202); a dynamic routing manager (516) configured to: route the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available; and pause the one or more requests from reaching the one or more network nodes (106a-106n), when the one or more network nodes are determined to be unavailable.
7. The system (108) as claimed in claim 6, comprising:
the checking module (520) configured to check whether the one or more network nodes (106a-106n) are determined to be available upon pausing the one or more requests from reaching the one or more network nodes (106a-106n).
8. The system (108) as claimed in claim 6, wherein the system (108) further comprises: a user interface (506) configured to: facilitate creation of the workflow by a user, wherein the plurality of details comprises a sequence of execution of the one or more requests, and data required for the execution.
9. The system (108) as claimed in claim 6, wherein the dynamic activator (518) is further configured to: execute the workflow based on the plurality of details.
10. The system (108) as claimed in claim 6, wherein the system (108) further comprises: a request cache unit (302) configured to: store the one or more requests that are paused due to unavailability of the one or more network nodes (106a-106n), and wherein the one or more requests are stored until the one or more network nodes (106a-106n) are available.
11. The system (108) as claimed in claim 10, wherein the dynamic routing manager (516) is further configured to: route the one or more requests towards the one or more network nodes (106a- 106n) for execution in accordance with the workflow when the one or more network nodes (106a-106n) are available.
12. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor (502), cause the processor to: receive one or more requests related to an order for execution by one or more network nodes (106a-106n); retrieve a plurality of details of a workflow from a cache data store (304); determine availability of the one or more network nodes (106a-106n) using a monitoring unit (202); route the one or more requests towards the one or more network nodes (106a-106n) according to the workflow, when the one or more network nodes (106a-106n) are determined to be available; and pause the one or more requests from reaching the one or more network nodes (106a-106n), when the one or more network nodes (106a-106n) are determined to be unavailable.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202321047882 | 2023-07-16 | ||
| IN202321047882 | 2023-07-16 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025017691A1 (en) | 2025-01-23 |
Family
ID=94281335
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2024/051260 | WO2025017691A1 (en), Pending | 2023-07-16 | 2024-07-16 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025017691A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150180702A1 (en) * | 2013-12-19 | 2015-06-25 | Jvl Ventures, Llc | Systems, methods, and computer program products for service processing |
| US20180248940A1 (en) * | 2017-02-27 | 2018-08-30 | International Business Machines Corporation | Distributed data management |
| CN109842500A (en) * | 2017-11-24 | 2019-06-04 | 阿里巴巴集团控股有限公司 | A kind of dispatching method and system, working node and monitoring node |
| CN114285903A (en) * | 2021-12-16 | 2022-04-05 | 奇安信科技集团股份有限公司 | Request processing method, device and system and electronic equipment |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150180702A1 (en) * | 2013-12-19 | 2015-06-25 | Jvl Ventures, Llc | Systems, methods, and computer program products for service processing |
| US20180248940A1 (en) * | 2017-02-27 | 2018-08-30 | International Business Machines Corporation | Distributed data management |
| CN109842500A (en) * | 2017-11-24 | 2019-06-04 | 阿里巴巴集团控股有限公司 | A kind of dispatching method and system, working node and monitoring node |
| CN114285903A (en) * | 2021-12-16 | 2022-04-05 | 奇安信科技集团股份有限公司 | Request processing method, device and system and electronic equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10891560B2 (en) | Supervised learning system training using chatbot interaction | |
| US20180049179A1 (en) | Method and a system for identifying operating modes of communications in mobile-edge computing environment | |
| US10313219B1 (en) | Predictive intelligent processor balancing in streaming mobile communication device data processing | |
| JP7695020B2 (en) | Predictive communication compensation | |
| US11722371B2 (en) | Utilizing unstructured data in self-organized networks | |
| US11308429B2 (en) | Enterprise data mining systems | |
| CN114466005B (en) | IoT Device Orchestration | |
| US20250147758A1 (en) | Digital twin auto-coding orchestrator | |
| US12184490B2 (en) | Automated configuration and deployment of contact center software suite | |
| WO2025017691A1 (en) | Method and system for executing requests in network | |
| WO2025012986A2 (en) | Method and system for selecting path for communication within communication network | |
| US9699020B1 (en) | Component aware maintenance alarm monitoring system and methods | |
| WO2025079092A1 (en) | Method and system for predicting performance trends of one or more network functions | |
| WO2025057229A1 (en) | System and method for managing resources for container network function (cnf) instantiation | |
| WO2025057243A1 (en) | System and method to manage routing of requests in network | |
| WO2025057226A1 (en) | System and method to manage resources for container network function (cnf) operations | |
| WO2025017637A1 (en) | Method and system for performing a dynamic application programming interface (api) orchestration | |
| US20250343759A1 (en) | Systems and methods for detecting blocked traffic flows in voice related services | |
| US20230269652A1 (en) | Control of communication handovers based on criticality of running software programs | |
| WO2025017746A1 (en) | Method and system for generating reports | |
| WO2025057227A1 (en) | System and method for managing data of container network functions (cnfs) in network | |
| WO2025017662A1 (en) | Method and system for failure management in a network | |
| WO2025052456A1 (en) | System and method for managing data in database | |
| WO2025057244A1 (en) | System and method of managing one or more application programming interface (api) requests in network | |
| WO2025017636A1 (en) | Method and system for obtaining a service access control policy |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24842682; Country of ref document: EP; Kind code of ref document: A1 |