
WO2025094198A1 - System and method for dynamic routing of an event request - Google Patents


Info

Publication number
WO2025094198A1
WO2025094198A1 (PCT/IN2024/052156)
Authority
WO
WIPO (PCT)
Prior art keywords
event
request
load balancer
routing
status
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/052156
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Sumit Thakur
Pramod JUNDRE
Ganmesh KOLI
Arun MAURYA
Kuldeep Singh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025094198A1 publication Critical patent/WO2025094198A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1008Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • H04L47/125Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context

Definitions

  • a portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred to as owner).
  • owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
  • the present disclosure relates generally to event routing in wireless communication systems. More particularly, the present disclosure relates to a system and a method for dynamic routing of an event request.
  • ERM instances used hereinafter in the specification refers to the various instances of the ERM service running to handle event workloads.
  • the ERM instances enable real-time adjustments to routing rules and load-balancing strategies without requiring service restarts.
  • Event Routing Manager or event routing management unit used hereinafter in the specification refers to a processing entity that is responsible for managing and routing event requests within a network.
  • the ERM monitors the status of various event processing instances and selects the most appropriate instance based on their availability, load, and other defined criteria.
  • the ERM also facilitates communication between the event request source (e.g., microservices or load balancers) and the destination, ensuring efficient routing and handling of event-based data.
  • event used hereinafter in the specification refers to a specific action that can trigger a network element or a system to take a particular action.
  • the event may include service requests, network traffic, system configuration changes, security incidents, and the like.
  • event request used hereinafter in the specification refers to a set of instructions for an event.
  • the event request includes specific parameters and data that instruct the ERM to perform one or more operations related to the event.
  • microservice used hereinafter in the specification refers to a software architecture where an application is composed of small, independently deployable services, each responsible for a specific business function.
  • the microservices communicate over well-defined application programming interfaces (APIs) and can be developed, deployed, and scaled independently.
  • load balancer used hereinafter in the specification refers to a network component or device responsible for distributing event requests or other types of traffic across a set of instances, such as ERM instances or microservices.
  • the load balancer ensures efficient distribution of requests based on load balancing algorithms such as round-robin, least connections, or header-based request dispatching. It also monitors the status of the instances and ensures that event requests are routed only to healthy or active instances, helping maintain system stability and performance.
  • CLM Configuration and Lifecycle Management system responsible for overseeing the provisioning, configuration, and lifecycle management of network components or microservices.
  • the CLM ensures that instances are properly deployed, configured, and maintained throughout their lifecycle, including automated scaling, updates, and fault management.
  • the CLM may interact with load balancers, microservices, and ERM instances to ensure the availability and health of the deployed services.
  • IAM used hereinafter in the specification refers to Identity and Access Management, a security framework used to manage the identities of users, systems, or services, and control access to resources in the network.
  • the IAM system ensures that only authorized entities can interact with the ERM, load balancers, and other components, and enforces security policies for event handling. This helps protect the system from unauthorized access and ensures the integrity of event requests and responses.
  • dynamic routing used hereinafter in the specification refers to a method of directing event requests through a network based on real-time factors such as system load, availability of resources, or predefined rules. Unlike static routing, where paths are predetermined, dynamic routing involves continuously monitoring the status of event routing management units and load balancers and making on-the-fly decisions to optimize the distribution of event requests. This ensures high availability, fault tolerance, and efficient handling of network traffic, particularly in systems with varying workloads or multiple processing entities.
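The dynamic-routing behaviour described above may be sketched, for illustration only, as follows. The names used (`ErmInstance`, `route_event`) are hypothetical and do not appear in the disclosure; this is a minimal example of selecting an active, least-loaded routing unit based on real-time status.

```python
# Illustrative sketch of dynamic routing: select an ACTIVE event routing
# management (ERM) instance based on real-time status and reported load.
# All names here are hypothetical, not taken from the disclosure.
from dataclasses import dataclass

@dataclass
class ErmInstance:
    name: str
    status: str   # "ACTIVE" or "INACTIVE"
    load: float   # current workload; lower is better

def route_event(instances):
    """Return the least-loaded ACTIVE instance, or None if none is active."""
    active = [i for i in instances if i.status == "ACTIVE"]
    if not active:
        return None
    return min(active, key=lambda i: i.load)

erms = [ErmInstance("erm-1", "INACTIVE", 0.1),
        ErmInstance("erm-2", "ACTIVE", 0.7),
        ErmInstance("erm-3", "ACTIVE", 0.3)]
chosen = route_event(erms)  # erm-3: active and least loaded
```

Unlike a static routing table, the selection here is recomputed on every request from the current status list, which is the essence of the on-the-fly decision making described above.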
  • Load balancing is a process of distributing workloads across multiple computing resources, such as application servers, virtual machines, or containers, to achieve better performance, availability, and scalability. Load balancing is typically performed by load balancers.
  • the load balancers route all requests to access applications on the clusters to a back-end server.
  • the load balancer receives a request for an application, selects a given server to run the application, and distributes the request to the selected back-end application server.
  • the load balancer ensures that requests for a given application are routed to a given server running that application. This helps achieve similar performance for each request, independent of the particular server that is destined to execute the request.
  • the load balancer must consider factors (for example, server's reported load, recent response times, up/down status, number of active connections, geographic location, capabilities, or how much traffic the load balancer has recently assigned the server, etc.) that affect application performance on each server.
  • load balancing can be complex, especially when dealing with large-scale systems. This requires careful planning and configuration to ensure that it works effectively.
  • the service provided by the load balancer may be interrupted or experience a delay.
  • a method for dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers includes receiving, by a receiving unit, the event request from an interfacing unit via a network interface.
  • the method includes monitoring, by a processing unit, a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status.
  • the method further includes routing, by the processing unit via at least one load balancer, the event request towards one of the event routing management units with the active status.
  • the method further includes forwarding, by the event routing management unit, the event request to at least one microservice via one load balancer of the plurality of load balancers.
  • the method further includes receiving, by the one load balancer of the plurality of load balancers, an event response from the at least one microservice.
  • the selection of one of the event routing management unit from the plurality of event routing management units with the active status is performed using a distribution process and a header-based request dispatching.
  • the selection of one of the load balancer from the plurality of load balancers is performed using the distribution process.
  • the method further includes sending, by the at least one load balancer, a timeout request to the plurality of event routing management units via the network interface upon determining that the event request is not performed within a predefined time.
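The timeout behaviour just described can be sketched as follows. The class name, timeout value, and method signature are illustrative assumptions; the disclosure only states that a timeout request is sent when an event request is not performed within a predefined time.

```python
# Hypothetical sketch: a load balancer that records a timeout request for
# the ERM units when an in-flight event is not completed within a
# predefined time. Names and the timeout value are illustrative.
import time

PREDEFINED_TIMEOUT = 2.0  # seconds; illustrative value only

class LoadBalancer:
    def __init__(self):
        self.timeout_requests = []  # timeout requests sent to ERM units

    def check(self, event_id, started_at, completed, now=None):
        """Return OK/PENDING/TIMEOUT for one in-flight event request."""
        now = time.monotonic() if now is None else now
        if completed:
            return "OK"
        if now - started_at > PREDEFINED_TIMEOUT:
            self.timeout_requests.append(event_id)
            return "TIMEOUT"
        return "PENDING"
```

In a real deployment the check would run periodically against all in-flight events rather than being invoked per event as shown here.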
  • the network interface corresponds to an event routing manager-load balancer (EM_LB) interface that facilitates event-based communication between the plurality of event routing management units and the plurality of load balancers.
  • the EM_LB interface is configured to enable the at least one load balancer to send the event request and receive the event response from the same instance of the event routing management unit having the active state using the header-based request dispatching.
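One way to realise the same-instance guarantee of header-based dispatching is to pin each routing key carried in the event header to a fixed instance. The sketch below is an assumption about how such dispatching could work; the header name `x-routing-key` and all class names are hypothetical.

```python
# Sketch of header-based request dispatching over a hypothetical EM_LB
# interface: each routing key found in the event header is pinned to one
# ERM instance, so request and response for that event flow through the
# same instance. All names are illustrative.
import hashlib

class HeaderDispatcher:
    def __init__(self, instances):
        self.instances = instances  # names of ACTIVE ERM instances
        self.pinned = {}            # routing key -> chosen instance

    def select(self, headers):
        key = headers["x-routing-key"]
        if key not in self.pinned:
            # a stable hash makes the choice deterministic across calls
            digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            self.pinned[key] = self.instances[digest % len(self.instances)]
        return self.pinned[key]
```

Because the mapping is stored (and the hash is stable), repeated requests carrying the same header key always reach the same instance, which is what allows the response to be exchanged with that instance.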
  • a system for dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers includes a receiving unit configured to receive the event request from an interfacing unit via a network interface.
  • the system further includes a memory and a processing unit.
  • the processing unit is coupled with the receiving unit to receive the event request and is further coupled with the memory to execute a set of instructions stored in the memory.
  • the processing unit is configured to monitor a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status.
  • the processing unit is further configured to route, via at least one load balancer, the event request towards one of the event routing management units with the active status.
  • the event routing management unit is configured to forward the event request to at least one microservice via one load balancer of the plurality of load balancers.
  • the one load balancer of the plurality of load balancers is configured to receive an event response from the at least one microservice.
  • the present disclosure discloses a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for dynamic routing of an event request.
  • the method includes receiving, by a receiving unit, the event request from an interfacing unit via a network interface.
  • the method includes monitoring, by a processing unit, a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status.
  • the method further includes routing, by the processing unit via at least one load balancer, the event request towards one of the event routing management units with the active status.
  • the method further includes forwarding, by the event routing management unit, the event request to at least one microservice via one load balancer of the plurality of load balancers.
  • the method further includes receiving, by the one load balancer of the plurality of load balancers, an event response from the at least one microservice.
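Taken together, the method steps above (receive, monitor, route, forward, respond) can be sketched end to end as follows. This is an illustrative simplification, not the claimed implementation; all function and key names are hypothetical.

```python
# Hedged end-to-end sketch of the described method: receive an event
# request, route it to an ACTIVE ERM unit, forward it to a microservice,
# and return the event response. Names are illustrative.
def handle_event(request, erm_statuses, microservice):
    # monitoring step: keep only units reporting the active status
    active = [unit for unit, s in erm_statuses.items() if s == "ACTIVE"]
    if not active:
        raise RuntimeError("no active event routing management unit")
    erm = active[0]                   # routing step (distribution process)
    response = microservice(request)  # forwarding step via a load balancer
    return {"handled_by": erm, "response": response}
```

A production system would replace the `active[0]` choice with the distribution process described below (round-robin, load-based selection, or header-based dispatching).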
  • An object of the present disclosure is to provide a system and a method for efficiently distributing incoming network traffic across a group of backend servers or group of instances via a network interface (e.g., an EM_LB interface).
  • Another object of the present disclosure is to distribute incoming event requests and responses by the load balancer across the ERM instances via the EM_LB interface to ensure that the system can scale horizontally, handling increasing loads effectively.
  • Yet another object of the present disclosure is to ensure that events are directed to healthy instances of ERM if any instance goes down, thereby minimizing downtime and maintaining high availability.
  • An object of the present disclosure is to provide an EM_LB interface to be used by load balancers to provide zero-downtime deployments of microservices.
  • Yet another object of the present disclosure is to prevent overloading of the ERM instances.
  • FIG. 1A illustrates an exemplary network architecture for implementing a system for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • FIG. 1B illustrates an exemplary block diagram of the system for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • FIG. 1C illustrates an exemplary system architecture for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • FIG. 2 illustrates an exemplary block diagram of the system for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • FIG. 3 illustrates an exemplary Management and Orchestration (MANO) framework architecture, in accordance with an embodiment of the present disclosure.
  • FIG. 4 illustrates another exemplary flow diagram of a method for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • FIG. 5 illustrates a computer system in which or with which the embodiments of the present disclosure may be implemented.
  • IAM Identity and access management
  • Event routing manager (ERM)
  • CLM Configuration and Lifecycle Management
  • individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
  • the term “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration.
  • the subject matter disclosed herein is not limited by such examples.
  • any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
  • to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, like the term “comprising,” as an open transition word without precluding any additional or other elements.
  • the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device”, and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
  • the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a system and a method for facilitating communication between an event routing manager (ERM) and a load balancer (LB) via an EM_LB interface, which is used to distribute incoming event requests and responses by load balancers across the ERM instances.
  • the EM_LB interface may be dynamically identified by the load balancer service for connecting to ERM instances, thereby enabling real-time adjustments to routing rules and load-balancing strategies without requiring service restarts.
  • An asynchronous event-based implementation is supported to utilize the EM_LB interface efficiently.
  • FIG. 1A illustrates an exemplary network architecture (100A) for implementing a system (108) for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • the network architecture (100A) may include one or more user equipments (UEs) (104-1, 104-2... 104-N) associated with one or more users (102-1, 102-2... 102-N) in an environment.
  • a person of ordinary skill in the art will understand that the one or more users (102-1, 102-2... 102-N) may be collectively referred to as the users (102).
  • the UE (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system.
  • the UE (104) may include, but are not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof.
  • the UE (104) may include, but not limited to, intelligent, multisensing, network-connected devices, which may integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
  • the UE (104) may include, but not limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
  • the UE (104) may include, but are not limited to, any electrical, electronic, electromechanical, or equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the UE (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity such as touchpad, touch-enabled screen, electronic pen, and the like.
  • the UE (104) may not be restricted to the mentioned devices and various other devices may be used.
  • the UE (104) may communicate with the system (108) through the network (106) for sending or receiving various types of data.
  • the network (106) may include at least one of a 5G network, 6G network, or the like.
  • the network (106) may enable the UE (104) to communicate with other devices in the network architecture (100A) and/or with the system (108).
  • the network (106) may include a wireless card or some other transceiver connection to facilitate this communication.
  • the network (106) may be implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
  • the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
  • the network (106) may also include, by way of example but not limitation, one or more of a radio access network (RAN), a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit- switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
  • the UE (104) is communicatively coupled with the network (106).
  • the network (106) may receive a connection request from the UE (104).
  • the network (106) may send an acknowledgment of the connection request to the UE (104).
  • the UE (104) may transmit a plurality of signals in response to the connection request.
  • FIG. 1A shows exemplary components of the network architecture (100A)
  • the network architecture (100A) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1A. Additionally, or alternatively, one or more components of the network architecture (100A) may perform functions described as being performed by one or more other components of the network architecture (100A).
  • FIG. 1B illustrates an exemplary block diagram (100B) of the system (108) for dynamic routing of the event request, in accordance with an embodiment of the present disclosure.
  • the system (108) may include one or more processor(s) (hereafter referred to as a processing unit (110)), a memory (112), a plurality of interface(s) (114), a receiving unit (116), a load balancer (118), and a database (120).
  • the processing unit (110) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
  • the processing unit (110) may be configured to fetch and execute computer-readable instructions stored in the memory (112) of the system (108).
  • the memory (112) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service.
  • the memory (112) may include any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
  • the interface(s) (114) may include a variety of interfaces, for example, interfaces for data input and output (I/O) devices, storage devices, and the like.
  • the interface(s) (114) may facilitate communication through the system (108).
  • the interface(s) (114) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, the processing unit (110), the receiving unit (116), load balancer (118) and the database (120).
  • the processing unit (110) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing unit (110).
  • programming for the processing unit (110) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing unit (110) may comprise a processing resource (for example, one or more processors), to execute such instructions.
  • the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing unit (110).
  • the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource.
  • the processing unit (110) may be implemented by electronic circuitry.
  • the database (120) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processing unit (110).
  • the system (108) is configured to perform dynamic routing of the event request between a plurality of event routing management units and a plurality of load balancers.
  • dynamic routing may refer to a method of directing event requests through a network based on real-time factors such as system load, availability of resources, or predefined rules. Unlike static routing, where paths are predetermined, dynamic routing involves continuously monitoring the status of event routing management units and load balancers and making on-the-fly decisions to optimize the distribution of event requests. This ensures high availability, fault tolerance, and efficient handling of network traffic, particularly in systems with varying workloads or multiple processing entities.
  • the receiving unit (116) may initially receive an event request from an interfacing unit via a network interface.
  • the event request may be any message, signal, or data packet that requires processing by one or more microservices (MS) within the system (108).
  • the receiving unit (116) may receive the event request when an event is triggered from a source, such as an external service or microservice (MS).
  • the network interface, acting as a communication gateway, receives the incoming event request with necessary information such as headers, request types, priority levels, and specific routing instructions, and forwards it to the receiving unit (116).
  • Upon receiving the event request, the processing unit (110) is configured to monitor the status of each of the plurality of event routing management units.
  • the status of each event routing management unit (interchangeably used as an event routing manager (ERM)) may indicate whether it is in an active state or an inactive state.
  • the active status may signify that the event routing management unit is ready to handle requests.
  • the inactive status may indicate that a particular event routing management unit is temporarily unavailable due to overload, maintenance, or failure conditions.
  • the monitoring may involve checking the availability, processing capacity, and response times of the event routing management units, ensuring that only active and operational units are considered for routing.
  • the status information may be periodically updated and stored in the database (120) accessible to the processing unit (110).
• the processing unit (110), in conjunction with at least one load balancer (e.g., the load balancer 118), routes the event request towards one of the event routing management units with the active status.
  • the processing unit (110) may employ a distribution process to select the appropriate event routing management unit for handling the event request.
  • the distribution process may include, but is not limited to, round-robin distribution, load-based selection, or header-based request dispatching.
  • the header of the event request may contain specific parameters, such as priority levels or request types, which assist in selecting the most appropriate event routing management unit. The selection ensures that the event request is routed to an available and operational event routing management unit.
  • the event requests are routed sequentially to each event routing management unit (e.g., ERM), in turn, ensuring an even and balanced distribution of requests across all available units.
• This method does not consider the load on individual ERM units; still, it ensures that all units receive an approximately equal number of event requests over time, making it simple yet effective for distributing workloads evenly.
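The round-robin distribution described above may be sketched as follows; this is an illustrative Python sketch, and names such as `RoundRobinDispatcher` are hypothetical rather than part of the disclosure:

```python
import itertools

class RoundRobinDispatcher:
    """Cycles through event routing management (ERM) units in order,
    giving each an approximately equal share of event requests."""

    def __init__(self, erm_units):
        self._cycle = itertools.cycle(erm_units)

    def select(self):
        # Load on individual units is not considered; distribution
        # is purely positional, as described above.
        return next(self._cycle)

dispatcher = RoundRobinDispatcher(["ERM-1", "ERM-2", "ERM-3"])
picks = [dispatcher.select() for _ in range(6)]
# Each unit receives exactly two of the six requests.
```
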
  • the load-based selection process involves monitoring the current processing load or capacity of each event routing management unit.
  • the event request is routed to the ERM with the least load or highest available capacity, ensuring that no single unit becomes overwhelmed.
  • This dynamic distribution adapts to the real-time workload of each ERM, optimizing system performance by preventing bottlenecks and overload conditions.
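The load-based selection may be illustrated with a minimal Python sketch, assuming each ERM reports a scalar load metric; the in-flight request counts used here are hypothetical:

```python
def select_least_loaded(erm_loads):
    """Return the ERM instance reporting the lowest current load.

    erm_loads maps an instance name to an in-flight request count;
    a real system might instead use CPU utilization, queue depth,
    or response latency as the load metric.
    """
    return min(erm_loads, key=erm_loads.get)

loads = {"ERM-1": 12, "ERM-2": 3, "ERM-3": 8}
target = select_least_loaded(loads)  # "ERM-2", the least-loaded unit
```

Routing the next event request to `target` prevents any single unit from becoming overwhelmed, matching the bottleneck-avoidance behaviour described above.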
  • the event request is routed based on specific parameters included in the request header. These parameters may include priority levels, request types, or specific service-level agreements (SLAs), which help determine the most appropriate ERM. For instance, high-priority requests may be routed to units that specialize in urgent tasks, while routine or lower-priority requests can be assigned to standard units, allowing for more intelligent and specialized request handling.
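Header-based request dispatching of this kind may be sketched as follows; the header keys and the routing table are illustrative assumptions, not part of the disclosure:

```python
def dispatch_by_header(headers, routing_table, default="ERM-standard"):
    """Select an ERM based on parameters in the event request header,
    such as a priority level; unknown priorities fall back to the
    default (standard) unit."""
    priority = headers.get("priority", "normal")
    return routing_table.get(priority, default)

table = {"high": "ERM-urgent", "normal": "ERM-standard"}
dispatch_by_header({"priority": "high"}, table)    # routed to the urgent unit
dispatch_by_header({"priority": "normal"}, table)  # routed to a standard unit
```
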
  • the processing unit (110) may send a timeout request to the load balancer.
• the processing unit (110) continuously monitors the status of the event routing management unit to check whether it has completed the processing of the event request. If the request is not completed within the predefined time period (for example, within 15 minutes), the processing unit (110) recognizes that the event request has not been performed. This may trigger the generation of a timeout request, which is then sent to the load balancer.
  • the timeout request may refer to a notification or signal that is triggered when the event routing management unit fails to process the event request within the predefined time period.
  • the predefined time may be configured based on factors such as system load, the complexity of the request, or the priority of the event. For instance, a high-priority event request may have a shorter timeout duration than a lower-priority one.
  • the load balancer may then reroute the event request to another event routing management unit with the active status, ensuring that the system (108) continues to operate efficiently without failures.
  • the predefined timeout duration may be configured based on the system load or the type of event request being processed.
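The timeout behaviour described above may be sketched as a simplified synchronous loop; `FakeProcess` and the polling interval are illustrative assumptions, and a production system would monitor asynchronously with a timeout derived from system load or request priority:

```python
import time

class FakeProcess:
    """Stand-in for an ERM working on an event request."""
    def __init__(self, finishes):
        self._finishes = finishes
    def done(self):
        return self._finishes
    def result(self):
        return "processed"

def route_with_timeout(process, timeout_s, reroute):
    """Wait up to timeout_s for the ERM to finish; on expiry, treat
    it as a timeout request and reroute to another active unit."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if process.done():
            return process.result()
        time.sleep(0.01)  # poll until the deadline passes
    return reroute()  # timeout: hand the event to another active ERM

route_with_timeout(FakeProcess(True), 0.1, lambda: "rerouted")    # "processed"
route_with_timeout(FakeProcess(False), 0.05, lambda: "rerouted")  # "rerouted"
```
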
• the event routing management unit may forward the event request to at least one microservice (e.g., MS) via one of the load balancers.
  • the load balancer may distribute the event request among different microservices (MS) based on their availability, capacity, or predefined routing logic.
  • the load balancer is responsible for distributing the event request evenly across multiple microservices, ensuring that no single microservice becomes overloaded and that system resources are used efficiently.
  • the system (108) may include a plurality of load balancers to handle different categories of microservices, such as one for data processing microservices and another for notification microservices.
  • the load balancer may receive the request from the event routing management unit and intelligently distribute it to the appropriate microservice based on predefined criteria, such as current load, availability, or proximity. If microservice A is currently handling a high number of requests, the load balancer may forward the new request to microservice B, which is under less load. In another case, if the load balancer detects that microservice A is inactive due to downtime or maintenance, it may automatically forward the event request to the next available microservice, such as microservice C.
• the load balancer ensures that the system (108) remains efficient and responsive by routing requests evenly across the available microservices.
  • Each load balancer in the plurality of load balancers can specialize in routing the event request to different microservices or groups of microservices, depending on the system’s architecture and needs.
  • the at least one microservice may perform specific tasks related to the event request, such as data processing, resource allocation, or task execution.
  • the microservice may refer to a small, independent, and modular service that performs a specific task or set of tasks within the system.
  • the microservice may be responsible for handling user authentication, data storage, notification services, or billing calculations.
  • Each microservice operates autonomously, making the system highly flexible, scalable, and easily manageable.
  • the microservice may include, but is not limited to, an authentication microservice, a data processing microservice, a notification microservice, a logging microservice, and a payment or billing microservice.
  • the authentication microservice can handle requests related to verifying user credentials, managing user sessions, and ensuring secure access to various system components.
  • the data processing microservice may process data requests, such as transforming data formats, performing calculations, or running machine learning algorithms on received data before returning the result.
  • the notification microservice may send real-time alerts or notifications via email, SMS, or push notifications to relevant users or systems based on the event.
  • the logging microservice can log event data, errors, or any activity within the system, allowing for audit trails and system monitoring.
  • the payment or billing service handles requests related to charging, billing, or processing payments for customers or users within a financial or subscription-based system.
  • an event response is generated.
  • the event response may refer to the output or result generated by the at least one microservice after processing the event request.
  • the nature of the event response depends on the type of event being handled. For example, if the event request is related to user authentication, the event response could be a success message indicating that the user has been authenticated or a failure message if the credentials are invalid.
  • the event response may include a processed result, such as a summary report or confirmation that the data operation was successful.
• for notification events, such as sending emails or SMS, the event response may confirm whether the notification was successfully delivered or report any errors such as delivery failures.
  • the event response may confirm whether the transaction was successful or failed, potentially requiring further validation or indicating issues like insufficient funds.
• the load balancer of the plurality of load balancers may receive the event response from the at least one microservice and direct it back to the originating event routing management unit.
• the selection of the load balancer may be done using a distribution process similar to that used for the event routing management unit.
  • the event routing management unit then forwards the event response to the processing unit (110), which subsequently directs the response to the interfacing unit (122) via the network interface. This ensures that the event request cycle is completed, and the appropriate response is delivered back to the originator, which may be an external client or another system component.
  • the communication flow ensures that the response reaches the microservice responsible for publishing the event, maintaining consistency in event tracking and handling.
  • the network interface may correspond to an event routing manager_load balancer (EM_LB) interface.
  • the EM_LB interface facilitates event-based transfer of the event requests and event responses between the load balancers and event routing management units.
• the EM_LB interface is configured to enable the at least one load balancer to send the event request and receive the event response from the same instance of the event routing management unit having the active state using the header-based request dispatching.
  • the EM_LB interface ensures that real-time status updates and routing decisions may be efficiently communicated between the components, further optimizing the routing process.
  • the EM_LB interface allows for dynamic adjustments in response to fluctuating system demands, such as high traffic volumes or changes in microservice availability.
  • the system architecture (100C) includes a user interface (UI/UX) (122), an identity and access management (IAM) (124), a plurality of elastic load balancers (ELBs) (e.g., ELB 1 (126-1), ELB 2 (126-2), ELB 3 (126-3), and ELB 4 (126-4)), an event routing manager (ERM) (128), a plurality of microservices (e.g., MS-1 (130-1), MS-2 (130-2)..
  • the plurality of ELBs may be in communication with ERM (128) via the EM_LB interface.
  • the UI/UX (122), the IAM (124), the ERM (128), the ELB (126), the plurality of microservices (collectively referred to as MS (130)) and the elastic search cluster (132) may be in communication with each other.
• the ERM (128) may be in communication with the OAM (134) and the CLMS (136).
  • the MS (130) may be related to publishers or subscribers.
• the elastic search cluster (132) includes a plurality of databases (DBs) (e.g., DB (132-1), DB (132-2), DB (132-3), DB (132-4), DB (132-5), DB (132-6)).
• the ERM (128) involves multiple instances of the service running to handle the event workload.
  • the ERM (128) and the ELB (126) may perform event request routing via the EM_LB interface.
  • the event request may come from subscribers or publishers.
  • the load balancing is used to efficiently distribute incoming network traffic across a group of backend servers or a group of instances.
• the ELB (126) may select among ERM instances using processes such as round-robin or header-based routing.
  • the microservice instances may be deployed in active-active mode.
  • the ELB (126) may check the health of ERM instances to ensure the availability of ERM instances.
• the OAM (134) is configured to store the health status of ERM instances.
• the ELB (126) may get information about the health status of ERM instances from the OAM (134).
• the ELB (126) may send the event request only to the healthy instances. If any instance goes down, the ELB (126) may send the event request to another healthy ERM instance. For example, in the present FIG., the microservice MS-1 (130-1) is cross-marked to indicate that MS-1 (130-1) is not in a healthy condition, so the event request is sent to another microservice instance (for example, MS-2 (130-2)). In this way, the ELB (126) distributes the ingress traffic on the available instances, thereby minimizing system downtime and maintaining high availability.
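The health-aware dispatch in this example may be sketched as follows, assuming the OAM exposes a per-instance health map; the instance names and statuses mirror the figure, but the code structure is an assumption:

```python
def route_to_healthy(instances, health):
    """Send the event request only to a healthy instance; unhealthy
    (cross-marked) instances are skipped."""
    healthy = [i for i in instances if health.get(i) == "healthy"]
    if not healthy:
        raise RuntimeError("no healthy instances available")
    return healthy[0]

# MS-1 is down (cross-marked in the figure), so the request goes to MS-2.
health = {"MS-1": "down", "MS-2": "healthy"}
route_to_healthy(["MS-1", "MS-2"], health)  # request is routed to "MS-2"
```
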
  • FIG. 2 illustrates an exemplary block diagram (200) of the system (108) for dynamic routing of the event request between a plurality of event routing management units (e.g., ERM) and a plurality of load balancers (e.g., ELB), in accordance with an embodiment of the present disclosure.
• the block diagram (200) includes at least one microservice (MS) (202), at least one load balancer (204-1) (analogous to the load balancer 118), a plurality of load balancers (204-2, 204-3) (collectively referred to as the load balancers (204)), a plurality of event routing managers (ERM) (206-1, 206-2) (collectively referred to as the ERM (206)) and an OAM service discovery (208).
  • the ERM (206) may be implemented as an event management unit (206).
  • the MS (202) may function as either a publisher or a subscriber.
  • the MS (202) generates and sends event notifications, such as updates or changes in its state, to the system. These events are then processed and routed by the event routing management units (e.g., ERM 206) to relevant components.
  • the MS (202) registers to receive specific event notifications from the system, indicating its interest in certain types of events.
• the plurality of load balancers (204) are configured to determine whether the event request is coming from the publisher microservices or the subscriber microservices.
  • the plurality of load balancers (204) may be in communication with the plurality of ERM (206) via the EM_LB interface.
• the OAM service discovery (208) may be in communication with the plurality of load balancers (204) and the ERM (206-1, 206-2).
• the OAM service discovery (208) includes details about the ERM (for example, registration and deregistration of ERM instances, availability of ERM instances, etc.).
  • the MS (202) may send an event request to the load balancer (204-1).
  • the load balancer (204-1) may function as a load balancer controller.
• the load balancer (204-1) may monitor the health of the ERM (206-1, 206-2) and route the event request only to the healthy ERM instances based on selection algorithms (for example, header-based request dispatch).
• the load balancer (204-1) sends the received event request to one of the healthy ERMs (for example, ERM (206-1)) via the EM_LB interface. Further, the ERM (206-1) can also perform a round-robin process to select one of the load balancers (for example, LB (204-2)) via the EM_LB interface to distribute traffic uniformly. The load balancer (204-2) may send an event response against the event request to the same MS instance which has published the event.
• a microservice (MS) instance, responsible for handling user login requests for an online application, sends an event request to the load balancer (204-1).
  • This load balancer (204-1) functions as a controller that not only routes requests but also monitors the health of the Event Routing Management (ERM) instances (206-1, 206-2).
  • the load balancer (204-1) applies selection algorithms, such as header-based request dispatching, where it checks the event request’s header information, which may contain parameters like request type or priority level. Based on this analysis and the health status of the ERM instances, the load balancer routes the event request to the healthy ERM (in this case, ERM (206-1)) via the EM_LB interface.
  • the ERM (206-1) may then need to further distribute traffic among a second set of load balancers to handle the incoming requests more efficiently.
  • the ERM (206-1) performs a round-robin process, a common load-balancing technique, to uniformly distribute requests across a plurality of load balancers, such as the load balancer (204-2).
  • the round-robin process ensures that requests are evenly distributed, preventing any single load balancer from being overwhelmed.
• the load balancer (204-2), after processing the event request, sends an event response back to the MS instance that originally published the event, ensuring that the communication loop is complete. For example, if the MS instance had requested user authentication from a login service, the load balancer would return the authentication result (e.g., success or failure) back to that MS instance.
• the ERM and microservice (MS) services run as multiple instances in active-active mode.
• Each of the ERM and MS services is served by multiple load balancer (LB) instances.
• the LB distributes the load on MS instances in a round-robin manner.
• the LB also ensures that the event response against any event request is sent to the same MS instance which has published the event.
  • the LB sends the event request and receives the event response via the EM_LB interface on the same ERM instance using header-based routing present in the LB. Further, if the event request has not been completed in a given time, then a timeout request is received on the EM_LB interface.
  • the event request handling is done by the MS as per execution need.
• the load balancers (204) may perform dynamic mapping of endpoints, which includes the EM_LB interface and details of ERM instances, using service discovery mechanisms, and provide dynamic routing between ERM instances.
  • the dynamic mapping referred to in the present disclosure involves the real-time identification, configuration, and association of endpoints (such as event routing management units and load balancers) within the system (108).
  • dynamic mapping ensures that the system (108) may adjust the assignment of the event request to the most appropriate instances of the ERMs, depending on their availability, status, and load conditions. In practice, this involves the EM_LB interface, which facilitates communication between the ERM instances and the load balancers.
  • the load balancers may access and maintain up-to-date information about all ERM instances, such as their operational status (active or inactive) and available processing capacity. This is achieved using service discovery mechanisms that constantly monitor the status and configuration of the ERM instances within the system (108). With the dynamic mapping, the load balancers may dynamically route the event request to the appropriate ERM instances. This process helps distribute workloads evenly across active ERM instances, prevents system overload, and optimizes resource allocation by routing the event request to the most suitable ERM unit in real-time. It enhances the system efficiency by ensuring that event requests are always routed to the most capable and available ERM instance.
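The dynamic mapping via service discovery may be sketched with a minimal registry; the class and method names here are hypothetical, not taken from the disclosure:

```python
class ServiceRegistry:
    """Tracks ERM instance endpoints so load balancers can query the
    currently active set before routing an event request."""

    def __init__(self):
        self._endpoints = {}  # instance name -> endpoint address

    def register(self, name, endpoint):
        self._endpoints[name] = endpoint

    def deregister(self, name):
        self._endpoints.pop(name, None)

    def active_endpoints(self):
        return dict(self._endpoints)

registry = ServiceRegistry()
registry.register("ERM-1", "10.0.0.1:8080")
registry.register("ERM-2", "10.0.0.2:8080")
registry.deregister("ERM-1")  # instance went inactive
# Only ERM-2 remains routable; a load balancer consulting the
# registry would now route event requests to it alone.
```
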
• the ERM (206) maintains a dynamic mapping of endpoints, including LB interface details.
• the load balancer (204) monitors the health of ERM instances and routes traffic only to the healthy ERM instances.
• the load balancer (204) performs routing to ERM instances through the EM_LB interface using an algorithm (for example, round-robin algorithms and header-based request dispatch).
  • the ERM (206) may execute the round-robin algorithm or process across LB interfaces to distribute traffic uniformly.
• the load balancers (204) help in providing zero-downtime deployments of microservices. In this way, the load balancer (204) offloads traffic-handling responsibilities from the ERM instances and makes the ERM (206) available to focus only on the processing of the application logic.
  • the load balancers perform the routing of event requests via the EM_LB interface using round-robin algorithms.
  • the round-robin load balancing algorithm is an algorithm that distributes the event requests across a group of servers. The event request is forwarded to each server in turn.
• the round-robin algorithm instructs the load balancer to go back to the top of the list and repeat.
  • the round-robin network load balancing rotates connection requests among web servers in the order that requests are received.
• the EM_LB interface is used by the LB to distribute incoming event requests and responses across these ERM instances to ensure that the system can scale horizontally, handling increasing loads effectively. Further, the ERM may use the round-robin technique to distribute outgoing event requests toward the LB to ensure that the system can scale horizontally and handle increasing loads effectively.
  • the EM_LB interface can be dynamically identified by load balancer service for connecting to ERM instances enabling real-time adjustments to routing rules and load-balancing strategies without requiring service restarts.
• the EM_LB interface is configured to facilitate asynchronous event-based implementation between the ERM (206) and the load balancers (204) to utilize the interface efficiently.
  • the EM_LB Interface is used for load balancing algorithms that consider various factors (for example, server health, round robin load balancing technique, and user-specific preferences) to intelligently distribute events to the ERM.
  • the EM_LB interface used by the load balancers ensures scalability, high availability, fault tolerance, traffic management capabilities, and efficient resource utilization.
• FIG. 3 illustrates an exemplary representation of a user interface layer (302) within a Management and Orchestration (MANO) framework architecture (300), in accordance with an embodiment of the present disclosure.
  • the ERM (128, 206) may be characterized by the user interface layer (302).
  • the user interface layer (302) may include a menu having NFV and SDN design functions (304), platform foundation services (306), platform core services (308), platform operation, an administration and maintenance manager (310), and platform resource adapters and utilities (312).
  • various parameters associated with the ERM (128, 206) may be varied by an operator based on the requirements.
• the parameters may include a number of microservices associated with the ERM (128, 206) and a number of interfacing units that can be deployed simultaneously.
  • the NFV and SDN design functions (304) include a virtualized network function (VNF) lifecycle manager (compute) that is a specialized component focused on managing the compute resources associated with VNF throughout their lifecycle.
  • the NFV and SDN design functions (304) may include a VNF catalog that is a repository that stores and manages metadata, configurations, and templates for VNF, facilitating their deployment and lifecycle management.
  • the NFV and SDN design functions (304) may include network services catalog, network slicing and service chaining manager, physical and virtual resource manager and CNF lifecycle manager.
  • the network services catalog serves as a repository for managing and storing detailed information about network services, including their specifications and deployment requirements.
  • the network slicing & service chaining manager is responsible for orchestrating network slices and service chains, ensuring efficient allocation and utilization of network resources tailored to various services.
  • the physical and virtual resource manager oversees both physical and virtual resources, handling their allocation, monitoring, and optimization to ensure seamless operation across the network infrastructure.
  • the CNF lifecycle manager manages the complete lifecycle of the CNF, including onboarding, instantiation, scaling, monitoring, and termination, thereby facilitating the efficient deployment and operation of network functions in a cloud-native environment.
  • the platform foundation services (306) may support an asynchronous event-based processing model implemented by the ERM (128), enabling concurrent handling of multiple event requests.
• the platform foundation services (306) include a microservices elastic load balancer, an identity & access manager, the command line interface (CLI), a central logging manager and the ERM (128, 206).
  • the microservices elastic load balancer ensures that incoming traffic is evenly distributed across multiple microservices, enhancing performance and availability.
  • the identity & access manager handles user identity management and access control, enforcing permissions and roles to secure resources and services.
  • the CLI offers a text-based method for users to interact with the platform, enabling command execution and configuration management.
• the central logging manager consolidates log data from various system components, providing a unified view for effective monitoring, troubleshooting, and data analysis.
  • the platform core services (308) include NFV infrastructure monitoring manager, assurance manager, performance manager, policy execution engine, capacity monitoring manager, release management repository, configuration manager & global control tower (GCT), NFV platform decision analytics, platform Not only SQL database (NoSQL DB), platform schedulers & cron jobs, VNF backup & upgrade manager, microservice auditor and platform operation, administration and maintenance manager.
  • the NFV infrastructure monitoring manager tracks and oversees the health and performance of NFV infrastructure.
  • the assurance manager ensures service quality and compliance with operational standards.
  • the performance manager monitors system performance metrics to optimize efficiency.
  • the policy execution engine enforces and executes policies across the platform.
  • the capacity monitoring manager tracks resource usage and forecasts future needs.
  • the release management repository manages software releases and version control.
  • the configuration manager handles system configurations, ensuring consistency and automation.
  • the GCT provides centralized oversight and management of platform operations.
• the NFV platform decision analytics utilizes data analytics to support decision-making.
  • the NoSQL DB stores unstructured data to support flexible and scalable data management.
  • the platform schedulers and jobs automate and schedule routine tasks and workflows.
  • the VNF backup and upgrade manager oversees the backup and upgrading of VNFs.
  • the microservice auditor ensures the integrity and compliance of microservices across the platform.
  • the platform operation, administration, and maintenance manager (310) may oversee operational aspects of the MANO framework architecture (300).
  • the platform operation, administration, and maintenance manager (310) may be responsible for implementing a load-balancing mechanism used by the ERM (128) to distribute the event request across multiple microservices instances.
  • the platform resource adapter and utilities (312) may provide necessary tools and interfaces for interacting with an underlying network infrastructure, i.e., the NFV architecture.
  • the platform resource adapter and utilities (312) may include platform external API adapter and gateway, generic decoder, and indexer (XML, CSV, JSON), docker service adapter (DSA), API adapter and NFV gateway.
  • the platform external API adapter and gateway facilitates seamless integration with external APIs and manages data flow between external systems and the platform.
  • the generic decoder and indexer processes and organizes data from various formats such as XML, comma- separated values (CSV), and JSON, ensuring compatibility and efficient indexing.
  • the DSA is a microservices-based system designed to deploy and manage Container Network Functions (CNFs) and their components (CNFCs) across Docker nodes. It offers REST endpoints for key operations, including uploading container images to a Docker registry, terminating CNFC instances, and creating Docker volumes and networks. CNFs, which are network functions packaged as containers, may consist of multiple CNFCs.
  • the DSA facilitates the deployment, configuration, and management of these components by interacting with Docker's API, ensuring proper setup and scalability within a containerized environment. This approach provides a modular and flexible framework for handling network functions in a virtualized network setup.
  • the API adapter interfaces with services, allowing integration and management of cloud resources.
  • the NFV gateway acts as a bridge for NFV communications, coordinating between NFV components and other platform elements.
• FIG. 4 illustrates another exemplary flow diagram of a method (400) for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
  • the method (400) includes receiving (402), by a receiving unit (116), the event request from an interfacing unit (122) via a network interface.
  • the network interface corresponds to an event routing manager_load balancer (EM_LB) interface.
• the EM_LB interface facilitates event-based communication between the plurality of event routing management units (206) and the plurality of load balancers (204-2, 204-3).
• the EM_LB interface is configured to enable the at least one load balancer (204-1) to send the event request and receive the event response from the same instance (which published the event request) of the event routing management unit having the active state using the header-based request dispatching.
  • the method (400) includes monitoring (404), by a processing unit (110), a status of each of the plurality of event routing management units (206), wherein the status is one of an active status or an inactive status.
• the method (400) includes routing (406), by the processing unit (110) via at least one load balancer (204-1), the event request towards an event routing management unit with the active status.
• the method (400) includes forwarding (408), by the event routing management unit, the event request to at least one microservice (MS) (202) via one load balancer of the plurality of load balancers (204-2, 204-3).
• the method (400) includes receiving (410), by one load balancer of the plurality of load balancers (204-2, 204-3), an event response from the at least one MS (202).
• selection of one of the event routing management units from the plurality of event routing management units (206) with the active status is performed using a distribution process and a header-based request dispatching.
• the selection of one of the load balancers from the plurality of load balancers is performed using the distribution process.
  • the method further includes sending, by the at least one load balancer, a timeout request to the plurality of event routing management units via the network interface upon determining that the event request is not performed within a predefined time.
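The method steps above may be combined into one end-to-end sketch; every callable here (`select_erm`, `forward_to_ms`) is a hypothetical stand-in for the corresponding system component:

```python
def handle_event(request, erm_status, select_erm, forward_to_ms):
    """Receive an event request, keep only ERMs with the active
    status, route via the distribution process, forward to a
    microservice through a load balancer, and return the response."""
    active = [unit for unit, status in erm_status.items() if status == "active"]
    erm = select_erm(active)            # distribution process
    return forward_to_ms(erm, request)  # via a load balancer

status = {"ERM-1": "inactive", "ERM-2": "active"}
response = handle_event(
    {"type": "auth"}, status,
    select_erm=lambda units: units[0],
    forward_to_ms=lambda erm, req: {"handled_by": erm, "ok": True},
)
# response identifies ERM-2, the only active unit, as the handler
```
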
  • the present disclosure discloses a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for dynamic routing of an event request.
  • the method includes receiving, by a receiving unit, the event request from an interfacing unit via a network interface.
  • the method includes monitoring, by a processing unit, a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status.
  • the method further includes routing, by the processing unit via at least one load balancer, the event request towards one of an event routing management unit with the active status.
  • the method further includes forwarding, by the event routing management unit, the event request to at least one microservice via one of a load balancer of the plurality of load balancers.
  • the method further includes receiving, by one of the load balancer of the plurality of load balancers, an event response from the at least one microservice.
  • FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented.
  • the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read only memory (540), a mass storage device (550), a communication port (560), and a processor (570).
  • the computer system (500) may include more than one processor (570) and communication ports (560).
  • Processor (570) may include various modules associated with embodiments of the present disclosure.
  • the communication port (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports.
  • the communication port (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
  • the memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art.
  • Read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
  • PROM Programmable Read Only Memory
  • the mass storage (550) may be any current or future mass storage solution, which may be used to store information and/or instructions.
  • Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
  • PATA Parallel Advanced Technology Attachment
  • SATA Serial Advanced Technology Attachment
  • USB Universal Serial Bus
  • RAID Redundant Array of Independent Disks
  • the bus (520) communicatively couples the processor(s) (570) with the other memory, storage and communication blocks.
  • the bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB) or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
  • PCI Peripheral Component Interconnect
  • PCI-X PCI Extended
  • SCSI Small Computer System Interface
  • USB Universal Serial Bus
  • operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500).
  • Other operator and administrative interfaces may be provided through network connections connected through the communication port (560).
  • Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
  • the method and system of the present disclosure may be implemented in a number of ways.
  • the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise.
  • the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
  • the present disclosure provides a technically advanced solution by providing a system and a method for facilitating communication between the ERM and the LB via the EM_LB interface.
  • the EM_LB interface distributes incoming event requests and responses from the LB across these ERM instances to ensure that the system scales horizontally and handles increasing loads effectively.
  • the microservice instances are deployed in active-active mode to ensure the availability of a microservice to serve the traffic even if any instance goes down.
  • the load balancer (LB) monitors the health of ERM instances and ensures that events are directed to healthy instances of ERM if any instance goes down, thereby minimizing downtime and maintaining high availability.
  • the EM_LB interface enables real-time adjustments to routing rules and load-balancing strategies without requiring service restarts.
  • the EM_LB interface is used by the load balancer to send a request and receive the corresponding response on the same instance of the ERM using the header-based routing present in the LB.
  • the load balancers provide zero-downtime deployments of microservices.
  • the load balancer is configured to perform dynamic mapping of end points (for example, EM_LB interface details of ERM instances using service discovery mechanisms, providing dynamic routing between ERM instances, etc.). In this way, the LBs take over traffic-handling responsibilities from the ERM instances, leaving the ERM free to focus only on processing the application logic.
  • the EM_LB interface used by load balancers ensures scalability, high availability, fault tolerance, traffic management capabilities and efficient resource utilization.
  • the present disclosure provides a system and a method for dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers.
  • the present disclosure provides a system and a method for dynamic routing of event requests that enhances the efficiency of handling large volumes of event-based communications in distributed microservice architectures.
  • the system ensures optimal resource allocation and reduces response times under varying workloads.
  • the present disclosure provides a robust mechanism for monitoring the active status of event routing management units, ensuring that event requests are routed only to available and responsive units. This proactive monitoring prevents faults, improves reliability, and enhances the overall system performance.
  • the present disclosure allows for flexible routing strategies, including round-robin distribution, load-based selection, and header-based request dispatching. These strategies can be dynamically adjusted based on real-time conditions, enabling better load management and minimizing processing delays.
  • the present disclosure also introduces a timeout mechanism that ensures the timely processing of event requests. If a request is not handled within a predefined time, a timeout alert is generated, allowing for quicker issue resolution and preventing system slowdowns or failures due to unresponsive components. This mechanism enhances fault tolerance and ensures smoother operations in event-driven architectures.
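The monitor-and-route flow summarized in the bullets above can be illustrated with a short Python sketch. This is a minimal, hypothetical illustration only: the class names, the round-robin cursor, and the dictionary message shapes are assumptions of the sketch, not details prescribed by the disclosure.

```python
class ErmInstance:
    def __init__(self, name):
        self.name = name
        self.active = True  # status maintained by the monitoring process

    def handle(self, event_request):
        # In the full system the ERM would forward to a microservice via a
        # second-tier load balancer; here it simply tags the request.
        return {"handled_by": self.name, "event": event_request}


class EventLoadBalancer:
    """First-tier LB sketch: routes event requests only to ERM instances
    whose monitored status is active, using round-robin as one possible
    distribution process."""

    def __init__(self, instances):
        self.instances = instances
        self._cursor = 0

    def route(self, event_request):
        healthy = [i for i in self.instances if i.active]
        if not healthy:
            raise RuntimeError("no active ERM instance available")
        chosen = healthy[self._cursor % len(healthy)]
        self._cursor += 1
        return chosen.handle(event_request)


erms = [ErmInstance("erm-1"), ErmInstance("erm-2")]
lb = EventLoadBalancer(erms)
erms[1].active = False              # monitoring marks erm-2 inactive
resp = lb.route({"type": "config-change"})
print(resp["handled_by"])           # erm-1: the only active unit
```

The sketch captures the key property claimed above: an instance that goes down is simply excluded from the candidate set, so traffic continues without interruption.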

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

A method (400) and a system (108) for dynamic routing of an event request is disclosed. The method (400) includes receiving (402) the event request from an interfacing unit via a network interface. The method (400) includes monitoring (404) a status of each of a plurality of event routing management units (206). The status is one of an active status or an inactive status. The method (400) further includes routing (406), via at least one load balancer (204-1), the event request towards an event routing management unit with the active status. The method (400) further includes forwarding (408), by the event routing management unit, the event request to at least one microservice via one of the plurality of load balancers (220-2, 220-3). The method (400) further includes receiving (410), by one of the load balancers, an event response from the microservice.

Description

SYSTEM AND METHOD FOR DYNAMIC ROUTING OF AN EVENT REQUEST
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but are not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred as owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure relates generally to event routing in wireless communication systems. More particularly, the present disclosure relates to a system and a method for dynamic routing of an event request.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used to indicate otherwise.
[0004] The expression ‘Event Routing Manager (ERM) instances’ used hereinafter in the specification refers to various instances of service involved in the ERM running to handle events workload. The ERM instances enables real-time adjustments to routing rules and load-balancing strategies without requiring service restarts.
[0005] The expression ‘Event Routing Manager (ERM) or event routing management unit’ used hereinafter in the specification refers to a processing entity that is responsible for managing and routing event requests within a network. The ERM monitors the status of various event processing instances and selects the most appropriate instance based on their availability, load, and other defined criteria. The ERM also facilitates communication between the event request source (e.g., microservices or load balancers) and the destination, ensuring efficient routing and handling of event-based data.
[0006] The expression ‘event’ used hereinafter in the specification refers to a specific action that can trigger a network element or a system to take a particular action. In an example, the event may include service requests, network traffic, system configuration changes, security incidents, and the like.
[0007] The expression ‘event request’ used hereinafter in the specification refers to a set of instructions for an event. The event request includes specific parameters and data that instruct the ERM to perform one or more operations related to the event.
[0008] The expression ‘microservice (MS)’ used hereinafter in the specification refers to a software architecture where an application is composed of small, independently deployable services, each responsible for a specific business function. The microservices communicate over well-defined application programming interfaces (APIs) and can be developed, deployed, and scaled independently.
[0009] The expression ‘Operation and Maintenance (OAM)’ used hereinafter in the specification refers to the processes and tools designed to automate, coordinate, and oversee the deployment, configuration, and operation of complex systems, services, and resources.
[0010] The expression ‘load balancer’ used hereinafter in the specification refers to a network component or device responsible for distributing event requests or other types of traffic across a set of instances, such as ERM instances or microservices. The load balancer ensures efficient distribution of requests based on load balancing algorithms such as round-robin, least connections, or header-based request dispatching. It also monitors the status of the instances and ensures that event requests are routed only to healthy or active instances, helping maintain system stability and performance.
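Two of the load balancing algorithms named in the definition above can be sketched in a few lines of Python. This is an illustrative sketch only; the instance names and the connection-count dictionary are hypothetical.

```python
from itertools import cycle

def round_robin(instances):
    """Rotate through instances in a fixed order, regardless of load."""
    return cycle(instances)

def least_connections(instances, active_connections):
    """Pick the instance currently serving the fewest connections;
    an instance absent from the map counts as having zero connections."""
    return min(instances, key=lambda i: active_connections.get(i, 0))

instances = ["erm-1", "erm-2", "erm-3"]
rr = round_robin(instances)
print([next(rr) for _ in range(4)])          # ['erm-1', 'erm-2', 'erm-3', 'erm-1']

conns = {"erm-1": 7, "erm-2": 2, "erm-3": 5}
print(least_connections(instances, conns))   # erm-2
```

Round-robin spreads requests evenly over time, while least-connections reacts to the instantaneous load on each instance; a production load balancer typically combines such algorithms with the health checks described above.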
[0011] The expression ‘CLM’ used hereinafter in the specification refers to a Configuration and Lifecycle Management system responsible for overseeing the provisioning, configuration, and lifecycle management of network components or microservices. The CLM ensures that instances are properly deployed, configured, and maintained throughout their lifecycle, including automated scaling, updates, and fault management. The CLM may interact with load balancers, microservices, and ERM instances to ensure the availability and health of the deployed services.
[0012] The expression ‘IAM’ used hereinafter in the specification refers to Identity and Access Management, a security framework used to manage the identities of users, systems, or services, and control access to resources in the network. The IAM system ensures that only authorized entities can interact with the ERM, load balancers, and other components, and enforces security policies for event handling. This helps protect the system from unauthorized access and ensures the integrity of event requests and responses.
[0013] The expression ‘dynamic routing’ used hereinafter in the specification refers to a method of directing event requests through a network based on real-time factors such as system load, availability of resources, or predefined rules. Unlike static routing, where paths are predetermined, dynamic routing involves continuously monitoring the status of event routing management units and load balancers and making on-the-fly decisions to optimize the distribution of event requests. This ensures high availability, fault tolerance, and efficient handling of network traffic, particularly in systems with varying workloads or multiple processing entities.
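The contrast between static and dynamic routing drawn in the definition above can be made concrete with a minimal Python sketch. All names, statuses, and load figures here are hypothetical illustrations, not part of the disclosure.

```python
# Static routing: the path is fixed at configuration time.
STATIC_ROUTE = {"event": "erm-1"}

def route_static(kind):
    return STATIC_ROUTE[kind]

# Dynamic routing: the decision is made per request from live status and
# load data, which the monitoring process updates at run time.
status = {"erm-1": "inactive", "erm-2": "active"}
load = {"erm-1": 0, "erm-2": 4}

def route_dynamic(kind):
    candidates = [name for name, s in status.items() if s == "active"]
    # among the active units, prefer the least-loaded one
    return min(candidates, key=lambda name: load[name])

print(route_static("event"))    # erm-1, even though it is currently inactive
print(route_dynamic("event"))   # erm-2, the only active unit right now
```

The static table keeps sending traffic to a unit that is down, whereas the dynamic decision is recomputed from the current status on every request, which is exactly the availability and fault-tolerance benefit the definition describes.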
BACKGROUND
[0014] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0015] Load balancing is a process of distributing workloads across multiple computing resources, such as application servers, virtual machines, or containers, to achieve better performance, availability, and scalability. Load balancing is typically performed by load balancers. The load balancers route all requests to access applications on the clusters to a back-end server. The load balancer receives a request for an application, selects a given server to run the application, and distributes the request to the selected back-end application server. The load balancer ensures that requests for a given application are routed to a given server running that application. This achieves similar performance for each request, independent of the particular server that is destined to execute the request.
[0016] In order to achieve this result, the load balancer must consider factors (for example, a server's reported load, recent response times, up/down status, number of active connections, geographic location, capabilities, or how much traffic the load balancer has recently assigned to the server, etc.) that affect application performance on each server. Further, implementing load balancing can be complex, especially when dealing with large-scale systems. This requires careful planning and configuration to ensure that it works effectively. When one of the servers servicing requests experiences a failure, the service provided by the load balancer may be interrupted or experience a delay.
[0017] There is, therefore, a need in the art to provide a system and a method that can overcome the problems associated with the prior arts.
SUMMARY
[0018] In an exemplary embodiment, a method for dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers is disclosed. The method includes receiving, by a receiving unit, the event request from an interfacing unit via a network interface. The method includes monitoring, by a processing unit, a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status. The method further includes routing, by the processing unit via at least one load balancer, the event request towards one of an event routing management unit with the active status. The method further includes forwarding, by the event routing management unit, the event request to at least one microservice via one of a load balancer of the plurality of load balancers. The method further includes receiving, by one of the load balancer of the plurality of load balancers, an event response from the at least one microservice.
[0019] In an embodiment, the selection of one of the event routing management unit from the plurality of event routing management units with the active status is performed using a distribution process and a header-based request dispatching.
[0020] In an embodiment the selection of one of the load balancer from the plurality of load balancers is performed using the distribution process.
[0021] In an embodiment, the method further includes sending, by the at least one load balancer, a timeout request to the plurality of event routing management units via the network interface upon determining that the event request is not performed within a predefined time.
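The timeout behaviour in the embodiment above might be sketched as follows in Python. The predefined time, the message shapes, and the handler functions are all hypothetical stand-ins; the disclosure does not prescribe a particular implementation.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

PREDEFINED_TIMEOUT_S = 0.3   # hypothetical predefined time

def dispatch_with_timeout(event_request, handler, timeout_s=PREDEFINED_TIMEOUT_S):
    """Hand the event to a handler; if no response arrives within the
    predefined time, emit a timeout request instead of an event response."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(handler, event_request)
        try:
            result = future.result(timeout=timeout_s)
            return {"type": "event-response", "result": result}
        except TimeoutError:
            return {"type": "timeout-request", "event": event_request}

fast = lambda req: "ok"
slow = lambda req: time.sleep(1) or "late"
print(dispatch_with_timeout({"id": 1}, fast)["type"])   # event-response
print(dispatch_with_timeout({"id": 2}, slow)["type"])   # timeout-request
```

In this sketch the load balancer's timeout request carries the original event, so the event routing management units can identify and retry or escalate the unprocessed request.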
[0022] In an embodiment, the network interface corresponds to an event routing manager-load balancer (EM_LB) interface that facilitates event-based communication between the plurality of event routing management units and the plurality of load balancers. The EM_LB interface is configured to enable the at least one load balancer to send the event request and receive the event response from the same instance of the event routing management unit having the active state using the header-based request dispatching.
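Header-based request dispatching as described in the embodiment above — delivering the response to the same ERM instance that published the request — could look like the following sketch. The header name "X-ERM-Instance" and the queue structure are hypothetical illustrations only.

```python
# Per-instance response queues; the header name "X-ERM-Instance" is a
# hypothetical stand-in for whatever header the LB actually uses.
INSTANCES = {"erm-1": [], "erm-2": []}

def publish(instance_id, payload):
    """An ERM instance publishes an event request stamped with its own id."""
    return {"headers": {"X-ERM-Instance": instance_id}, "body": payload}

def dispatch_response(request, response_body):
    """The LB delivers the response to the instance named in the header,
    rather than making a fresh load-balancing decision."""
    target = request["headers"]["X-ERM-Instance"]
    INSTANCES[target].append(response_body)
    return target

req = publish("erm-2", {"event": "security-incident"})
print(dispatch_response(req, {"status": "handled"}))   # erm-2
```

Because the routing key travels with the request itself, the load balancer needs no shared session state to guarantee that request and response meet at the same active instance.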
[0023] In another exemplary embodiment, a system for dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers is disclosed. The system includes a receiving unit configured to receive the event request from an interfacing unit via a network interface. The system further includes a memory and a processing unit. The processing unit is coupled with the receiving unit to receive the event request and is further coupled with the memory to execute a set of instructions stored in the memory. The processing unit is configured to monitor a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status. The processing unit is further configured to route, via at least one load balancer, the event request towards one of an event routing management unit with the active status. The event routing management unit is configured to forward the event request to at least one microservice via one of a load balancer of the plurality of load balancers. The load balancer of the plurality of load balancers is configured to receive an event response from at least one microservice.
[0024] In an exemplary embodiment, the present disclosure relates to a user equipment (UE) communicatively coupled with a system. The coupling includes steps of receiving, by the system, a connection request from the UE, sending, by the system, an acknowledgment of the connection request to the UE and transmitting a plurality of signals in response to the connection request. The system is configured to perform dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers.
[0025] In yet another exemplary embodiment, the present disclosure discloses a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for dynamic routing of an event request. The method includes receiving, by a receiving unit, the event request from an interfacing unit via a network interface. The method includes monitoring, by a processing unit, a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status. The method further includes routing, by the processing unit via at least one load balancer, the event request towards one of an event routing management unit with the active status. The method further includes forwarding, by the event routing management unit, the event request to at least one microservice via one of a load balancer of the plurality of load balancers. The method further includes receiving, by one of the load balancer of the plurality of load balancers, an event response from the at least one microservice.
[0026] The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTIVES OF THE PRESENT DISCLOSURE
[0027] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
[0028] An object of the present disclosure is to provide a system and a method for efficiently distributing incoming network traffic across a group of backend servers or group of instances via a network interface (e.g., an EM_LB interface).
[0029] Another object of the present disclosure is to distribute incoming event requests and responses by the load balancer across the ERM instances via the EM_LB interface to ensure that the system can scale horizontally, handling increasing loads effectively.
[0030] Yet another object of the present disclosure is to ensure that events are directed to healthy instances of ERM if any instance goes down, thereby minimizing downtime and maintaining high availability.
[0031] An object of the present disclosure is to provide EM_LB interface to be used by load balancers to provide zero-downtime deployments of microservices.
[0032] Yet another object of the present disclosure is to prevent overloading of the ERM instances.
[0033] Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[0034] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0035] FIG. 1 A illustrates an exemplary network architecture for implementing a system for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[0036] FIG. IB illustrates an exemplary block diagram of the system for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[0037] FIG. 1C illustrates an exemplary system architecture for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[0038] FIG. 2 illustrates an exemplary block diagram of the system for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[0039] FIG. 3 illustrates an exemplary Management and Orchestration (MANO) framework architecture, in accordance with an embodiment of the present disclosure.
[0040] FIG. 4 illustrates an exemplary flow diagram of a method for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[0041] FIG. 5 illustrates a computer system in which or with which the embodiments of the present disclosure may be implemented.
[0042] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100A - Network Architecture
102-1, 102-2... 102-N - Plurality of Users
104-1, 104-2... 104-N - Plurality of User Equipments
106 - Network
108 - System
100B - Block Diagram
110 - Processing unit
112 - Memory
114 - Plurality of Interfaces
116 - Receiving unit
118 - Load balancer
120 - Database
100C - System architecture
122 - User interfacing unit
124 - Identity and access management (IAM)
126-1, 126-2, 126-3, 126-4 - Elastic load balancer (ELB)
128, 206-1, 206-2 - Event routing manager (ERM)
130-1, 130-2... 130-n, 202 - Microservices (MS)
132 - Elastic Search Cluster
134 - Operations and Management (OAM)
136 - Cloud-based learning management system (CLMS)
132-1, 132-2, 132-3, 132-4, 132-5 - Database (DB)
200 - Block Diagram
204-1, 204-2, 204-3 - Load balancer
124 - OAM service discovery
300 - Management and Orchestration (MANO) framework architecture
302 - User interface layer
304 - Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) design function
306 - Platform foundation service
308 - Platform core service
310 - Platform operation, administration, and maintenance manager
312 - Platform resource adapters and utilities
400 - Flow diagram
500 - Computer System
510 - External Storage Device
520 - Bus
530 - Main Memory
540 - Read-Only Memory
550 - Mass Storage Device
560 - Communication Ports
570 - Processor
DETAILED DESCRIPTION
[0043] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0044] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0045] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0046] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0047] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0048] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0049] The terminology used herein is to describe particular embodiments only and is not intended to be limiting the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0050] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0051] Load balancing is a process of distributing workloads across multiple computing resources, such as application servers, virtual machines, or containers, to achieve better performance, availability, and scalability. Load balancing is typically performed by load balancers. The load balancers route all requests to access applications on the clusters to a back-end server. The load balancer receives a request for an application, selects a given server to run the application, and distributes the request to the selected back-end application server. The load balancer ensures that requests for a given application are routed to a server running that application. This helps achieve similar performance for each request, independent of the particular server destined to execute it. In order to achieve this result, the load balancer must consider factors (for example, a server's reported load, recent response times, up/down status, number of active connections, geographic location, capabilities, or how much traffic the load balancer has recently assigned to the server, etc.) that affect application performance on each server. Further, implementing load balancing can be complex, especially when dealing with large-scale systems. This requires careful planning and configuration to ensure that it works effectively. When one of the servers servicing requests experiences a failure, the service provided by the load balancer may be interrupted or experience a delay.
[0052] Accordingly, there is a need for systems and methods that allow load balancers to efficiently distribute incoming network traffic across a group of backend servers or a group of instances.
[0053] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a system and a method for facilitating communication between an event routing manager (ERM) and a load balancer (LB) via an EM_LB interface, which is used by load balancers to distribute incoming event requests and responses across the ERM instances. The EM_LB interface may be dynamically identified by the load balancer service for connecting to ERM instances, thereby enabling real-time adjustments to routing rules and load-balancing strategies without requiring service restarts. An asynchronous event-based implementation is supported to utilize the EM_LB interface efficiently.
[0054] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0055] FIG. 1A illustrates an exemplary network architecture (100A) for implementing a system (108) for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[0056] As illustrated in FIG. 1A, the network architecture (100A) may include one or more user equipments (UEs) (104-1, 104-2... 104-N) associated with one or more users (102-1, 102-2... 102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2... 102-N) may be collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more UEs (104-1, 104-2... 104-N) may be collectively referred to as the UE (104). Although only three UEs (104) are depicted in FIG. 1A, any number of UEs (104) may be included without departing from the scope of the ongoing description.
[0057] In an embodiment, the UE (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the UE (104) may include, but is not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting systems, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart televisions (TVs), computers, smart security systems, smart home systems, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the UE (104) may include, but is not limited to, intelligent, multisensing, network-connected devices, which may integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0058] Additionally, in some embodiments, the UE (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the UE (104) may include, but is not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the UE (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity such as a touchpad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the UE (104) may not be restricted to the mentioned devices and various other devices may be used.
[0059] Referring to FIG. 1A, the UE (104) may communicate with the system (108) through the network (106) for sending or receiving various types of data. In an embodiment, the network (106) may include at least one of a 5G network, a 6G network, or the like. The network (106) may enable the UE (104) to communicate with other devices in the network architecture (100A) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0060] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a radio access network (RAN), a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0061] In an embodiment, the UE (104) is communicatively coupled with the network (106). The network (106) may receive a connection request from the UE (104). The network (106) may send an acknowledgment of the connection request to the UE (104). The UE (104) may transmit a plurality of signals in response to the connection request.
[0062] Although FIG. 1A shows exemplary components of the network architecture (100A), in other embodiments, the network architecture (100A) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1A. Additionally, or alternatively, one or more components of the network architecture (100A) may perform functions described as being performed by one or more other components of the network architecture (100A).
[0063] FIG. 1B illustrates an exemplary block diagram (100B) of the system (108) for dynamic routing of the event request, in accordance with an embodiment of the present disclosure.
[0064] In an embodiment, the system (108) may include one or more processor(s) (hereafter referred to as a processing unit (110)), a memory (112), a plurality of interface(s) (114), a receiving unit (116), a load balancer (118), and a database (120). The processing unit (110) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the processing unit (110) may be configured to fetch and execute computer-readable instructions stored in the memory (112) of the system (108). The memory (112) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (112) may include any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0065] In an embodiment, the interface(s) (114) may include a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (114) may facilitate communication through the system (108). The interface(s) (114) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, the processing unit (110), the receiving unit (116), the load balancer (118), and the database (120).
[0066] The processing unit (110) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing unit (110). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing unit (110) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing unit (110) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing unit (110). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing unit (110) may be implemented by electronic circuitry. In an embodiment, the database (120) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processing unit (110).
[0067] In an embodiment, the system (108) is configured to perform dynamic routing of the event request between a plurality of event routing management units and a plurality of load balancers. In an embodiment, dynamic routing may refer to a method of directing event requests through a network based on real-time factors such as system load, availability of resources, or predefined rules. Unlike static routing, where paths are predetermined, dynamic routing involves continuously monitoring the status of event routing management units and load balancers and making on-the-fly decisions to optimize the distribution of event requests. This ensures high availability, fault tolerance, and efficient handling of network traffic, particularly in systems with varying workloads or multiple processing entities. To perform dynamic routing, the receiving unit (116) may initially receive an event request from an interfacing unit via a network interface. In an aspect, the event request may be any message, signal, or data packet that requires processing by one or more microservices (MS) within the system (108). The receiving unit (116) may receive the event request when an event is triggered from a source, such as an external service or microservice (MS). The network interface, acting as a communication gateway, receives the incoming event request with necessary information such as headers, request types, priority levels, and specific routing instructions and forwards it to the receiving unit (116).
[0068] Upon receiving the event request, the processing unit (110) is configured to monitor the status of each of the plurality of event routing management units. The status of each event routing management unit (interchangeably referred to as an event routing manager (ERM)) may indicate whether it is in an active state or an inactive state. The active status may signify that the event routing management unit is ready to handle requests. In contrast, the inactive status may indicate that a particular event routing management unit is temporarily unavailable due to overload, maintenance, or failure conditions. In an aspect, the monitoring may involve checking the availability, processing capacity, and response times of the event routing management units, ensuring that only active and operational units are considered for routing. In an embodiment, the status information may be periodically updated and stored in the database (120) accessible to the processing unit (110).
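By way of a non-limiting illustration, the status monitoring described above may be sketched as a heartbeat-based registry, in which an ERM instance is treated as active only if it has reported within a staleness window. The class name, the window, and the instance identifiers below are illustrative assumptions, not part of the disclosure:

```python
import time


class ErmStatusRegistry:
    """Illustrative sketch: tracks active/inactive status of ERM instances."""

    def __init__(self, stale_after_s=30.0):
        self.stale_after_s = stale_after_s
        self._heartbeats = {}  # instance id -> last heartbeat timestamp

    def heartbeat(self, instance_id, now=None):
        # An ERM instance reports that it is alive and able to take requests.
        self._heartbeats[instance_id] = time.monotonic() if now is None else now

    def is_active(self, instance_id, now=None):
        # An instance is active only if it reported within the staleness window.
        now = time.monotonic() if now is None else now
        last = self._heartbeats.get(instance_id)
        return last is not None and (now - last) <= self.stale_after_s

    def active_instances(self, now=None):
        # Only instances in the active state are considered for routing.
        return [i for i in self._heartbeats if self.is_active(i, now)]
```

In a deployment, the registry contents would be periodically persisted to a database, consistent with the status information being stored in the database (120).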
[0069] Based on the monitored status of the event routing management units, the processing unit (110), in conjunction with at least one load balancer (e.g., the load balancer 118), routes the event request towards one of the event routing management units with the active status. In an embodiment, the processing unit (110) may employ a distribution process to select the appropriate event routing management unit for handling the event request. The distribution process may include, but is not limited to, round-robin distribution, load-based selection, or header-based request dispatching. In one embodiment, the header of the event request may contain specific parameters, such as priority levels or request types, which assist in selecting the most appropriate event routing management unit. The selection ensures that the event request is routed to an available and operational event routing management unit.
[0070] To further elaborate, in the round-robin distribution process, the event requests are routed sequentially to each event routing management unit (e.g., ERM) in turn, ensuring an even and balanced distribution of requests across all available units. This method does not consider the load on individual ERMs; still, it ensures that all units receive an approximately equal number of event requests over time, making it simple yet effective for managing workloads evenly. The load-based selection process involves monitoring the current processing load or capacity of each event routing management unit. The event request is routed to the ERM with the least load or highest available capacity, ensuring that no single unit becomes overwhelmed. This dynamic distribution adapts to the real-time workload of each ERM, optimizing system performance by preventing bottlenecks and overload conditions. Additionally, in the header-based request dispatching, the event request is routed based on specific parameters included in the request header. These parameters may include priority levels, request types, or specific service-level agreements (SLAs), which help determine the most appropriate ERM. For instance, high-priority requests may be routed to units that specialize in urgent tasks, while routine or lower-priority requests can be assigned to standard units, allowing for more intelligent and specialized request handling.
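By way of a non-limiting illustration, the three distribution processes described above may be sketched as interchangeable selector functions. The instance names, load figures, and the "priority" header field are illustrative assumptions:

```python
import itertools


def round_robin(instances):
    # Cycle through the available instances in order, one request at a time.
    cycle = itertools.cycle(instances)
    return lambda _request: next(cycle)


def least_loaded(loads):
    # Route to the instance currently reporting the smallest load.
    return lambda _request: min(loads, key=loads.get)


def header_based(routes, default):
    # Dispatch on a header parameter such as a priority level.
    return lambda request: routes.get(request.get("priority"), default)
```

Because each selector exposes the same call shape (a request in, an instance out), the distribution process can be swapped without changing the surrounding routing logic.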
[0071] In some embodiments, if the event routing management unit does not process the event request within a predefined time, the processing unit (110) may send a timeout request to the load balancer. In order to determine that the event request is not performed within the predefined time, the processing unit (110) continuously monitors the status of the event routing management unit to check whether it has completed the processing of the event request. If the request is not completed within the predefined time period (for example, within 15 minutes), the processing unit (110) recognizes that the event request has not been performed. This may trigger the generation of a timeout request, which is then sent to the load balancer. The timeout request may refer to a notification or signal that is triggered when the event routing management unit fails to process the event request within the predefined time period. The predefined time may be configured based on factors such as system load, the complexity of the request, or the priority of the event. For instance, a high-priority event request may have a shorter timeout duration than a lower-priority one. The load balancer may then reroute the event request to another event routing management unit with the active status, ensuring that the system (108) continues to operate efficiently without failures.
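By way of a non-limiting illustration, the timeout-and-reroute behaviour may be sketched as follows. The synchronous `process` callback stands in for the asynchronous processing and timer of an actual deployment, and all names are illustrative assumptions:

```python
def route_with_timeout(request, instances, process, timeout_s):
    """Try each active ERM instance; on timeout, reroute to the next one.

    `process(instance, request)` is an illustrative stand-in that returns
    (elapsed_seconds, response); a real deployment would use an
    asynchronous timer rather than a reported elapsed time.
    """
    for instance in instances:
        elapsed, response = process(instance, request)
        if elapsed <= timeout_s:
            return instance, response
        # Timeout request raised to the load balancer; try another instance.
    raise TimeoutError("no ERM instance completed the request in time")
```

For example, if the first instance exceeds the timeout, the request is rerouted to the next active instance rather than being dropped.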
[0072] Once the event request is routed to the selected event routing management unit, the event routing management unit may forward it to at least one microservice (e.g., MS) via one of the load balancers. The load balancer may distribute the event request among different microservices (MS) based on their availability, capacity, or predefined routing logic. In an aspect, the load balancer is responsible for distributing the event request evenly across multiple microservices, ensuring that no single microservice becomes overloaded and that system resources are used efficiently. The system (108) may include a plurality of load balancers to handle different categories of microservices, such as one for data processing microservices and another for notification microservices.
[0073] By way of an example, consider an event request that comes into the system (108) for user authentication, and several microservices (e.g., MS) are responsible for this task. These microservices are spread across different instances to handle high volumes of requests. The load balancer may receive the request from the event routing management unit and intelligently distribute it to the appropriate microservice based on predefined criteria, such as current load, availability, or proximity. If microservice A is currently handling a high number of requests, the load balancer may forward the new request to microservice B, which is under less load. In another case, if the load balancer detects that microservice A is inactive due to downtime or maintenance, it may automatically forward the event request to the next available microservice, such as microservice C. Thus, the load balancer ensures that the system (108) remains efficient and responsive, by routing requests evenly across the available microservices. Each load balancer in the plurality of load balancers can specialize in routing the event request to different microservices or groups of microservices, depending on the system’s architecture and needs.
[0074] In an embodiment, the at least one microservice may perform specific tasks related to the event request, such as data processing, resource allocation, or task execution. The microservice may refer to a small, independent, and modular service that performs a specific task or set of tasks within the system. For example, the microservice may be responsible for handling user authentication, data storage, notification services, or billing calculations. Each microservice operates autonomously, making the system highly flexible, scalable, and easily manageable.
[0075] In an aspect, the microservice may include, but is not limited to, an authentication microservice, a data processing microservice, a notification microservice, a logging microservice, and a payment or billing microservice. The authentication microservice can handle requests related to verifying user credentials, managing user sessions, and ensuring secure access to various system components. The data processing microservice may process data requests, such as transforming data formats, performing calculations, or running machine learning algorithms on received data before returning the result. The notification microservice may send real-time alerts or notifications via email, SMS, or push notifications to relevant users or systems based on the event. The logging microservice can log event data, errors, or any activity within the system, allowing for audit trails and system monitoring. Further, the payment or billing service handles requests related to charging, billing, or processing payments for customers or users within a financial or subscription-based system.
[0076] Once the event request is processed by the at least one microservice, an event response is generated. In an aspect, the event response may refer to the output or result generated by the at least one microservice after processing the event request. The nature of the event response depends on the type of event being handled. For example, if the event request is related to user authentication, the event response could be a success message indicating that the user has been authenticated or a failure message if the credentials are invalid. In the case of a data processing request, the event response may include a processed result, such as a summary report or confirmation that the data operation was successful. Similarly, for notification events such as sending emails or SMS, the event response may confirm whether the notification was successfully delivered or report any errors like delivery failures. For transaction-based events, such as financial operations or purchase order processing, the event response may confirm whether the transaction was successful or failed, potentially requiring further validation or indicating issues like insufficient funds.
[0077] The load balancer of the plurality of load balancers may receive the event response from the at least one microservice and direct it back to the originating event routing management unit. In an aspect, the selection of the load balancer may be done using a distribution process similar to that used for the event routing management unit. The event routing management unit then forwards the event response to the processing unit (110), which subsequently directs the response to the interfacing unit (122) via the network interface. This ensures that the event request cycle is completed, and the appropriate response is delivered back to the originator, which may be an external client or another system component. The communication flow ensures that the response reaches the microservice responsible for publishing the event, maintaining consistency in event tracking and handling.
[0078] In an embodiment, the network interface may correspond to an event routing manager_load balancer (EM_LB) interface. The EM_LB interface facilitates event-based transfer of the event requests and event responses between the load balancers and the event routing management units. In particular, the EM_LB interface is configured to enable the at least one load balancer to send the event request and receive the event response from the same instance of the event routing management unit having the active state using the header-based request dispatching. The EM_LB interface ensures that real-time status updates and routing decisions may be efficiently communicated between the components, further optimizing the routing process. The EM_LB interface allows for dynamic adjustments in response to fluctuating system demands, such as high traffic volumes or changes in microservice availability.
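By way of a non-limiting illustration, one way (an assumption, not mandated by the disclosure) to realize header-based dispatching that keeps a request and its later response on the same ERM instance is to derive the instance deterministically from a correlation header:

```python
import hashlib


def sticky_instance(correlation_id, instances):
    """Map a correlation header value to one ERM instance deterministically.

    Illustrative sketch: because the mapping depends only on the header
    value, the event request and its event response land on the same
    instance, as the EM_LB header-based dispatching requires.
    """
    digest = hashlib.sha256(correlation_id.encode("utf-8")).digest()
    return instances[int.from_bytes(digest[:8], "big") % len(instances)]
```

A production dispatcher would additionally have to re-map header values whose instance has gone inactive, for example by consulting the monitored status before applying the hash.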
[0079] Referring to FIG. 1C, an exemplary system architecture (100C) for dynamic routing of the event request is illustrated, in accordance with an embodiment of the present disclosure.
[0080] As shown in FIG. 1C, the system architecture (100C) includes a user interface (UI/UX) (122), an identity and access management (IAM) (124), a plurality of elastic load balancers (ELBs) (e.g., ELB 1 (126-1), ELB 2 (126-2), ELB 3 (126-3), and ELB 4 (126-4)), an event routing manager (ERM) (128), a plurality of microservices (e.g., MS-1 (130-1), MS-2 (130-2)... MS-n (130-n)), an elastic search cluster (132), an operations and management (OAM) (134), and a cloud learning management system (CLMS) (136). The plurality of ELBs (collectively referred to as the ELB (126)) may be in communication with the ERM (128) via the EM_LB interface. The UI/UX (122), the IAM (124), the ERM (128), the ELB (126), the plurality of microservices (collectively referred to as the MS (130)), and the elastic search cluster (132) may be in communication with each other. The ERM (128) may be in communication with the OAM (134) and the CLMS (136).
[0081] In an aspect, the MS (130) may be related to publishers or subscribers. The elastic search cluster (132) includes a plurality of databases (DBs) (e.g., DB (132-1), DB (132-2), DB (132-3), DB (132-4), DB (132-5), DB (132-6)).
[0082] In an aspect, the ERM (128) involves multiple service instances running to handle event workloads. The ERM (128) and the ELB (126) may perform event request routing via the EM_LB interface. The event request may come from subscribers or publishers. Load balancing is used to efficiently distribute incoming network traffic across a group of backend servers or a group of instances.
[0083] In an implementation, dynamic routing of ERM instances (for example, round robin, header-based routing) is facilitated between the ELB (126) and the MS (130). The microservice instances may be deployed in active-active mode. The ELB (126) may check the health of ERM instances to ensure the availability of ERM instances. The OAM (134) is configured to store the health status of ERM instances. The ELB (126) may get information about the health status of ERM instances from the OAM (134). The ELB (126) may send the event request only to the healthy instances. If any instance goes down, the ELB (126) may send the event request to another healthy ERM instance. For example, in FIG. 1C, the microservice (MS-1) (130-1) is cross-marked to indicate that MS-1 (130-1) is not in a healthy condition, so the event request is sent to another microservice instance (for example, MS-2 (130-2)). In this way, the ELB (126) distributes the ingress traffic on the available instances, thereby minimizing system downtime and maintaining high availability.
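By way of a non-limiting illustration, the health-aware routing performed by the ELB (126) may be sketched as a filter over the health status fetched from the OAM (134). The dictionary below stands in for that fetched status, and all names are illustrative assumptions:

```python
def pick_healthy(instances, health, start=0):
    """Return the first healthy instance at or after `start`, wrapping around.

    Illustrative sketch: `health` stands in for the per-instance status
    the ELB obtains from the OAM; a cross-marked (unhealthy) instance is
    skipped and the request falls through to the next healthy one.
    """
    n = len(instances)
    for offset in range(n):
        candidate = instances[(start + offset) % n]
        if health.get(candidate, False):
            return candidate
    raise RuntimeError("no healthy instance available")
```

In the FIG. 1C scenario, with MS-1 cross-marked as unhealthy, the request intended for MS-1 would fall through to MS-2.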
[0084] FIG. 2 illustrates an exemplary block diagram (200) of the system (108) for dynamic routing of the event request between a plurality of event routing management units (e.g., ERM) and a plurality of load balancers (e.g., ELB), in accordance with an embodiment of the present disclosure.
[0085] The block diagram (200) includes at least one microservice (MS) (202), at least one load balancer (204-1) (analogous to the load balancer 118), a plurality of load balancers (204-2, 204-3) (collectively referred to as the load balancers (204)), a plurality of event routing managers (ERM) (206-1, 206-2) (collectively referred to as the ERM (206)), and an OAM service discovery (208). In one aspect, the ERM (206) may be implemented as an event management unit (206).
[0086] In an aspect, the MS (202) may function as either a publisher or a subscriber. As a publisher, the MS (202) generates and sends event notifications, such as updates or changes in its state, to the system. These events are then processed and routed by the event routing management units (e.g., the ERM (206)) to relevant components. As a subscriber, the MS (202) registers to receive specific event notifications from the system, indicating its interest in certain types of events. The plurality of load balancers (204) are configured to determine whether an event request is coming from the publisher microservices or the subscriber microservices. The plurality of load balancers (204) may be in communication with the plurality of ERM (206) via the EM_LB interface. The OAM service discovery (208) may be in communication with the plurality of load balancers (204) and the ERM (206-1, 206-2). The OAM service discovery (208) includes details about the ERM (for example, registration and deregistration of ERM instances, availability of ERM instances, etc.).
[0087] In an implementation, the MS (202) may send an event request to the load balancer (204-1). Here, the load balancer (204-1) may function as a load balancer controller. The load balancer (204-1) may monitor the health of the ERM (206-1, 206-2) and route the event request only to the healthy ERM instances based on selection algorithms (for example, header-based request dispatch). The load balancer (204-1) sends the received event request to one of the healthy ERMs (for example, the ERM (206-1)) via the EM_LB interface. Further, the ERM (206-1) can also perform a round-robin process to select one of the load balancers (for example, the LB (204-2)) via the EM_LB interface to distribute traffic uniformly. The load balancer (204-2) may send an event response against the event request to the same MS instance which has published the event.
[0088] For the sake of clarity, consider a scenario where a microservice (MS) instance, responsible for handling user login requests for an online application, sends an event request to the load balancer (204-1). This load balancer (204-1) functions as a controller that not only routes requests but also monitors the health of the Event Routing Management (ERM) instances (206-1, 206-2). For instance, the ERM (206-1) may be fully operational, while the ERM (206-2) is undergoing maintenance or facing issues. The load balancer (204-1) applies selection algorithms, such as header-based request dispatching, where it checks the event request’s header information, which may contain parameters like request type or priority level. Based on this analysis and the health status of the ERM instances, the load balancer routes the event request to the healthy ERM (in this case, ERM (206-1)) via the EM_LB interface.
[0089] In continuation with the above example, the ERM (206-1) may then need to further distribute traffic among a second set of load balancers to handle the incoming requests more efficiently. To do so, the ERM (206-1) performs a round-robin process, a common load-balancing technique, to uniformly distribute requests across a plurality of load balancers, such as the load balancer (204-2). The round-robin process ensures that requests are evenly distributed, preventing any single load balancer from being overwhelmed. Finally, the load balancer (204-2), after processing the event request, sends an event response back to the MS instance that originally published the event, ensuring that the communication loop is complete. For example, if the MS instance had requested user authentication from a login service, the load balancer would return the authentication result (e.g., success or failure) back to that MS instance.
[0090] In an aspect, ERM microservice (MS) instances are run in multiple instances in active-active mode. Each of the ERM MS is being served with multiple load balancer (LB) instances. The LB distributes the load on MS instances in roundrobin manner. The LB also ensures that the event response against any event request is sent to same MS instance which has published the event. The LB sends the event request and receives the event response via the EM_LB interface on the same ERM instance using header-based routing present in the LB. Further, if the event request has not been completed in a given time, then a timeout request is received on the EM_LB interface. The event request handling is done by the MS as per execution need.
[0091] According to an aspect of the present disclosure, the load balancers (204) may perform dynamic mapping of endpoints, which includes the EM_LB interface, details of ERM instances obtained using service discovery mechanisms, and dynamic routing between ERM instances. In particular, the dynamic mapping referred to in the present disclosure involves the real-time identification, configuration, and association of endpoints (such as event routing management units and load balancers) within the system (108). In this aspect, dynamic mapping ensures that the system (108) can adjust the assignment of the event request to the most appropriate instances of the ERMs, depending on their availability, status, and load conditions. In practice, this involves the EM_LB interface, which facilitates communication between the ERM instances and the load balancers. Through dynamic mapping, the load balancers may access and maintain up-to-date information about all ERM instances, such as their operational status (active or inactive) and available processing capacity. This is achieved using service discovery mechanisms that constantly monitor the status and configuration of the ERM instances within the system (108). With the dynamic mapping, the load balancers may dynamically route the event request to the appropriate ERM instances. This process helps distribute workloads evenly across active ERM instances, prevents system overload, and optimizes resource allocation by routing the event request to the most suitable ERM unit in real time. It enhances system efficiency by ensuring that event requests are always routed to the most capable and available ERM instance.
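The service-discovery side of dynamic mapping can be illustrated with a small registry that tracks ERM endpoints and reports only the instances whose heartbeat is recent enough to count as active. The class name, TTL value, and heartbeat scheme are assumptions made for the sketch, not details of the disclosed system.

```python
import time

# Hedged sketch of a service-discovery registry: endpoints register,
# send heartbeats, and are reported as active only while their last
# heartbeat falls within the time-to-live window.

class ServiceRegistry:
    def __init__(self, ttl_s=5.0):
        self.ttl_s = ttl_s
        self._endpoints = {}  # instance name -> (address, last heartbeat)

    def register(self, name, address):
        self._endpoints[name] = (address, time.monotonic())

    def heartbeat(self, name):
        address, _ = self._endpoints[name]
        self._endpoints[name] = (address, time.monotonic())

    def deregister(self, name):
        self._endpoints.pop(name, None)

    def active_endpoints(self):
        # only instances seen within the TTL count as active
        now = time.monotonic()
        return {name: addr for name, (addr, seen) in self._endpoints.items()
                if now - seen <= self.ttl_s}
```

A load balancer consulting such a registry before each dispatch would always route the event request to an instance that is currently registered and active.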
[0092] The ERM (206) maintains a dynamic mapping of endpoints, including LB interface details. The load balancer (204) monitors the health of ERM instances and routes traffic only to the healthy ERM instances. The load balancer (204) performs routing to ERM instances through the EM_LB interface using an algorithm (for example, round-robin algorithms and header-based request dispatch). The ERM (206) may execute the round-robin algorithm or process across LB interfaces to distribute traffic uniformly. As the load balancers (204) route the event request only to the healthy ERM instances via the EM_LB interface, the load balancer (204) helps in providing zero-downtime deployments of microservices. In this way, the load balancer (204) offloads traffic-handling responsibilities from ERM instances and leaves the ERM (206) free to focus only on the processing of the application logic.
[0093] In an implementation, the load balancers perform the routing of event requests via the EM_LB interface using round-robin algorithms. The round-robin load balancing algorithm distributes the event requests across a group of servers: each event request is forwarded to the next server in turn, and after the last server the load balancer returns to the top of the list and repeats. Round-robin network load balancing thus rotates connection requests among servers in the order in which requests are received. The EM_LB interface is used by the LB to distribute incoming event requests and responses across these ERM instances to ensure that the system can scale horizontally, handling increasing loads effectively. Further, the ERM may use the round-robin technique to distribute outgoing event requests toward the LB to ensure that the system can scale horizontally and handle increasing loads effectively.
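The round-robin rotation just described can be expressed in a few lines. The server names are illustrative only; the essential point is the wrap-around to the top of the list after the last server.

```python
# Minimal round-robin dispatcher: each request goes to the next server
# in the list, wrapping back to the top after the last one.

def round_robin(servers):
    i = 0
    while True:
        yield servers[i]
        i = (i + 1) % len(servers)  # wrap to the top after the last server

rr = round_robin(["erm-1", "erm-2", "erm-3"])
order = [next(rr) for _ in range(5)]
print(order)  # ['erm-1', 'erm-2', 'erm-3', 'erm-1', 'erm-2']
```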
[0094] According to an aspect of the present disclosure, the EM_LB interface can be dynamically identified by the load balancer service for connecting to ERM instances, enabling real-time adjustments to routing rules and load-balancing strategies without requiring service restarts. The EM_LB interface is configured to facilitate asynchronous event-based communication between the ERM (206) and the load balancers (204) to utilize the interface efficiently.
[0095] In an aspect, the EM_LB interface is used for load balancing algorithms that consider various factors (for example, server health, the round-robin load balancing technique, and user-specific preferences) to intelligently distribute events to the ERM. The EM_LB interface used by the load balancers ensures scalability, high availability, fault tolerance, traffic management capabilities, and efficient resource utilization.
[0096] FIG. 3 illustrates an exemplary representation of a user interface layer (302) within a Management and Orchestration (MANO) framework architecture (300), in accordance with an embodiment of the present disclosure.
[0097] In an example, the ERM (128, 206) may be characterized by the user interface layer (302). In an aspect, the user interface layer (302) may include a menu having NFV and SDN design functions (304), platform foundation services (306), platform core services (308), platform operation, an administration and maintenance manager (310), and platform resource adapters and utilities (312). In an example, using the user interface layer (302), various parameters associated with the ERM (128, 206) may be varied by an operator based on the requirements. In an example, the parameters may include a number of microservices associated with the ERM (140, 230) and a number of interfacing units that can be deployed simultaneously.
[0098] The NFV and SDN design functions (304) include a virtualized network function (VNF) lifecycle manager (compute) that is a specialized component focused on managing the compute resources associated with VNF throughout their lifecycle. The NFV and SDN design functions (304) may include a VNF catalog that is a repository that stores and manages metadata, configurations, and templates for VNF, facilitating their deployment and lifecycle management. The NFV and SDN design functions (304) may include network services catalog, network slicing and service chaining manager, physical and virtual resource manager and CNF lifecycle manager. The network services catalog serves as a repository for managing and storing detailed information about network services, including their specifications and deployment requirements. The network slicing & service chaining manager is responsible for orchestrating network slices and service chains, ensuring efficient allocation and utilization of network resources tailored to various services. The physical and virtual resource manager oversees both physical and virtual resources, handling their allocation, monitoring, and optimization to ensure seamless operation across the network infrastructure. The CNF lifecycle manager manages the complete lifecycle of the CNF, including onboarding, instantiation, scaling, monitoring, and termination, thereby facilitating the efficient deployment and operation of network functions in a cloud-native environment.
[0099] The platform foundation services (306) may support an asynchronous event-based processing model implemented by the ERM (128), enabling concurrent handling of multiple event requests. The platform foundation services (306) include a microservices elastic load balancer, identity & access manager, the command line interface (CLI), central logging manager, and ERM (140, 230). The microservices elastic load balancer ensures that incoming traffic is evenly distributed across multiple microservices, enhancing performance and availability. The identity & access manager handles user identity management and access control, enforcing permissions and roles to secure resources and services. The CLI offers a text-based method for users to interact with the platform, enabling command execution and configuration management. The central logging manager consolidates log data from various system components, providing a unified view for effective monitoring, troubleshooting, and data analysis.
[00100] The platform core services (308) include NFV infrastructure monitoring manager, assurance manager, performance manager, policy execution engine, capacity monitoring manager, release management repository, configuration manager & global control tower (GCT), NFV platform decision analytics, platform Not only SQL database (NoSQL DB), platform schedulers & cron jobs, VNF backup & upgrade manager, microservice auditor and platform operation, administration and maintenance manager. The NFV infrastructure monitoring manager tracks and oversees the health and performance of NFV infrastructure. The assurance manager ensures service quality and compliance with operational standards. The performance manager monitors system performance metrics to optimize efficiency. The policy execution engine enforces and executes policies across the platform. The capacity monitoring manager tracks resource usage and forecasts future needs. The release management repository manages software releases and version control. The configuration manager handles system configurations, ensuring consistency and automation. The GCT provides centralized oversight and management of platform operations. The NFV platform decision analytics utilizes data analytics to support decision-making. The NoSQL DB stores unstructured data to support flexible and scalable data management. The platform schedulers and cron jobs automate and schedule routine tasks and workflows. The VNF backup and upgrade manager oversees the backup and upgrading of VNFs. The microservice auditor ensures the integrity and compliance of microservices across the platform.
[00101] The platform operation, administration, and maintenance manager (310) may oversee operational aspects of the MANO framework architecture (300). The platform operation, administration, and maintenance manager (310) may be responsible for implementing a load-balancing mechanism used by the ERM (128) to distribute the event request across multiple microservices instances.
[00102] The platform resource adapter and utilities (312) may provide necessary tools and interfaces for interacting with an underlying network infrastructure, i.e., the NFV architecture. The platform resource adapter and utilities (312) may include platform external API adapter and gateway, generic decoder, and indexer (XML, CSV, JSON), docker service adapter (DSA), API adapter and NFV gateway. The platform external API adapter and gateway facilitates seamless integration with external APIs and manages data flow between external systems and the platform. The generic decoder and indexer processes and organizes data from various formats such as XML, comma- separated values (CSV), and JSON, ensuring compatibility and efficient indexing. The DSA is a microservices-based system designed to deploy and manage Container Network Functions (CNFs) and their components (CNFCs) across Docker nodes. It offers REST endpoints for key operations, including uploading container images to a Docker registry, terminating CNFC instances, and creating Docker volumes and networks. CNFs, which are network functions packaged as containers, may consist of multiple CNFCs. The DSA facilitates the deployment, configuration, and management of these components by interacting with Docker's API, ensuring proper setup and scalability within a containerized environment. This approach provides a modular and flexible framework for handling network functions in a virtualized network setup. The API adapter interfaces with services, allowing integration and management of cloud resources. The NFV gateway acts as a bridge for NFV communications, coordinating between NFV components and other platform elements.
[00103] FIG. 4 illustrates another exemplary flow diagram of a method (400) for dynamic routing of an event request, in accordance with an embodiment of the present disclosure.
[00104] At step 402, the method (400) includes receiving (402), by a receiving unit (116), the event request from an interfacing unit (122) via a network interface. In an embodiment, the network interface corresponds to an event routing manager_load balancer (EM_LB) interface. The EM_LB interface facilitates event-based communication between the plurality of event routing management units (206) and the plurality of load balancers (220-2, 220-3). In an embodiment, the EM_LB interface is configured to enable the at least one load balancer (204-1) to send the event request to, and receive the event response from, the same instance of the event routing management unit having the active status, using the header-based request dispatching.
[00105] At step 404, the method (400) includes monitoring (404), by a processing unit (110), a status of each of the plurality of event routing management units (206), wherein the status is one of an active status or an inactive status.

[00106] At step 406, the method (400) includes routing (406), by the processing unit (110) via at least one load balancer (204-1), the event request towards one of an event routing management unit with the active status.
[00107] At step 408, the method (400) includes forwarding (408), by the event routing management unit, the event request to at least one microservice (MS) (202) via one of a load balancer of the plurality of load balancers (220-2, 220-3).
[00108] At step 410, the method (400) includes receiving (410), by one of the load balancer of the plurality of load balancers (204-2, 204-3), an event response from the at least one MS (202).
[00109] In an embodiment, selection of one of the event routing management unit from the plurality of event routing management units (206) with the active status is performed using a distribution process and a header-based request dispatching.
[00110] In an embodiment, the selection of one of the load balancer from the plurality of load balancers is performed using the distribution process.
[00111] In an embodiment, the method further includes sending, by the at least one load balancer, a timeout request to the plurality of event routing management units via the network interface upon determining that the event request is not performed within a predefined time.
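The timeout behaviour described above can be sketched as follows: if the handler for an event request does not complete within the predefined time, a timeout notice is produced in place of a normal response (in the disclosed system, the load balancer would send a timeout request over the EM_LB interface at that point). The handler signature, field names, and use of a thread pool are assumptions made for this illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

# Hedged sketch: run an event handler with a deadline and report a
# timeout if the handler does not finish within the predefined time.

def handle_event(request, handler, timeout_s=2.0):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(handler, request)
        try:
            body = future.result(timeout=timeout_s)
            return {"status": "ok", "body": body}
        except FutureTimeout:
            # here the LB would issue a timeout request via the interface
            return {"status": "timeout", "request_id": request.get("id")}

print(handle_event({"id": "r1"}, lambda r: "done", timeout_s=1.0))
# {'status': 'ok', 'body': 'done'}
```

Note that this synchronous sketch still waits for the slow handler to finish on exit; a production implementation would cancel or detach the work instead.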
[00112] In another exemplary embodiment, the present disclosure discloses a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for dynamic routing of an event request. The method includes receiving, by a receiving unit, the event request from an interfacing unit via a network interface. The method includes monitoring, by a processing unit, a status of each of the plurality of event routing management units. The status is one of an active status or an inactive status. The method further includes routing, by the processing unit via at least one load balancer, the event request towards one of an event routing management unit with the active status. The method further includes forwarding, by the event routing management unit, the event request to at least one microservice via one of a load balancer of the plurality of load balancers. The method further includes receiving, by one of the load balancer of the plurality of load balancers, an event response from the at least one microservice.
[00113] FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented. As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read only memory (540), a mass storage device (550), a communication port (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor (570) and communication ports (560). Processor (570) may include various modules associated with embodiments of the present disclosure.
[00114] In an embodiment, the communication port (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[00115] In an embodiment, the memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory (540) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chips for storing static information e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
[00116] In an embodiment, the mass storage (550) may be any current or future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[00117] In an embodiment, the bus (520) communicatively couples the processor(s) (570) with the other memory, storage and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB) or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[00118] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces may be provided through network connections connected through the communication port (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[00119] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skills in the art.
[00120] The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
[00121] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
[00122] As is evident from the above, the present disclosure provides a technically advanced solution by providing a system and a method for facilitating communication between the ERM and the LB via the EM_LB interface. The EM_LB interface distributes incoming event requests and responses by the LB across these ERM instances to ensure that the system scales horizontally and handles increasing loads effectively. The microservice instances are deployed in active-active mode to ensure the availability of a microservice to serve the traffic even if any instance goes down. The load balancer (LB) monitors the health of ERM instances and ensures that events are directed to healthy instances of the ERM if any instance goes down, thereby minimizing downtime and maintaining high availability. The EM_LB interface enables real-time adjustments to routing rules and load-balancing strategies without requiring service restarts. The EM_LB interface is used by the load balancer to send a request and receive the response on the same instance of the ERM using header-based routing present in the LB. The load balancers provide zero-downtime deployments of microservices. The load balancer is configured to perform dynamic mapping of endpoints (for example, EM_LB interface details of ERM instances using service discovery mechanisms, providing dynamic routing between ERM instances, etc.). In this way, the LBs offload traffic-handling responsibilities from ERM instances and make the ERM available to focus only on processing of the application logic. The EM_LB interface used by load balancers ensures scalability, high availability, fault tolerance, traffic management capabilities, and efficient resource utilization.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00123] The present disclosure provides a system and a method for dynamic routing of an event request between a plurality of event routing management units and a plurality of load balancers.
[00124] The present disclosure provides a system and a method for dynamic routing of event requests that enhances the efficiency of handling large volumes of event-based communications in distributed microservice architectures. By utilizing a combination of load balancers and event routing management units, the system ensures optimal resource allocation and reduces response times under varying workloads.
[00125] The present disclosure provides a robust mechanism for monitoring the active status of event routing management units, ensuring that event requests are routed only to available and responsive units. This proactive monitoring prevents faults, improves reliability, and enhances the overall system performance.
[00126] The present disclosure allows for flexible routing strategies, including round-robin distribution, load-based selection, and header-based request dispatching. These strategies can be dynamically adjusted based on real-time conditions, enabling better load management and minimizing processing delays.
[00127] The present disclosure also introduces a timeout mechanism that ensures the timely processing of event requests. If a request is not handled within a predefined time, a timeout alert is generated, allowing for quicker issue resolution and preventing system slowdowns or failures due to unresponsive components. This mechanism enhances fault tolerance and ensures smoother operations in event-driven architectures.

Claims

CLAIMS We claim:
1. A method for dynamic routing of an event request, the method comprising: receiving (402), by a receiving unit (116), the event request from an interfacing unit (122) via a network interface; monitoring (404), by a processing unit (110), a status of each of a plurality of event routing management units (206), wherein the status is one of an active status or an inactive status; routing (406), by the processing unit (110) via at least one load balancer (204- 1), the event request towards one of an event routing management unit with the active status; forwarding (408), by the event routing management unit, the event request to at least one microservice (MS) (202) via one of a load balancer of the plurality of load balancers (220-2, 220-3); and receiving (410), by one of the load balancer of the plurality of load balancers (204-2, 204-3), an event response from the at least one MS (202).
2. The method as claimed in claim 1, wherein selection of one of the event routing management unit from the plurality of event routing management units (206) with the active status is performed using a distribution process and a header-based request dispatching.
3. The method as claimed in claim 2, wherein the selection of one of the load balancer from the plurality of load balancers (204-2, 204-3) is performed using the distribution process.
4. The method as claimed in claim 1, further comprising: sending, by the at least one load balancer (204-1), a timeout request to the plurality of event routing management units (206) via the network interface upon determining that the event request is not performed within a predefined time.
5. The method as claimed in claim 1, wherein the network interface corresponds to an event routing manager_load balancer (EM_LB) interface that facilitates event-based communication between the plurality of event routing management units (206) and the plurality of load balancers (220-2, 220-3).
6. The method as claimed in claim 5, wherein the EM_LB interface is configured to enable the at least one load balancer (204-1) to send the event request and receive the event response from same instance of the event routing management unit having the active state using the header-based request dispatching.
7. A system for dynamic routing of an event request, the system comprising: a receiving unit configured to receive the event request from an interfacing unit via a network interface; a memory (112); and a processing unit (110) coupled with the receiving unit (116) to receive the event request and is further coupled with the memory (112) to execute a set of instructions stored in the memory (112), the processing unit (112) is configured to: monitor a status of each of a plurality of event routing management units (206), wherein the status is one of an active status or an inactive status; route, via at least one load balancer (204-1), the event request towards one of an event routing management unit with the active status; forward, by the event routing management unit, the event request to at least one microservice (MS) (202) via one of a load balancer of the plurality of load balancers (220-2, 220-3); and receive, by one of the load balancer of the plurality of load balancers (204-2, 204-3), an event response from the at least one MS (202).
8. The system as claimed in claim 7, wherein selection of one of the event routing management unit from the plurality of event routing management units (206) with the active status is performed using a distribution process and a header-based request dispatching.
9. The system as claimed in claim 8, wherein the selection of one of the load balancer from the plurality of load balancers (204-2, 204-3) is performed using the distribution process.
10. The system as claimed in claim 7, wherein the at least one load balancer (204-1) is configured to: send a timeout request to the plurality of event routing management units (206) via the network interface upon determining that the event request is not performed within a predefined time.
11. The system as claimed in claim 7, wherein the network interface corresponds to an event routing manager_load balancer (EM_LB) interface that facilitates event-based communication between the plurality of event routing management units (206) and the plurality of load balancers (220-2, 220-3).
12. The system as claimed in claim 11, wherein the EM_LB interface is configured to enable the at least one load balancer (204-1) to send the event request and receive the event response from same instance of the event routing management unit having the active state using the header-based request dispatching.
13. A user equipment (UE) (104) communicatively coupled with a system (108), the coupling comprises steps of: receiving, by the system (108), a connection request; sending, by the system (108), an acknowledgment of the connection request to the UE (104); and transmitting a plurality of signals in response to the connection request, wherein the system (108) is configured to perform dynamic routing of an event request as claimed in claim 7.
14. A computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for dynamic routing of an event request, the method comprising: receiving (402), by a receiving unit (116), the event request from an interfacing unit (122) via a network interface; monitoring (404), by a processing unit (110), a status of each of a plurality of event routing management units (206), wherein the status is one of an active status or an inactive status; routing (406), by the processing unit (110) via at least one load balancer (204- 1), the event request towards one of an event routing management unit with the active status; forwarding (408), by the event routing management unit, the event request to at least one microservice (MS) (202) via one of a load balancer of the plurality of load balancers (220-2, 220-3); and receiving (410), by one of the load balancer of the plurality of load balancers (204-2, 204-3), an event response from the at least one MS (202).
PCT/IN2024/052156 2023-10-31 2024-10-29 System and method for dynamic routing of an event request Pending WO2025094198A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321074275 2023-10-31
IN202321074275 2023-10-31

Publications (1)

Publication Number Publication Date
WO2025094198A1 true WO2025094198A1 (en) 2025-05-08

Family

ID=95582038

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/052156 Pending WO2025094198A1 (en) 2023-10-31 2024-10-29 System and method for dynamic routing of an event request

Country Status (1)

Country Link
WO (1) WO2025094198A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10110668B1 (en) * 2015-03-31 2018-10-23 Cisco Technology, Inc. System and method for monitoring service nodes
CN112087382B (en) * 2019-06-14 2022-03-29 华为技术有限公司 Service routing method and device
WO2023187574A1 (en) * 2022-03-31 2023-10-05 Jio Platforms Limited System and method for active standby policy based routing in a network


Similar Documents

Publication Publication Date Title
US11563636B1 (en) Dynamic management of network policies between microservices within a service mesh
US11782775B2 (en) Dynamic management of network policies between microservices within a service mesh
US12137100B1 (en) Communications system with dynamic or changing mapping correspondence between the respective set of domain names and the network addresses
US10757180B2 (en) Sender system status-aware load balancing
EP3637733A1 (en) Load balancing engine, client, distributed computing system, and load balancing method
US11546289B1 (en) User-configurable dynamic DNS mapping for virtual services
US20140143401A1 (en) Systems and Methods for Implementing Cloud Computing
US20150319050A1 (en) Method and apparatus for a fully automated engine that ensures performance, service availability, system availability, health monitoring with intelligent dynamic resource scheduling and live migration capabilities
US11522948B1 (en) Dynamic handling of service mesh loads using sliced replicas and cloud functions
US11687399B2 (en) Multi-controller declarative fault management and coordination for microservices
US11943284B2 (en) Overload protection for edge cluster using two tier reinforcement learning models
US8543680B2 (en) Migrating device management between object managers
AU2013201256B2 (en) Differentiated service-based graceful degradation layer
US12231509B2 (en) Apparatus and methods for dynamic scaling and orchestration
US10445136B1 (en) Randomized subrequest selection using request-specific nonce
Ranchal et al. RADical Strategies for engineering web-scale cloud solutions
US11595471B1 (en) Method and system for electing a master in a cloud based distributed system using a serverless framework
WO2025094198A1 (en) System and method for dynamic routing of an event request
US11777814B1 (en) User-configurable alerts for computing servers
WO2025088639A1 (en) System and method for routing event requests in network
WO2025017667A1 (en) Method and system for providing high availability for workflows in a fulfilment management system (fms)
WO2025057236A1 (en) Method and system for distributing a traffic load using an interface
WO2025069056A1 (en) Method and system for managing fault tolerance associated with an auditor service unit
WO2025094197A1 (en) System and method for performing operations by event routing manager
WO2025069083A1 (en) Method and system for distributing data traffic in a network

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24885179

Country of ref document: EP

Kind code of ref document: A1