
WO2025017704A1 - Method and system for processing an alarm - Google Patents


Info

Publication number
WO2025017704A1
WO2025017704A1 · PCT/IN2024/051277
Authority
WO
WIPO (PCT)
Prior art keywords
alarm
raise
alarms
category
clear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051277
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Sandeep Bisht
Rahul Mishra
Jyothi Durga Prasad Chillapalli
Dipankar DIVY
Boddu PRASAD
Pavithra Sekar
Jhoshi NARESH
Akash Verma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025017704A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06: Management of faults, events, alarms or notifications
    • H04L41/0604: Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
    • H04L41/0609: Filtering based on severity or priority
    • H04L41/0622: Filtering based on time
    • H04L41/0631: Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H04L41/0686: Additional information in the notification, e.g. enhancement of specific meta-data
    • H04L41/069: Management of faults, events, alarms or notifications using logs of notifications; post-processing of notifications
    • H04L41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H04L41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5061: Network service management characterised by the interaction between service providers and their network customers, e.g. customer relationship management
    • H04L41/5074: Handling of user complaints or trouble tickets

Definitions

  • the present invention relates to network management systems (NMS) and alarm processing, and more particularly to a system and a method for high Transaction Per Second (TPS) alarm processing and end-to-end lifecycle management.
  • a Network Management System is the sole mediator for moderating and managing fault, configuration, accounting, performance and security (FCAPS) data of network elements, and provides a single pane of glass for a Network Operations Centre.
  • the visualization and monitoring of use cases is performed by the NMS which is a common platform that can integrate with all network function elements/nodes.
  • the NMS supports different types of protocols by integrating with the devices (hardware and software). Once integration is done, all configuration data of the node is extracted. If a node is running successfully and a fault occurs in a network function element/node, the network function element/node generates an alarm which is sent to the NMS. However, fluctuating, repetitive and high surges of alarms increase the number of database operations for the alarms, which makes the system slow and may result in data loss.
  • One or more embodiments of the present disclosure provide a system and a method of processing an alarm.
  • a method of processing an alarm includes receiving, by one or more processors, a plurality of alarms from a plurality of network elements. Further, the method includes segregating, by the one or more processors, the plurality of alarms into a raise alarm category and a clear alarm category. Further, the method includes storing, by the one or more processors, the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache. Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream.
  • the method includes scheduling, by the one or more processors, to pull at least one batch from the stream to obtain corresponding data from the distributed cache. Further, the method includes consuming, by the one or more processors, the identifier associated with the clear alarm from the stream. Further, the method includes detecting, by the one or more processors, the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category. In an embodiment, the method includes deleting, by the one or more processors, the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm category in an archive section, when the corresponding raise alarm is determined. In another embodiment, the method includes producing, by the one or more processors, the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
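The lifecycle described above can be sketched in a few lines of Python. This is a minimal illustration, not the claimed implementation: an in-memory dictionary stands in for the distributed cache, a deque stands in for the identifier stream, and every class, field, and method name here is invented for explanation only.

```python
from collections import deque

class AlarmProcessor:
    """Illustrative sketch of the raise/clear alarm lifecycle (hypothetical)."""

    def __init__(self):
        self.raise_cache = {}   # stands in for the distributed cache (raise category)
        self.archive = []       # stands in for the archive section
        self.stream = deque()   # stands in for the identifier stream

    def receive(self, alarm):
        # Segregate into raise/clear: raise alarms go into the cache,
        # and every alarm is produced into the stream.
        if alarm["type"] == "raise":
            self.raise_cache[alarm["id"]] = alarm
        self.stream.append(alarm)

    def process_batch(self, batch_size=10):
        # Scheduled job: pull at most one batch from the stream.
        for _ in range(min(batch_size, len(self.stream))):
            alarm = self.stream.popleft()
            if alarm["type"] != "clear":
                continue
            # Detect the corresponding raise alarm in the cache.
            raised = self.raise_cache.pop(alarm["id"], None)
            if raised is not None:
                # Matching raise found: delete from raise category and archive.
                self.archive.append(raised)
            else:
                # No matching raise yet: re-produce the clear to retry later.
                self.stream.append(alarm)
```

A clear alarm whose matching raise has not yet arrived is simply re-produced into the stream, so a later batch retries the clearance, mirroring the retry behaviour described above.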
  • caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment for each alarm from the plurality of alarms.
  • the enriched alarm is checked for correlation eligibility, wherein the enriched alarm is sent to a correlation service followed by a Trouble Ticket (TT) eligibility check.
  • the method further includes checking, by the one or more processors, each alarm from the plurality of alarms for a planned maintenance event. Further, the method includes checking, by the one or more processors, each alarm from the plurality of alarms for TT eligibility when no planned maintenance event is found.
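The enrichment and eligibility checks above can be sketched as one pipeline step. This is an assumption-laden illustration: the `geo_lookup` cache, the severity-based TT rule, and all field names are hypothetical, standing in for the GI/POI enrichment, planned-maintenance check, and TT eligibility check described in the text.

```python
def handle_alarm(alarm, geo_lookup, maintenance_windows):
    # Enrich the alarm with Geographic Information (GI) and
    # Points of Interest (POI) from a cached lookup (illustrative).
    site = geo_lookup.get(alarm["source"], {})
    alarm["gi"] = site.get("gi")
    alarm["poi"] = site.get("poi")

    # A planned maintenance event on the source suppresses TT creation.
    if alarm["source"] in maintenance_windows:
        alarm["tt_eligible"] = False
        return alarm

    # Otherwise check TT eligibility (severity-based here, an assumption).
    alarm["tt_eligible"] = alarm.get("severity") in ("critical", "major")
    return alarm
```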
  • a system for processing an alarm includes a collector module and a fault manager module.
  • the collector module is configured to receive a plurality of alarms from a plurality of network elements.
  • the fault manager module is configured to segregate the plurality of alarms into a raise alarm category and a clear alarm category. Further, the fault manager module is configured to store the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache.
  • Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream. Further, the fault manager module is configured to schedule to pull at least one batch from the stream to obtain corresponding data from the distributed cache.
  • the fault manager module is configured to consume the identifier associated with the clear alarm from the stream. Further, the fault manager module is configured to detect the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category. In an embodiment, the fault manager module is further configured to delete the raise alarm corresponding to the clear alarm from the raise alarm category and store the raise alarm category in an archive section, when the corresponding raise alarm is determined. In another embodiment, the fault manager module is configured to produce the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
  • a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: receive a plurality of alarms from a plurality of network elements; segregate the plurality of alarms into a raise alarm category and a clear alarm category; store the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache, wherein each alarm from the plurality of alarms is provided with an identifier, and wherein the identifier of each alarm is produced in a stream; schedule to pull at least one batch from the stream to obtain corresponding data from the distributed cache; consume the identifier associated with the clear alarm from the stream; detect the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category; and perform one of: delete the raise alarm corresponding to the clear alarm from the raise alarm category and store the raise alarm category in an archive section, when the corresponding raise alarm is determined; and produce the raise alarm in the stream to retry clearance of the raise alarm, when the corresponding raise alarm is not determined.
  • FIG. 1 is an exemplary block diagram of an environment for processing an alarm, according to various embodiments of the present disclosure.
  • FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
  • FIG. 3 is an example schematic representation of the system of FIG. 1 in which various entities operations are explained, according to various embodiments of the present system.
  • FIG. 4 shows a block diagram of a system architecture illustrating a relation between various components/elements of the present system, in accordance with an exemplary embodiment of the present disclosure
  • FIG. 5 shows a flow chart illustrating a method for processing an alarm, according to various embodiments of the present disclosure.
  • FIG. 6 shows an example flow chart illustrating a method for processing the alarm, according to various embodiments of the present disclosure.
  • elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale.
  • the flow charts illustrate the method in terms of the most prominent steps involved, to help improve understanding of aspects of the present invention.
  • one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
  • first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections; it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer or section from another. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
  • terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • Various embodiments of the invention provide a method of processing an alarm.
  • the method includes receiving, by one or more processors, a plurality of alarms from a plurality of network elements. Further, the method includes segregating, by the one or more processors, the plurality of alarms into a raise alarm category and a clear alarm category. Further, the method includes storing, by the one or more processors, the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache.
  • Each alarm from the plurality of alarms is provided with an identifier, and wherein the identifier of each alarm is produced in a stream.
  • the method includes scheduling, by the one or more processors, to pull at least one batch from the stream to obtain corresponding data from the distributed cache. Further, the method includes consuming, by the one or more processors, the identifier associated with the clear alarm from the stream. Further, the method includes detecting, by the one or more processors, the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category. In an embodiment, the method includes deleting, by the one or more processors, the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm category in an archive section, when the corresponding raise alarm is determined. In another embodiment, the method includes producing, by the one or more processors, the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
  • Various embodiments of the invention provide a system and a method for high TPS alarm processing and end to end lifecycle management in network systems.
  • the present system with each of its elements/components offers a higher TPS for alarm processing and end-to-end lifecycle management.
  • the system and method of the present disclosure enable performing updates in a database (DB), thereby preventing duplicate storage of existing/previously stored documents and enabling efficient data storage. Since multiple occurrences are updated in the cache, database hits are reduced and the performance of the system is increased.
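The update-in-place behaviour described above, counting repeat occurrences in the cache rather than inserting a new document each time, can be illustrated with a small sketch. The occurrence counter, the composite key, and the write-through policy are all assumptions made for illustration, not details taken from the disclosure.

```python
class DedupCache:
    """Hypothetical sketch: update repeat alarms in cache instead of re-inserting."""

    def __init__(self, db_writer):
        self.cache = {}
        self.db_writer = db_writer  # called only on first occurrence
        self.db_writes = 0

    def upsert(self, alarm):
        key = (alarm["source"], alarm["code"])
        if key in self.cache:
            # Repeat occurrence: update the cached count, no DB insert.
            self.cache[key]["count"] += 1
        else:
            # First occurrence: cache it and write through to the DB once.
            entry = dict(alarm, count=1)
            self.cache[key] = entry
            self.db_writer(entry)
            self.db_writes += 1
```

With a design like this, a surge of identical alarms costs one database write plus cheap in-memory updates, which is the performance benefit the passage claims.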
  • FIG. 1 illustrates an exemplary block diagram of an environment (100) for processing an alarm in a communications network (106), according to various embodiments of the present disclosure.
  • the environment (100) comprises a plurality of user equipments (UEs) 102-1, 102-2, ..., 102-n.
  • the at least one UE (102-n) from the plurality of the UEs (102-1, 102-2, 102-n) is configured to connect to a system (108) via the communication network (106).
  • the plurality of UEs, or one or more UEs, is collectively labelled 102.
  • the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108).
  • the wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities.
  • the UEs may include, but are not limited to, any electrical, electronic, electro-mechanical or an equipment or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as camera, audio aid, a microphone, a keyboard, input units for receiving input from a user such as touch pad, touch enabled screen, electronic pen and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used.
  • the plurality of UEs (102) may include a fixed landline, a landline with assigned extension within the communication network (106).
  • the plurality of UEs (102) may comprise a memory such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory.
  • the memory might be configured or designed to store data.
  • the data may pertain to attributes and access rights specifically defined for the plurality of UEs (102).
  • the UE (102) may be accessed by the user, to receive the requests related to an order determined by the system (108).
  • the communication network (106) may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (WiMAX), 802.22, cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (RFID), infrared, laser, Near Field Magnetics, etc.
  • the system (108) is communicatively coupled to a server (104) via the communication network (106).
  • the server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like.
  • the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
  • the communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet- switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
  • the communication network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
  • the communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
  • the communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
  • a plurality of network elements (106a) can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106).
  • the base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell.
  • the base station enables transmission of radio signals to the UE or mobile transceiver.
  • Such a radio signal may comply with radio signals as, for example, standardized by a 3GPP or, generally, in line with one or more of the above listed systems.
  • a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
  • The term “3GPP” refers to the 3rd Generation Partnership Project, a collaborative project between a group of telecommunications associations with the initial goal of developing globally applicable specifications for Third Generation (3G) mobile systems.
  • the 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
  • the 3GPP specifications also provide hooks for non-radio access to the core network, and for networking with non-3GPP networks.
  • the system (108) may include one or more processors (202) coupled with a memory (204), wherein the memory (204) may store instructions which, when executed by the one or more processors (202), may cause the system (108) to execute requests in the communication network (106) or the server (104).
  • An exemplary representation of the system (108) for such purpose, in accordance with embodiments of the present disclosure, is shown in FIG. 2.
  • the system (108) may include one or more processor(s) (202).
  • the one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
  • the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (108).
  • the memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
  • the environment (100) further includes the system (108) communicably coupled to the remote server (104) and each UE of the plurality of UEs (102) via the communication network (106).
  • the remote server (104) is configured to execute the requests in the communication network (106).
  • the system (108) is adapted to be embedded within the remote server (104) or is embedded as the individual entity.
  • the system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations.
  • the system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for the workflow, which gets reflected in real time, independent of the complexity of the network.
  • the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104).
  • the enterprise provisioning server provides flexibility for enterprises, ecommerce entities, finance entities, etc., to update/create/delete information related to the requests in real time as per their business needs.
  • a user with administrator rights can access and retrieve the requests for the workflow and perform real-time analysis in the system (108).
  • the system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
  • system (108) may operate at various entities or single entity (for example include, but is not limited to, a vendor side, service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, ecommerce side, finance side, a defence facility side, or any other facility) that provides service.
  • entities or single entity for example include, but is not limited to, a vendor side, service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, ecommerce side, finance side, a defence facility side, or any other facility.
  • system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
  • FIG. 2 illustrates a block diagram of the system (108) provided for processing the alarm, according to one or more embodiments of the present invention.
  • the system (108) includes the one or more processors (202), the memory (204), an interface (206) (e.g., user interface or the like), a display (208), an input unit (210), and a centralized database (or database) (214).
  • the system (108) may comprise one or more processors (202).
  • the one or more processors (202), hereinafter referred to as the processor (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
  • the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
  • the information related to the request may be provided or stored in the memory (204) of the system (108).
  • the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204).
  • the memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
  • the memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
  • the memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
  • the system (108) may include an interface(s).
  • the interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like.
  • the interface(s) may facilitate communication for the system.
  • the interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the centralized database.
  • the processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
  • the information related to the requests may further be configured to render on the interface (206).
  • the interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art.
  • the interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology.
  • the display (208) may be integrated within the system (108) or connected externally.
  • the input unit(s) (210) may include, but is not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
  • the centralized database (214) may be communicably connected to the processor (202) and the memory (204).
  • the centralized database (214) may be configured to store and retrieve the request pertaining to features, services, or workflows of the system (108), access rights, attributes, an approved list, and authentication data provided by an administrator.
  • the remote server (104) may allow the system (108) to update/create/delete one or more parameters of their information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs.
  • the centralized database (214) may be outside the system (108) and communicated with through a wired medium or a wireless medium.
  • the processor (202) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202).
  • programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions.
  • the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202).
  • system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource.
  • the processor (202) may be implemented by an electronic circuitry.
  • the processor (202) includes a collector module (216) and a fault manager module (218).
  • the collector module (216) and the fault manager (218) module may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202).
  • the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions.
  • the memory (204) may store instructions that, when executed by the processing resource, implement the processor.
  • the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource.
  • the processor (202) may be implemented by the electronic circuitry.
  • the collector module (216) and the fault manager module (218) are communicably coupled to each other.
  • the collector module (216) receives a plurality of alarms from the plurality of network elements (106a).
  • the fault manager module (218) segregates the plurality of alarms into a raise alarm category and a clear alarm category.
  • the raise alarm category refers to a specific classification or type of the alarm that is generated when a fault or an issue is detected within a network infrastructure.
  • the fault or the issue can be, for example, but not limited to hardware failures (e.g., router or switch malfunction), software errors (e.g., protocol issues), connectivity problems (e.g., link failures), or performance degradation (e.g., high latency or packet loss) or the like.
  • the clear alarm category refers to a classification or type of action taken to resolve or acknowledge the alarm that has been raised due to a detected fault or issue within the network infrastructure. In other words, when the fault or the issue is detected within the network (106), the alarm is typically generated to alert the network operator or administrator. Once the fault or the issue has been addressed, the clear alarm category is used to mark the alarm as resolved or acknowledged.
  • the raise alarm category and the clear alarm category are essential for a network administrator to promptly identify and resolve the fault or issue, so as to maintain the reliability, availability, and performance of the network infrastructure.
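The segregation into raise and clear categories described above can be sketched as follows. This is an illustrative sketch only: the patent publishes no code, and the event labels and dict layout are assumptions borrowed from the collector description later in this document.

```python
# Hypothetical sketch: event labels "raiseAlarm"/"clearAlarm" and the alarm
# dict layout are assumptions, not taken from the patent.
def segregate(alarms):
    """Split a batch of alarms into the raise and clear categories."""
    raise_alarms, clear_alarms = [], []
    for alarm in alarms:
        if alarm["event"] == "raiseAlarm":
            raise_alarms.append(alarm)
        elif alarm["event"] == "clearAlarm":
            clear_alarms.append(alarm)
    return raise_alarms, clear_alarms

batch = [
    {"id": "a1", "event": "raiseAlarm", "fault": "link failure"},
    {"id": "a1", "event": "clearAlarm"},
    {"id": "a2", "event": "raiseAlarm", "fault": "high latency"},
]
raised, cleared = segregate(batch)
```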
  • the fault manager module (218) stores the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache.
  • Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream.
  • the fault manager module (218) schedules to pull at least one batch from the stream to obtain corresponding data from the distributed cache.
  • the corresponding data is used to fetch a specific alarm instead of getting all the alarms or the plurality of alarms stored in the fault manager module (218).
  • the corresponding data can be, for example, but not limited to a high latency, a packet loss, protocol issues, link error or the like.
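The scheduled batch pull might be sketched as below, assuming a plain dict stands in for the distributed cache and a deque for the stream; only alarm identifiers travel on the stream, and each identifier is resolved to its full document on demand.

```python
from collections import deque

# Minimal sketch under stated assumptions: dict = distributed cache,
# deque = stream; names are illustrative.
cache = {
    "a1": {"fault": "packet loss"},
    "a2": {"fault": "link error"},
    "a3": {"fault": "high latency"},
}
stream = deque(["a1", "a2", "a3"])

def pull_batch(stream, cache, batch_size):
    """Pull up to batch_size identifiers and fetch only those alarms."""
    batch = []
    while stream and len(batch) < batch_size:
        alarm_id = stream.popleft()
        data = cache.get(alarm_id)   # fetch the specific alarm, not all alarms
        if data is not None:
            batch.append((alarm_id, data))
    return batch
```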
  • the fault manager module (218) consumes the identifier associated with the clear alarm from the stream. Consuming the identifier means the fault manager module (218) receives data streams containing information about the network alarms. Upon detecting that the alarm has been resolved or acknowledged (cleared), the fault manager module (218) identifies and processes the unique identifier associated with that alarm. This action enables the NMS to manage and maintain an accurate record of network events, ensuring efficient fault handling and overall network performance in alignment with a Fault Management (FM) process.
  • the fault manager module (218) detects the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category. In an embodiment, further, the fault manager module (218) deletes the raise alarm corresponding to the clear alarm from the raise alarm category and stores the raise alarm category in an archive section, when the corresponding raise alarm is determined. In an example, upon detecting the fault (e.g., high packet loss rates or interface errors or the like) in the plurality of network elements (106a), the fault manager module (218) generates the alarm. This alarm is categorized within a fault management domain. In another embodiment, the fault manager module (218) produces the raise alarm in the stream to retry a clearance of the raise alarm when the corresponding raise alarm is not determined.
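The clear-alarm path just described (archive the matched raise, otherwise produce a retry) might be sketched as below; the store names and return labels are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: dicts stand in for the active raise-alarm store and the
# archive section, and a list stands in for the retry stream.
def process_clear(clear_id, active_raises, archive, retry_stream):
    """Handle one consumed clear-alarm identifier."""
    raise_alarm = active_raises.pop(clear_id, None)
    if raise_alarm is not None:
        archive[clear_id] = raise_alarm   # matched raise moves to the archive
        return "archived"
    retry_stream.append(clear_id)         # no matching raise yet: retry later
    return "retry"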
  • caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment for each alarm from the plurality of alarms.
  • the enriched alarm is checked for correlation eligibility, wherein the enriched alarm is sent to a correlation service followed by a Trouble Ticket (TT) eligibility.
  • the fault manager module (218) checks each alarm from the plurality of alarms for a planned maintenance event. Further, the fault manager module (218) checks each alarm from the plurality of alarms for TT eligibility, when the planned maintenance event is unavailable.
  • FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which various entities operations are explained, according to various embodiments of the present system. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration and should nowhere be construed as limited to the scope of the present disclosure.
  • the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108).
  • the one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to execute the requests in the communication network (106).
  • the one or more processors (202) is configured to transmit a response content related to the request to the UE (102-1). More specifically, the one or more processors (202) of the system (108) is configured to transmit the response content to at least one of the UE (102-1).
  • the kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106).
  • the resources include a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
  • the system (108) includes the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210).
  • the operations and functions of the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210) are already explained in FIG. 2.
  • the processor (202) includes the transceiver unit (216) and the detecting unit (218). The operations and functions of the transceiver unit (216) and the detecting unit (218) are already explained in FIG. 2.
  • FIG. 4 shows a block diagram of the system architecture (400) illustrating a relation between various components/elements of the present system, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 4 particularly discloses the flow of alarms received by the collector module (216) into the stream, which are collected and segregated by the fault manager module (218) into a clear fault manager (402) and a raise fault manager (406).
  • the system architecture (400) discloses a location of a distributed IO Cache (408) and its relation with the fault manager module (218), the raise fault manager (406) and the clear fault manager (402).
  • the system architecture (400) further discloses a FM callback module (410), an enrichment engine (EE) (412), a correlation engine (CE) (414), and a TT service module (416).
  • the enrichment engine (412), the Correlation Engine (CE) (414) and the FM callback module (410) are configured to communicate with each other. Multiple alarms are received from various nodes (e.g., fault manager module (218), the raise fault manager (406) and the clear fault manager (402)).
  • the clear fault manager (402) may include a clear fault manager retry mechanism (not shown).
  • the alarms due to the system fault/failure are raised to a Network Management System (NMS) (not shown).
  • a reception of the alarms is located at the NMS.
  • the reception of the alarms is located at the collector module (216) of the NMS.
  • the collector module (216) collects these alarms and produces a stream.
  • the fault manager module (218) consumes alarms from the stream continuously, to insert in the distributed cache (418).
  • the distributed cache (418) is a high-IO, high-TPS distributed cache which consists of a multi-node architecture and an in-memory database that persists on disk, providing reliability and availability.
  • the distributed IO cache (408) updates its occurrence and adds a timestamp to the timestamp array, and produces the IDs to the stream to be consumed by the clear fault manager (402) and the raise fault manager (406).
  • only the “IDs” of said data/alarms are streamed towards the raise fault manager (406)/clear fault manager (402) instead of the whole data.
  • the raise fault manager (406) and clear fault manager (402) are scheduled to pull batches from the stream and to get corresponding data from the distributed IO cache (408). The corresponding data is used to fetch the specific alarm instead of getting all the alarms or the plurality of alarms stored in the fault manager module (218).
  • a non-blocking retry mechanism (not shown) frees application threads from going into a time wait state so as to enable higher performance using the stream.
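One way to realize such a non-blocking retry, under assumed semantics, is a delay queue keyed by due time in place of timed thread waits; the function names and timing parameters below are illustrative, not from the patent.

```python
import heapq

# Sketch: failed clearances are pushed onto a heap (standing in for the retry
# stream) keyed by their next due time, so worker threads never sleep in a
# time-wait state and stay free to process other alarms.
retry_queue = []  # heap of (due_time, alarm_id)

def schedule_retry(alarm_id, delay_s, now):
    heapq.heappush(retry_queue, (now + delay_s, alarm_id))

def due_retries(now):
    """Pop every retry whose due time has passed; never blocks."""
    due = []
    while retry_queue and retry_queue[0][0] <= now:
        due.append(heapq.heappop(retry_queue)[1])
    return due
```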
  • caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment instead of traversing through the centralized database (214) before inserting in the centralized database (214).
  • the GI and POI enrichment refer to the process of adding additional information to the GI and POI data such as, but not limited to, geographical coordinates (latitude and longitude) along with additional information like name, address, category, and sometimes ratings or reviews.
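Cache-backed enrichment of this kind might look like the sketch below; the lookup table, field names, and coordinates are entirely hypothetical stand-ins for the cached GI/POI data, and no round trip to the centralized database (214) is made per alarm.

```python
# Hypothetical GI/POI lookup cache; device names and coordinates are invented.
gi_poi_cache = {
    "router-01": {"lat": 19.076, "lon": 72.877, "poi": "core site A"},
}

def enrich(alarm, location_cache):
    """Attach cached GI coordinates and POI details to an alarm, if known."""
    location = location_cache.get(alarm["device"])
    if location is None:
        return alarm                      # no cached location: pass through
    return {**alarm,
            "gi": {"lat": location["lat"], "lon": location["lon"]},
            "poi": location["poi"]}
```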
  • the blocking communication between microservices is reduced, so application threads are not waiting for another service to complete the request. Since multiple occurrences are being updated in the centralized database (214), the number of alarm hits is reduced and the performance of the system is increased.
  • reducing the alarm hits refers to the process of minimizing the frequency and impact of alarms generated within the communication network (106).
  • the raise operation is initiated, wherein the alarm is stored or updated in the centralized database (214).
  • the documents may include (for example) an alarm severity, an alarm description, an affected device, a timestamp, event logs, performance metrics, and historical performance data and trends.
  • the alarm severity indicates an impact or seriousness of the fault (e.g., critical, major, minor or the like).
  • the alarm description provides details about the nature of the fault (e.g., hardware failure, link down or the like).
  • the affected device specifies the plurality of network elements (106a) where the fault has been detected.
  • the timestamp records the exact time when the fault was detected or when the alarm was raised.
  • the event logs are sequential records of events that preceded the fault, helping to trace the sequence of events leading to the issue.
  • the performance metrics indicate bandwidth utilization and packet loss rates at the time of the fault occurrence.
  • the historical performance data and trends help in comparing current network behaviour with past patterns, aiding in diagnosing recurring issues or identifying anomalies.
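Taken together, the fields above could form an alarm document like the following; every field name and value here is illustrative, not taken from the patent.

```python
# Hypothetical alarm document carrying the fields listed above.
alarm_doc = {
    "severity": "critical",                       # impact of the fault
    "description": "link down on uplink port",    # nature of the fault
    "affected_device": "switch-07",               # network element concerned
    "timestamp": "2024-07-15T10:42:00Z",          # when the fault was detected
    "event_logs": ["eth0 flapped", "BGP session dropped"],
    "performance_metrics": {"bandwidth_util_pct": 97.5, "packet_loss_pct": 12.0},
    "historical_trend": "third link-down on this port this week",
}
```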
  • the raise operation may involve enrichment, correlation of alarms prior to trouble ticket generation.
  • the raise fault manager (406) consumes raise alarms’ IDs from the stream and then gets alarms from the distributed IO cache (408) using those IDs. These alarms are processed, metadata and enrichment are updated and inserted in the centralized database (214). These alarms are also processed for planned events, AI-based correlation and trouble ticketing.
  • the AI-based correlation refers to the use of Artificial Intelligence (AI) techniques to intelligently correlate and manage alarms that are anticipated or expected due to planned events within the network infrastructure.
  • the clear fault manager (402) consumes clear alarms’ IDs from the stream and checks them in the centralized database (214) for their corresponding raise alarms. If the corresponding raise alarm is found, it is deleted from an active section and inserted in the archived section. If it is not found, the clear alarm is produced in the stream for a retry fault manager (FM) to retry the clearance.
  • the clear retry fault manager (404) consumes the retry alarm data from the stream and again performs the clearance process as mentioned above. If the raise is not found in the centralized database (214), then it will increase the retry count and send the alarm back to the retry stream. This process of retry will happen until the alarm is cleared or the retry threshold is exhausted.
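The retry loop with a retry count and threshold could be sketched as below; the default threshold, state labels, and store names are assumptions for illustration.

```python
# Hypothetical sketch of one pass of the clear retry fault manager (404):
# either the matching raise is found and cleared, the clear alarm is requeued
# with an incremented retry count, or the retry threshold is exhausted.
def retry_clearance(clear_alarm, active_raises, archive, retry_stream,
                    max_retries=3):
    raise_alarm = active_raises.pop(clear_alarm["id"], None)
    if raise_alarm is not None:
        archive[clear_alarm["id"]] = raise_alarm
        return "cleared"
    clear_alarm["retry_count"] = clear_alarm.get("retry_count", 0) + 1
    if clear_alarm["retry_count"] < max_retries:
        retry_stream.append(clear_alarm)   # send back to the retry stream
        return "requeued"
    return "exhausted"                     # retry threshold exhausted
```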
  • the collector module (216) is responsible to collect all the FCAPS data from the plurality of network elements (106a).
  • the faults (or alarms) are sent to the collector module (216) via various protocols (Simple Network Management Protocol (SNMP), REST, SOAP, Kafka, etc.) by the network elements (106a).
  • the alarm / data is collected and produced in the stream with their alarm types as events ‘raise Alarm’ and ‘clear Alarm’.
  • the fault manager module (218) consumes the alarms from the stream and segregates them into the raise fault manager (406) and the clear fault manager (402).
  • an auditor (not shown) runs at a longer interval to find stranded alarms present in the distributed IO cache (408) and process them.
  • the auditor audits both raise alarms and clear alarms and forwards them towards the raise and clear processes accordingly. This is done as a failsafe to avoid any loss of alarms in the distributed IO cache (408).
  • the distributed IO cache (408) is located on top of the fault manager module (218), which helps to store the data until a future requirement arises.
  • the stored data within the distributed IO cache (408) may also be updated in case of recurrence attributes. Therefore, if an alarm is not present in the distributed IO cache (408), it is inserted; if it is present, then the latest alarm is updated with the addition of the alarm timestamp in the timestamp array and an increment of the occurrence count.
  • These IDs/alarms are streamed toward the raise/clear stream based on certain conditions dependent on time or transaction occurrence. Particularly, the conditions are whether the ID/alarm is occurring for the first time, at every nth occurrence, or whether the alarm last occurred a configurable time earlier.
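The insert-or-update rule together with the streaming conditions can be sketched as follows; the batch parameter n and the quiet period are assumed configurables, and the return value models the decision to stream the ID.

```python
# Sketch under stated assumptions: dict = distributed IO cache; the function
# returns True when the alarm's ID should be produced to the stream.
def upsert_alarm(cache, alarm_id, alarm, ts, n=5, quiet_period_s=300):
    """Store or update an alarm; return True when its ID should be streamed."""
    entry = cache.get(alarm_id)
    if entry is None:
        cache[alarm_id] = {"alarm": alarm, "timestamps": [ts], "occurrences": 1}
        return True                                # first occurrence
    gap = ts - entry["timestamps"][-1]
    entry["alarm"] = alarm                         # keep the latest alarm body
    entry["timestamps"].append(ts)
    entry["occurrences"] += 1
    # stream again on every nth occurrence or after a configurable quiet gap
    return entry["occurrences"] % n == 0 or gap > quiet_period_s
```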
  • the fault manager module (218) also functions to pump selective data into the stream: while the larger document is stored in the distributed IO cache (408), the document IDs of the alarm are stored in the stream to be received by the raise fault manager (406) and the clear fault manager (402).
  • the document ID may include (for example) an alarm severity ID, an alarm description ID, an affected device ID, a timestamp ID, event log ID, performance metrics ID, and historical performance data and trends ID. Once these IDs are received, the corresponding documents are obtained from the distributed IO cache (408) by the raise fault manager (406) and the clear fault manager (402) using the IDs.
  • the raise operation is initiated, wherein the alarm is stored in the centralized database (214). In case the alarm has been received before, its attributes are simply updated in the centralized database (214) and the alarm is stored with the updated information, thereby preventing double/repetitive storage of existing/prior stored documents to enable efficient data storage.
  • the attributes associated with the alarms provide essential information that helps in identifying, categorizing, prioritizing, and resolving network faults efficiently.
  • the attribute can be, for example, but not limited to a severity, a timestamp, an alarm type, an affected object, an acknowledgment status, clearance status or the like. The severity indicates the impact or seriousness of the alarm on network operations.
  • the timestamp records the exact time when the alarm was generated or detected.
  • the alarm type describes the nature or category of the alarm (e.g., hardware failure, link down, configuration error or the like).
  • the affected object identifies a specific network element from the plurality of network elements (106a) or an object (e.g., device, interface) impacted by the alarm.
  • the acknowledgment status indicates whether the alarm has been acknowledged by an operator or not.
  • the clearance status indicates whether the alarm has been cleared (resolved) or not.
  • the raise operation additionally involves two to three levels of traversal, including enrichment and correlation performed by the Enrichment Engine (412) and the Correlation Engine (414).
  • enrichment-related attributes are retrieved based on profiles and configuration; based on the order of the level of traversal, the node data is retrieved and added to the alarm body, thereby enriching the alarm.
  • the profiles refer to predefined configurations or settings that determine how alarms are managed, processed, and presented within the system (108). In other words, the profiles refer to the sets of rules, configurations, or templates that govern the behaviour and handling of alarms within the system (108). If logical enrichment for the alarm is available, that too is performed.
  • the logical enrichment for the alarm refers to the process of enhancing the basic alarm data with additional contextual information (e.g., accurate time, accurate location or the like) and intelligence to facilitate better analysis, diagnosis, and resolution of network issues. This happens if the distributed IO cache (408) is enabled before the alarm insertion. If the alarm is normally inserted, then a parallel request is sent to the EE (412) based on the profile and eligibility of the alarm. These enriched alarms are checked for correlation eligibility; if the alarm is ineligible for correlation, the alarm is thereafter sent for the TT request. The correlation eligibility check intelligently correlates and manages the alarms that are anticipated or expected due to planned events within the network infrastructure.
  • the correlation engine (414) performs correlation based on various types of correlations, which may be domain/inter-domain based; normal grouping where parent-child identification is not successful; pre-determined parameters of correlation; or time-dependent parameters; in order to group the related alarms.
  • the correlation-related attributes are also added to the alarms, and a unique ID is given to the correlated groups and sent for the TT request to the TT processor (TTP), wherein the alarms are evaluated for the TT request and, if found eligible, a TT is raised accordingly.
  • these alarms are checked for Outage alarms, if outage (Planned Maintenance) is absent, then the alarms are checked for TT eligibility.
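The ordering above (planned maintenance check before trouble-ticket eligibility) might be expressed as a small routing function; the severity-based eligibility rule and the device-set outage check are assumptions for illustration.

```python
# Hypothetical routing check: a planned maintenance (outage) match suppresses
# the alarm, otherwise the alarm is tested for trouble-ticket eligibility.
def route_alarm(alarm, planned_outage_devices):
    if alarm["device"] in planned_outage_devices:
        return "suppressed"        # covered by a planned maintenance event
    if alarm["severity"] in ("critical", "major"):
        return "raise_tt"          # eligible for a trouble ticket
    return "monitor_only"
```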
  • the clear fault manager (402) obtains data from the stream as well as the distributed IO cache (408) to detect the raise, as every clear must have its corresponding raise. On detection of said raise, it shall check that the raise timestamp is less than the clearance timestamp, delete the raise alarm from the active section, add clearance metadata to the alarm, and store it in the archived section.
  • the clearance metadata refers to a specific data or information associated with the process of resolving or clearing the alarm once the underlying issue has been addressed in the network infrastructure.
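The timestamp comparison and clearance metadata described above can be sketched as follows; the store and field names are assumptions.

```python
# Hypothetical sketch: a raise is archived only when its timestamp precedes
# the clearance timestamp; clearance metadata is attached before archiving.
def clear_with_metadata(active_raises, archive, clear):
    raise_alarm = active_raises.get(clear["id"])
    if raise_alarm is None or raise_alarm["timestamp"] >= clear["timestamp"]:
        return False                       # no valid matching raise found
    del active_raises[clear["id"]]
    raise_alarm["clearance"] = {"cleared_at": clear["timestamp"]}
    archive[clear["id"]] = raise_alarm     # archived with clearance metadata
    return True
```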
  • FIG. 5 is a flow chart (500) illustrating a method for processing an alarm, according to various embodiments of the present system.
  • the method includes receiving the plurality of alarms from the plurality of network elements (106a).
  • the method includes segregating the plurality of alarms into the raise alarm category and the clear alarm category.
  • the method includes storing the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in the distributed IO cache (408).
  • Each alarm from the plurality of alarms is provided with the identifier, where the identifier of each alarm is produced in a stream.
  • the method includes scheduling to pull at least one batch from the stream to obtain corresponding data from the distributed IO cache (408).
  • the method includes consuming the identifier associated with the clear alarm from the stream.
  • the method includes detecting the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category.
  • the method includes deleting the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm category in an archive section, when the corresponding raise alarm is determined.
  • the method includes producing the raise alarm in the stream to retry the clearance of the raise alarm when the corresponding raise alarm is not determined.
  • FIG. 6 shows an example flow chart illustrating a method for processing the alarm, according to various embodiments of the present disclosure.
  • the collector module (216) receives alarms and streams towards the fault manager module (218).
  • the fault manager module (218) stores them in the distributed IO cache (408) while the “IDs” of said alarms are produced in stream and streamed towards the raise fault manager (406)/the clear fault manager (402) instead of whole data.
  • the raise fault manager (406) and the clear fault manager (402) are scheduled to pull batches from the stream and to get corresponding data from the distributed IO cache (408).
  • the clear fault manager (402) consumes the clear alarms’ IDs and checks for corresponding raise alarms.
  • the raise alarm is deleted from the active section and inserted in the archived section. Additionally, the non-blocking retry mechanism frees the application threads from going into the time wait state so as to enable higher performance using the stream. If the raise is not found, then they are produced in the stream for retry FM to retry the clearance.
  • the flow chart further shows, for each alarm, caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment instead of traversing through the DB (214) before inserting into the DB (214).
  • the raise operation is initiated, wherein the alarm is stored or updated in the DB (214).
  • the raise operation may involve enrichment, correlation of alarms prior to trouble ticket generation.
  • alarms are checked for outage alarms. If the outage (Planned Maintenance) is absent, then they shall be checked for TT eligibility.
  • the present system (108) with each of its elements/components offers a higher TPS for alarm processing and end-to-end lifecycle management.
  • the system (108) and method of the present disclosure enables performing updates in the DB, therefore preventing double/ repetitive storage of existing / prior stored documents to enable efficient data storage. Since multiple occurrences are being updated in the cache, hits are reduced and performance of the system is increased.
  • A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.

Abstract

The present disclosure relates to a method of processing an alarm by one or more processors (202). The method includes scheduling to pull at least one batch from a stream to obtain corresponding data from a distributed cache. Further, the method includes consuming an identifier associated with a clear alarm from the stream. Further, the method includes detecting the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category. In an embodiment, the method includes deleting the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm category in an archive section, when the corresponding raise alarm is determined. In another embodiment, the method includes producing the raise alarm in the stream to retry a clearance of the raise alarm when the corresponding raise alarm is not determined.

Description

METHOD AND SYSTEM FOR PROCESSING AN ALARM
FIELD OF THE INVENTION
[0001] The present invention relates to network management systems (NMS) and alarm processing, and more particularly relates to a system and a method for high Transaction Per Second (TPS) alarm processing and end-to-end lifecycle management.
BACKGROUND OF THE INVENTION
[0002] Efficient network management is vital for telecom operators to ensure continuous service delivery and maintain operational continuity. This is a critical piece in an overall telecommunication network management solution which enables visualization and management of use cases. A Network Management System (NMS) is a sole mediator for moderating and managing the fault, configuration, accounting, performance and security (FCAPS) data of network elements and provides a single pane of glass for a Network Operations Centre.
[0003] The visualization and monitoring of use cases is performed by the NMS, which is a common platform that can integrate with all network function elements/nodes. The NMS supports different types of protocols by integrating with the devices (hardware and software). Once integration is done, all configuration data of the node is extracted; if a node is running and there is a fault in the network function element/node, the network function element/node generates an alarm which is sent to the NMS. However, scenarios with fluctuating, repetitive and high surges of alarms increase the number of database operations for the alarms, making the system slow and possibly resulting in data loss.
[0004] Therefore, an efficient system and method are required for processing, monitoring and managing high TPS (Transaction Per Second) alarm processing and end-to-end lifecycle management.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a system and a method of processing an alarm.
[0006] In one aspect of the present invention, a method of processing an alarm is disclosed. The method includes receiving, by one or more processors, a plurality of alarms from a plurality of network elements. Further, the method includes segregating, by the one or more processors, the plurality of alarms into a raise alarm category and a clear alarm category. Further, the method includes storing, by the one or more processors, the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache. Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream. Further, the method includes scheduling, by the one or more processors, to pull at least one batch from the stream to obtain corresponding data from the distributed cache. Further, the method includes consuming, by the one or more processors, the identifier associated with the clear alarm from the stream. Further, the method includes detecting, by the one or more processors, the consumed identifier associated with the clear alarm in the distributed cache to determine a corresponding raise alarm in the raise alarm category. In an embodiment, the method includes deleting, by the one or more processors, the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm in an archive section, when the corresponding raise alarm is determined. In another embodiment, the method includes producing, by the one or more processors, the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
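By way of a non-limiting illustration, the segregation, streaming and clear-alarm reconciliation steps described in this aspect can be sketched as follows. An in-memory dictionary and a deque stand in for the distributed cache and the stream (a real deployment might use components such as Redis and Kafka, which this disclosure does not mandate); all field names and severity values below are assumptions chosen purely for the sketch.

```python
from collections import deque

# Hypothetical stand-ins for the distributed cache and the stream.
cache = {"raise": {}, "archive": {}}   # raise alarm category and archive section
stream = deque()                       # identifiers produced into the stream

def ingest(alarm):
    """Segregate an incoming alarm and produce its identifier into the stream."""
    category = "clear" if alarm["severity"] == "CLEARED" else "raise"
    if category == "raise":
        cache["raise"][alarm["id"]] = alarm
    stream.append({"id": alarm["id"], "category": category})

def process_batch(batch_size=2):
    """Pull a batch of identifiers and reconcile clear alarms against raised ones."""
    for _ in range(min(batch_size, len(stream))):
        event = stream.popleft()
        if event["category"] != "clear":
            continue
        raised = cache["raise"].pop(event["id"], None)
        if raised is not None:
            cache["archive"][event["id"]] = raised   # raise alarm found: archive it
        else:
            stream.append(event)                     # not found: retry clearance later

ingest({"id": "NE1-linkDown", "severity": "CRITICAL"})
ingest({"id": "NE1-linkDown", "severity": "CLEARED"})
process_batch()
```

Re-producing an unmatched clear event back into the stream models the retry of clearance when the corresponding raise alarm has not yet been determined.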
[0007] In an embodiment, caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment for each alarm from the plurality of alarms.
[0008] In an embodiment, the enriched alarm is checked for correlation eligibility, wherein the enriched alarm is sent to a correlation service followed by a Trouble Ticket (TT) eligibility check.
[0009] In an embodiment, the method further includes checking, by the one or more processors, each alarm from the plurality of alarms for a planned maintenance event. Further, the method includes checking, by the one or more processors, each alarm from the plurality of alarms for TT eligibility, when the planned maintenance event is unavailable.
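A minimal illustrative sketch of this gating logic follows; the maintenance window set and the severity-based eligibility rule are assumptions for illustration only and not part of the claimed method.

```python
# Hypothetical set of network elements currently under a planned maintenance event.
PLANNED_MAINTENANCE = {"NE2"}

def is_tt_eligible(alarm):
    """An alarm is checked for TT eligibility only when no planned
    maintenance event covers its source element."""
    if alarm["source"] in PLANNED_MAINTENANCE:
        return False                   # suppressed: expected during maintenance
    # Illustrative eligibility rule: only high-severity alarms raise a TT.
    return alarm["severity"] in {"CRITICAL", "MAJOR"}

print(is_tt_eligible({"source": "NE1", "severity": "CRITICAL"}))  # True
print(is_tt_eligible({"source": "NE2", "severity": "CRITICAL"}))  # False
```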
[0010] In another aspect of the present invention, a system for processing an alarm is disclosed. The system includes a collector module and a fault manager module. The collector module is configured to receive a plurality of alarms from a plurality of network elements. The fault manager module is configured to segregate the plurality of alarms into a raise alarm category and a clear alarm category. Further, the fault manager module is configured to store the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache. Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream. Further, the fault manager module is configured to schedule to pull at least one batch from the stream to obtain corresponding data from the distributed cache. Further, the fault manager module is configured to consume the identifier associated with the clear alarm from the stream. Further, the fault manager module is configured to detect the consumed identifier associated with the clear alarm in the distributed cache to determine a corresponding raise alarm in the raise alarm category. In an embodiment, the fault manager module is further configured to delete the raise alarm corresponding to the clear alarm from the raise alarm category and store the raise alarm in an archive section, when the corresponding raise alarm is determined. In another embodiment, the fault manager module is configured to produce the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
[0011] In another aspect of the present invention, a non-transitory computer-readable medium is disclosed having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: receive a plurality of alarms from a plurality of network elements; segregate the plurality of alarms into a raise alarm category and a clear alarm category; store the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache, wherein each alarm from the plurality of alarms is provided with an identifier, and wherein the identifier of each alarm is produced in a stream; schedule to pull at least one batch from the stream to obtain corresponding data from the distributed cache; consume the identifier associated with the clear alarm from the stream; detect the consumed identifier associated with the clear alarm in the distributed cache to determine a corresponding raise alarm in the raise alarm category; and perform one of: deleting the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm in an archive section, when the corresponding raise alarm is determined, and producing the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
[0012] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0014] FIG. 1 is an exemplary block diagram of an environment for processing an alarm, according to various embodiments of the present disclosure.
[0015] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
[0016] FIG. 3 is an example schematic representation of the system of FIG. 1 in which operations of various entities are explained, according to various embodiments of the present disclosure.
[0017] FIG. 4 shows a block diagram of a system architecture illustrating a relation between various components/elements of the present system, in accordance with an exemplary embodiment of the present disclosure.
[0018] FIG. 5 shows a flow chart illustrating a method for processing an alarm, according to various embodiments of the present disclosure.
[0019] FIG. 6 shows an example flow chart illustrating a method for processing the alarm, according to various embodiments of the present disclosure.
[0020] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as being schematic representations, and elements are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software or a combination thereof.
[0026] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0027] Further, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0028] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0029] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0030] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of’ include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0031] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0032] Various embodiments of the invention provide a method of processing an alarm. The method includes receiving, by one or more processors, a plurality of alarms from a plurality of network elements. Further, the method includes segregating, by the one or more processors, the plurality of alarms into a raise alarm category and a clear alarm category. Further, the method includes storing, by the one or more processors, the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache. Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream. Further, the method includes scheduling, by the one or more processors, to pull at least one batch from the stream to obtain corresponding data from the distributed cache. Further, the method includes consuming, by the one or more processors, the identifier associated with the clear alarm from the stream. Further, the method includes detecting, by the one or more processors, the consumed identifier associated with the clear alarm in the distributed cache to determine a corresponding raise alarm in the raise alarm category. In an embodiment, the method includes deleting, by the one or more processors, the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm in an archive section, when the corresponding raise alarm is determined. In another embodiment, the method includes producing, by the one or more processors, the raise alarm in the stream to retry clearance of the raise alarm when the corresponding raise alarm is not determined.
[0033] Various embodiments of the invention provide a system and a method for high TPS alarm processing and end-to-end lifecycle management in network systems. The present system, with each of its elements/components, offers a higher TPS for alarm processing and end-to-end lifecycle management. The system and method of the present disclosure perform updates in a database (DB), thereby preventing double/repetitive storage of existing/previously stored documents and enabling efficient data storage. Since multiple occurrences of an alarm are updated in the cache, database hits are reduced and the performance of the system is increased.
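The cache-update idea in the preceding paragraph can be sketched as follows. The record fields (occurrence count, last-seen time) and the single-write-per-alarm policy are illustrative assumptions rather than the claimed implementation.

```python
import time

cache = {}        # stands in for the distributed cache keyed by alarm identifier
db_writes = 0     # counts inserts that would actually hit the database

def on_alarm(alarm_id):
    """Repeat occurrences update the cached record in place instead of
    inserting a duplicate database document each time."""
    global db_writes
    if alarm_id in cache:
        cache[alarm_id]["count"] += 1          # occurrence updated in the cache
        cache[alarm_id]["last_seen"] = time.time()
    else:
        cache[alarm_id] = {"count": 1, "last_seen": time.time()}
        db_writes += 1                         # only the first occurrence is stored

for _ in range(5):
    on_alarm("NE3-highCPU")                    # one stored document, five occurrences
```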
[0034] FIG. 1 illustrates an exemplary block diagram of an environment (100) for processing an alarm in a communication network (106), according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) 102-1, 102-2, ..., 102-n. At least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via the communication network (106). Hereafter, the plurality of UEs or one or more UEs are labelled 102.
[0035] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be wireless devices or communication devices that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities. In an embodiment, the UEs may include, but are not limited to, any electrical, electronic, electro-mechanical equipment or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, audio aid, a microphone, a keyboard, and input units for receiving input from a user such as a touch pad, touch enabled screen, electronic pen and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, or a landline with an assigned extension within the communication network (106).
[0036] The plurality of UEs (102) may comprise a memory such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory. In one implementation, the memory might be configured or designed to store data. The data may pertain to attributes and access rights specifically defined for the plurality of UEs (102). The UE (102) may be accessed by the user, to receive the requests related to an order determined by the system (108). The communication network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0037] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
[0038] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0039] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0040] A plurality of network elements (106a) can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3GPP or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
[0041] 3GPP: The term “3GPP” is a 3rd Generation Partnership Project and is a collaborative project between a group of telecommunications associations with the initial goal of developing globally applicable specifications for Third Generation (3G) mobile systems. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications. The 3GPP specifications also provide hooks for non-radio access to the core network, and for networking with non-3GPP networks.
[0042] The system (108) may include one or more processors (202) coupled with a memory (204), wherein the memory (204) may store instructions which when executed by the one or more processors (202) may cause the system (108) to execute requests in the communication network (106) or the server (104). An exemplary representation of the system (108) for such purpose, in accordance with embodiments of the present disclosure, is shown in FIG. 2 as the system (108). In an embodiment, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
[0043] The environment (100) further includes the system (108) communicably coupled to the remote server (104) and each UE of the plurality of UEs (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0044] The system (108) is adapted to be embedded within the remote server (104) or is embedded as the individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to access to update/create/delete one or more parameters of their relationship between the requests for the workflow, which gets reflected in realtime independent of the complexity of network.
[0045] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprises, ecommerce entities, finance entities, etc., to update/create/delete information related to the requests in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the workflow and perform real-time analysis in the system (108).
[0046] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an implementation, system (108) may operate at various entities or single entity (for example include, but is not limited to, a vendor side, service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, ecommerce side, finance side, a defence facility side, or any other facility) that provides service.
[0047] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0048] FIG. 2 illustrates a block diagram of the system (108) provided for processing the alarm, according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), an interface (206) (e.g., user interface or the like), a display (208), an input unit (210), and a centralized database (or database) (214). Further the system (108) may comprise one or more processors (202). The one or more processors (202), hereinafter referred to as the processor (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0049] The information related to the request may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0050] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random- Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-only Memory (EPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the centralized database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0051] The information related to the requests may further be configured to render on the interface (206). The interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input unit(s) (210) may include, but are not limited to, keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, magnetic strip reader, optical scanner, etc.
[0052] The centralized database (214) may be communicably connected to the processor (202) and the memory (204). The centralized database (214) may be configured to store and retrieve the request pertaining to features, or services or workflow of the system (108), access rights, attributes, approved list, and authentication data provided by an administrator. Further, the remote server (104) may allow the system (108) to update/create/delete one or more parameters of the information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the centralized database (214) may be outside the system (108) and communicate through a wired medium and wireless medium.
[0053] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by electronic circuitry.
[0054] In order for the system (108) to process the alarm, the processor (202) includes a collector module (216) and a fault manager module (218). The collector module (216) and the fault manager module (218) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples
described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by electronic circuitry.
[0055] In order for the system (108) to process the alarm, the collector module (216) and the fault manager module (218) are communicably coupled to each other. In an example embodiment, the collector module (216) receives a plurality of alarms from the plurality of network elements (106a). The fault manager module (218) segregates the plurality of alarms into a raise alarm category and a clear alarm category. The raise alarm category refers to a specific classification or type of the alarm that is generated when a fault or an issue is detected within a network infrastructure. The fault or the issue can be, for example, but not limited to hardware failures (e.g., router or switch malfunction), software errors (e.g., protocol issues), connectivity problems (e.g., link failures), or performance degradation (e.g., high latency or packet loss) or the like. The clear alarm category refers to a classification or type of action taken to resolve or acknowledge the alarm that has been raised due to a detected fault or issue within the network infrastructure. In other words, when the fault or the issue is detected within the network (106), the alarm is typically generated to alert the network operator or administrator. Once the fault or the issue has been addressed, the clear alarm category is used to mark the alarm as resolved or acknowledged. The raise alarm category and the clear alarm category are essential for a network administrator to promptly identify and resolve the fault or issue, so as to maintain the reliability, availability, and performance of the network infrastructure.
[0056] Further, the fault manager module (218) stores the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache. Each alarm from the plurality of alarms is provided with an identifier, and the identifier of each alarm is produced in a stream. Further, the fault manager module (218) schedules to pull at least one batch from the stream to obtain corresponding data from the distributed cache. The corresponding data is used to fetch a specific alarm instead of getting all the alarms or the plurality of alarms stored in the fault manager module (218). The corresponding data can be, for example, but is not limited to, a high latency, a packet loss, protocol issues, a link error, or the like. Further, the fault manager module (218) consumes the identifier associated with the clear alarm from the stream. Consuming the identifier means the fault manager module (218) receives data streams containing information about the network alarms. Upon detecting that the alarm has been resolved or acknowledged (cleared), the fault manager module (218) identifies and processes the unique identifier associated with that alarm. This action enables the NMS to manage and maintain an accurate record of network events, ensuring efficient fault handling and overall network performance in alignment with a Fault Management (FM) process.
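By way of illustration only, the identifier-stream pattern described above may be sketched as follows. This is a simplified in-memory approximation; the names (`cache`, `raise_stream`, `clear_stream`, `ingest`, `pull_batch`) and the alarm field layout are hypothetical and do not form part of the claimed implementation.

```python
from collections import deque

# In-memory stand-ins for the distributed cache and the ID streams.
cache = {}                # full alarm documents, keyed by identifier
raise_stream = deque()    # carries only raise-alarm identifiers
clear_stream = deque()    # carries only clear-alarm identifiers

def ingest(alarm):
    """Store the full alarm document in the cache and produce only its
    identifier into the stream matching its category."""
    cache[alarm["id"]] = alarm
    stream = raise_stream if alarm["type"] == "raiseAlarm" else clear_stream
    stream.append(alarm["id"])

def pull_batch(stream, size):
    """Pull up to `size` identifiers from the stream and resolve each to
    its full document from the cache, fetching only the needed alarms."""
    ids = [stream.popleft() for _ in range(min(size, len(stream)))]
    return [cache[i] for i in ids]

ingest({"id": "a1", "type": "raiseAlarm", "detail": "link failure"})
ingest({"id": "a2", "type": "clearAlarm", "detail": "link restored"})
batch = pull_batch(raise_stream, 10)
```

Keeping the bulky documents out of the stream in this way is what allows the raise and clear consumers to pull compact batches of IDs and resolve only the specific alarms they need.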
[0057] Further, the fault manager module (218) detects the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category. In an embodiment, further, the fault manager module (218) deletes the raise alarm corresponding to the clear alarm from the raise alarm category and stores the raise alarm category in an archive section, when the corresponding raise alarm is determined. In an example, upon detecting the fault (e.g., high packet loss rates or interface errors or the like) in the plurality of network elements (106a), the fault manager module (218) generates the alarm. This alarm is categorized within a fault management domain. In another embodiment, the fault manager module (218) produces the raise alarm in the stream to retry a clearance of the raise alarm when the corresponding raise alarm is not determined.
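A minimal sketch of the clear-handling branch described in this paragraph follows, with plain dictionaries and a list standing in for the distributed cache, archive section, and retry stream; all names are hypothetical.

```python
def process_clear(clear_alarm, active, archive, retry_stream):
    """Delete the matching raise alarm from the active section and archive
    it when found; otherwise produce the clear alarm for a later retry."""
    raise_alarm = active.pop(clear_alarm["raise_id"], None)
    if raise_alarm is not None:
        # Attach clearance information before moving to the archive section.
        raise_alarm["cleared_at"] = clear_alarm["timestamp"]
        archive[raise_alarm["id"]] = raise_alarm
        return "archived"
    retry_stream.append(clear_alarm)
    return "retry"

active = {"r1": {"id": "r1", "timestamp": 10}}
archive, retry_stream = {}, []
ok = process_clear({"raise_id": "r1", "timestamp": 20}, active, archive, retry_stream)
missing = process_clear({"raise_id": "r9", "timestamp": 21}, active, archive, retry_stream)
```

The two return values mirror the two embodiments above: a determined raise alarm is archived, while an undetermined one re-enters the stream for retry.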
[0058] In an embodiment, caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment for each alarm from the plurality of alarms. In an embodiment, the enriched alarm is checked for correlation eligibility, wherein the enriched alarm is sent to a correlation service followed by a Trouble Ticket (TT) eligibility.
[0059] Further, the fault manager module (218) checks each alarm from the plurality of alarms for a planned maintenance event. Further, the fault manager module (218) checks each alarm from the plurality of alarms for TT eligibility, when the planned maintenance event is unavailable.
[0060] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which various entities' operations are explained, according to various embodiments of the present system. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration, and should not be construed as limiting the scope of the present disclosure.
[0061] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to execute the requests in the communication network (106).
[0062] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the request to the UE (102-1). More specifically, the one or more processors (202) of the system (108) are configured to transmit the response content to at least one of the UEs (102-1). The kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106). The resources include at least one of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0063] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210). The operations and functions of the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210) are already explained in FIG. 2; for the sake of brevity, the same operations (or repeated information) are not explained again in the patent disclosure. Further, the processor (202) includes the collector module (216) and the fault manager module (218). The operations and functions of the collector module (216) and the fault manager module (218) are already explained in FIG. 2; for the sake of brevity, the same operations (or repeated information) are not explained again in the patent disclosure.
[0064] FIG. 4 shows a block diagram of the system architecture (400) illustrating a relation between various components/elements of the present system, in accordance with an exemplary embodiment of the present disclosure. FIG. 4 particularly discloses the flow of receiving alarms by the collector module (216) into the stream, which are collected and segregated by the fault manager module (218) into a clear fault manager (402) and a raise fault manager (406). The system architecture (400) discloses a location of a distributed IO cache (408) and its relation with the fault manager module (218), the raise fault manager (406), and the clear fault manager (402).
[0065] The system architecture (400) further discloses a FM callback module (410), an enrichment engine (EE) (412), a correlation engine (CE) (414), and a TT service module (416). The enrichment engine (412), the correlation engine (CE) (414), and the FM callback module (410) are configured to communicate with each other. Multiple alarms are received from various nodes (e.g., the fault manager module (218), the raise fault manager (406), and the clear fault manager (402)). The clear fault manager (402) may include a clear fault manager retry mechanism (not shown).
[0066] In accordance with an exemplary embodiment of the invention, the alarms due to the system fault/failure are raised to a Network Management System (NMS) (not shown). A reception of the alarms is located at the NMS. In an example, the reception of the alarms is located at the collector module (216) of the NMS. The collector module (216) collects these alarms and produces a stream. The fault manager module (218) consumes alarms from the stream continuously, to insert them in the distributed IO cache (408). The distributed IO cache (408) is a high IO TPS distributed cache which consists of a multi-node architecture and an in-memory database that persists on disk, providing reliability and availability. In an aspect, if the same alarm is already present in the distributed IO cache (408), then the distributed IO cache (408) updates its occurrence, adds a timestamp to the timestamp array, and produces the IDs to the stream to be consumed by the clear fault manager (402) and the raise fault manager (406).
[0067] In another aspect, only “IDs” of said data/alarms are streamed towards the raise fault manager (406)/clear fault manager (402) instead of the whole data. In another aspect, the raise fault manager (406) and the clear fault manager (402) are scheduled to pull batches from the stream and to get corresponding data from the distributed IO cache (408). The corresponding data is used to fetch the specific alarm instead of getting all the alarms or the plurality of alarms stored in the fault manager module (218). In yet another aspect, a non-blocking retry mechanism (not shown) frees application threads from going into a time wait state so as to enable higher performance using the stream. In yet another aspect, for each alarm, caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment instead of traversing through the centralized database (214) before inserting in the centralized database (214). In an embodiment, the GI and POI enrichment refer to the process of adding additional information to the GI and POI data such as, but not limited to, geographical coordinates (latitude and longitude) along with additional information like name, address, category, and sometimes ratings or reviews. In yet another aspect, the blocking communication between microservices is reduced, so application threads do not wait for another service to complete the request. Since multiple occurrences are being updated in the cache rather than the centralized database (214), the number of alarm hits is reduced and the performance of the system is increased. Reducing alarm hits refers to minimizing the frequency and impact of database operations triggered by alarms generated within the communication network (106).
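The cached GI/POI enrichment described above can be illustrated with a memoized lookup; the sketch below is hypothetical (the `db_lookup` function, site identifiers, and returned fields are invented stand-ins for the centralized database query), but it shows why repeated alarms for the same site avoid further database traversals.

```python
import functools

db_reads = {"count": 0}  # counts simulated database traversals

def db_lookup(site_id):
    """Stand-in for a centralized-database query for GI/POI data."""
    db_reads["count"] += 1
    return (("lat", 19.07), ("lon", 72.87), ("poi", "exchange"))

@functools.lru_cache(maxsize=None)
def cached_gi_poi(site_id):
    """Memoized GI/POI lookup: repeated alarms for the same site reuse
    the cached result instead of traversing the database again."""
    return db_lookup(site_id)

def enrich(alarm):
    """Return a copy of the alarm with GI/POI attributes merged in."""
    enriched = dict(alarm)
    enriched.update(dict(cached_gi_poi(alarm["site"])))
    return enriched

first = enrich({"id": "a1", "site": "MUM-01"})
second = enrich({"id": "a2", "site": "MUM-01"})  # no extra database read
```

Only the first alarm for a given site pays the lookup cost; subsequent alarms are enriched from the cache, which is the mechanism by which the described design reduces database hits under recurring alarms.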
[0068] In yet another aspect, once documents (e.g., alarm related documents or the like) are received by the raise fault manager (406), the raise operation is initiated, wherein the alarm is stored or updated in the centralized database (214). The documents may include (for example) an alarm severity, an alarm description, an affected device, a timestamp, event logs, performance metrics, and historical performance data and trends. The alarm severity indicates an impact or seriousness of the fault (e.g., critical, major, minor or the like). The alarm description provides details about the nature of the fault (e.g., hardware failure, link down or the like). The affected device specifies the plurality of network elements (106a) where the fault has been detected. The timestamp records the exact time when the fault was detected or when the alarm was raised. The event logs are sequential records of events that preceded the fault, helping to trace the sequence of events leading to the issue. The performance metrics indicate bandwidth utilization and packet loss rates at the time of the fault occurrence. The historical performance data and trends help in comparing current network behaviour with past patterns, aiding in diagnosing recurring issues or identifying anomalies.
[0069] Additionally, the raise operation may involve enrichment and correlation of alarms prior to trouble ticket generation. In yet another aspect, the raise fault manager (406) consumes raise alarms’ IDs from the stream and then gets alarms from the distributed IO cache (408) using those IDs. These alarms are processed, metadata and enrichment are updated, and the alarms are inserted in the centralized database (214). These alarms are also processed for planned events, AI-based correlation, and trouble ticketing. The AI-based correlation refers to the use of Artificial Intelligence (AI) techniques to intelligently correlate and manage alarms that are anticipated or expected due to planned events within the network infrastructure.
[0070] In yet another aspect, the clear fault manager (402) consumes clear alarms’ IDs from the stream and checks them in the centralized database (214) for their corresponding raise alarms. If the corresponding raise alarm is found, it is deleted from an active section and inserted in the archived section. If it is not found, the clear alarm is produced in the stream for the retry fault manager (FM) to retry the clearance.
[0071] In yet another aspect, the clear retry fault manager (404) consumes the retry alarm data from the stream and again performs the clearance process as mentioned above. If the raise alarm is not found in the centralized database (214), the retry count is incremented and the alarm is sent back to the retry stream. This retry process continues until the alarm is cleared or the retry threshold is exhausted.
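The bounded retry loop described in this paragraph may be sketched as follows; the function name, the `retry_count` field, and the default threshold are illustrative assumptions rather than the claimed implementation.

```python
def retry_clearance(clear_alarm, active, archive, retry_stream, threshold=3):
    """Re-attempt the clearance; on failure, increment the retry count and
    requeue the clear alarm until the retry threshold is exhausted."""
    raise_alarm = active.pop(clear_alarm["raise_id"], None)
    if raise_alarm is not None:
        archive[raise_alarm["id"]] = raise_alarm
        return "cleared"
    clear_alarm["retry_count"] = clear_alarm.get("retry_count", 0) + 1
    if clear_alarm["retry_count"] < threshold:
        retry_stream.append(clear_alarm)   # send back to the retry stream
        return "requeued"
    return "exhausted"                     # retry threshold reached; give up

active, archive, retry_stream = {}, {}, []
clear = {"raise_id": "r1"}
outcomes = [retry_clearance(clear, active, archive, retry_stream) for _ in range(3)]

# If the raise alarm later appears, a retry succeeds and the alarm is archived.
active["r1"] = {"id": "r1"}
final = retry_clearance({"raise_id": "r1"}, active, archive, retry_stream)
```

Capping the retry count in this way keeps a clear alarm whose raise never arrives from circulating in the retry stream indefinitely.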
[0072] In accordance with an exemplary embodiment of the invention, the collector module (216) is responsible for collecting all the FCAPS data from the plurality of network elements (106a). The faults (or alarms) are sent to the collector module (216) via various protocols (Simple Network Management Protocol (SNMP), REST, SOAP, Kafka, etc.) by the network elements (106a). The alarm/data is collected and produced in the stream with their alarm types as events ‘raise Alarm’ and ‘clear Alarm’. The fault manager module (218) consumes the alarms from the stream and segregates them into the raise fault manager (406) and the clear fault manager (402).
[0073] In accordance with an additional embodiment, in the fault manager module (218), an auditor (not shown) runs at a longer interval to find stranded alarms present in the distributed IO cache (408) and process them. The auditor audits both raise alarms and clear alarms and forwards them towards the raise and clear processes accordingly. This is done as a failsafe to avoid any loss of alarms in the distributed IO cache (408).
[0074] In accordance with the exemplary embodiment, in case of a burst/upsurge of alarms (higher Transactions Per Second (TPS)), the distributed IO cache (408) is located on top of the fault manager module (218), which helps to store the data until it is required. The stored data within the distributed IO cache (408) may also be updated in case of recurrence attributes. Therefore, if an alarm is not present in the distributed IO cache (408), it is inserted; if it is present, then the latest alarm is updated with the addition of the alarm timestamp to the timestamp array and an increment of the occurrence count. These IDs/alarms are streamed toward the raise/clear stream based on certain conditions dependent on time or transaction occurrence. Particularly, the conditions are whether the ID/alarm is occurring for the first time, on every nth occurrence, or whether the alarm last occurred more than a configurable time ago.
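The upsert-and-throttle behaviour described above can be sketched in a few lines; the function returns whether the identifier should be produced to the stream under the three stated conditions (first occurrence, every nth occurrence, or a sufficiently old last occurrence). The parameter names `n` and `max_age` are hypothetical stand-ins for the configurable values.

```python
def upsert_alarm(cache, alarm, n=5, max_age=300.0, now=0.0):
    """Insert a new alarm, or record a recurrence of an existing one.
    Returns True when the identifier should be produced to the stream:
    on the first occurrence, on every nth occurrence, or when the alarm
    last occurred more than max_age seconds ago."""
    entry = cache.get(alarm["id"])
    if entry is None:
        cache[alarm["id"]] = {"doc": alarm, "timestamps": [now], "count": 1}
        return True                          # first occurrence: stream it
    stale = now - entry["timestamps"][-1] > max_age
    entry["timestamps"].append(now)          # add timestamp to the array
    entry["count"] += 1                      # increment occurrence count
    entry["doc"] = alarm                     # keep the latest document
    return entry["count"] % n == 0 or stale

cache = {}
alarm = {"id": "x"}
hits = [upsert_alarm(cache, alarm, now=float(t)) for t in range(5)]
stale_hit = upsert_alarm(cache, alarm, now=1000.0)  # last seen long ago
```

Under these conditions only a fraction of recurring alarms reach the stream, which is how a burst of identical alarms is absorbed by the cache instead of flooding the downstream managers.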
[0075] In accordance with an additional embodiment, the fault manager module (218) also functions to pump selective data into the stream: while the larger document is stored in the distributed IO cache (408), the document IDs of the alarm are stored in the stream to be received by the raise fault manager (406) and the clear fault manager (402). The document ID may include (for example) an alarm severity ID, an alarm description ID, an affected device ID, a timestamp ID, an event log ID, a performance metrics ID, and a historical performance data and trends ID. Once these IDs are received, the corresponding documents are obtained from the distributed IO cache (408) by the raise fault manager (406) and the clear fault manager (402) using the IDs.
[0076] In accordance with the exemplary embodiment, once documents are received, the raise operation is initiated, wherein the alarm is stored in the centralized database (214). In case the alarm has been received before, its attributes are simply updated in the centralized database (214) and the alarm is stored with the updated information, thereby preventing double/repetitive storage of existing/prior stored documents and enabling efficient data storage. The attributes associated with the alarms provide essential information that helps in identifying, categorizing, prioritizing, and resolving network faults efficiently. The attributes can be, for example, but are not limited to, a severity, a timestamp, an alarm type, an affected object, an acknowledgment status, a clearance status, or the like. The severity indicates the impact or seriousness of the alarm on network operations. The timestamp records the exact time when the alarm was generated or detected. The alarm type describes the nature or category of the alarm (e.g., hardware failure, link down, configuration error or the like). The affected object identifies a specific network element from the plurality of network elements (106a) or an object (e.g., device, interface) impacted by the alarm. The acknowledgment status indicates whether the alarm has been acknowledged by an operator or not. The clearance status indicates whether the alarm has been cleared (resolved) or not. The raise operation additionally has two to three levels of traversals including enrichment and correlation performed by the enrichment engine (412) and the correlation engine (414).
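The insert-or-update behaviour of the raise operation described above may be sketched as a simple upsert; the list of refreshable attributes below is illustrative, chosen from the attributes named in this paragraph, and the dictionary stands in for the centralized database.

```python
def store_or_update(db, alarm):
    """Insert the alarm document once; on recurrence, refresh only its
    mutable attributes so no duplicate document is created."""
    existing = db.get(alarm["id"])
    if existing is None:
        db[alarm["id"]] = dict(alarm)
        return "inserted"
    # Update selected attributes in place instead of storing a new document.
    for key in ("severity", "timestamp", "acknowledged", "clearance_status"):
        if key in alarm:
            existing[key] = alarm[key]
    return "updated"

db = {}
first = store_or_update(db, {"id": "a1", "severity": "minor", "timestamp": 10})
again = store_or_update(db, {"id": "a1", "severity": "major", "timestamp": 20})
```

The second call leaves exactly one document in the store with the latest attribute values, which is the duplicate-prevention property the paragraph describes.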
[0077] In accordance with an exemplary embodiment, enrichment related attributes (such as host particulars, node static data, etc.) are retrieved based on profiles and configuration; based on the order of the level of traversal, the node data is retrieved and added to the alarm body, thereby enriching the alarm. The profiles refer to predefined configurations or settings that determine how alarms are managed, processed, and presented within the system (108). In other words, the profiles refer to the sets of rules, configurations, or templates that govern the behaviour and handling of alarms within the system (108). If logical enrichment for the alarm is available, that too is performed. The logical enrichment for the alarm refers to the process of enhancing the basic alarm data with additional contextual information (e.g., accurate time, accurate location or the like) and intelligence to facilitate better analysis, diagnosis, and resolution of network issues. This happens if the distributed IO cache (408) is enabled before the alarm insertion. If the alarm is normally inserted, then a parallel request is sent to the EE (412) based on the profile and eligibility of the alarm. [0078] These enriched alarms are checked for correlation eligibility; if the alarm is ineligible for correlation, the alarm is sent directly for a TT request. The correlation eligibility intelligently correlates and manages the alarms that are anticipated or expected due to planned events within the network infrastructure. If the alarm is eligible, it is accordingly sent to the correlation engine (CE) (414) for correlation based on various types of correlations, which may be domain/inter-domain based; normal grouping where parent-child identification is not successful; pre-determined parameters of correlation; or time dependent parameters; in order to group the related alarms.
Once correlated, the correlation related attributes are also added to the alarms, a unique ID is given to the correlated groups, and the groups are sent for a TT request to the TT processor (TTP), wherein the alarms are evaluated for the TT request and, if found eligible, a TT is raised accordingly. Regardless of correlation, these alarms are checked for outage alarms; if an outage (Planned Maintenance) is absent, then the alarms are checked for TT eligibility.
[0079] In accordance with an exemplary embodiment, the clear fault manager (402) obtains data from the stream as well as the distributed IO cache (408) to detect the raise alarm, as every clear must have its corresponding raise. On detection of said raise, the clear fault manager (402) compares for a raise timestamp less than the clearance timestamp, deletes the raise alarm from the active section, adds clearance metadata to the alarm, and stores it in the archived section. The clearance metadata refers to specific data or information associated with the process of resolving or clearing the alarm once the underlying issue has been addressed in the network infrastructure. However, due to fluctuations/time lapse within the process, if the clear fault manager (402) is unable to locate the corresponding raise, then a retry count is added in the clear alarm data and it is streamed to the retry stream. In case even the retry is unable to detect the raise, the retry count is incremented and the clear alarm is streamed back to the retry stream, subject to a retry count check which may be set at a pre-defined number to prevent too many retry attempts. [0080] If the clear fault manager (402) finds the raise alarm for clearance and a TT has been generated on that alarm, then a TT termination request is sent towards the TT service. The alarms (both active and archived) are then fetched from the centralized database (214) for reporting and visualization in the interface (206).
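The timestamp check described above (a clearance is applied only when the raise precedes it) can be sketched as follows; the function and field names are hypothetical, and plain dictionaries stand in for the active and archived sections.

```python
def clear_if_valid(raise_alarm, clear_alarm, active, archive):
    """Apply the clearance only when the raise timestamp is less than the
    clearance timestamp; attach clearance metadata before archiving."""
    if raise_alarm["timestamp"] < clear_alarm["timestamp"]:
        active.pop(raise_alarm["id"], None)
        raise_alarm["clearance"] = {"cleared_at": clear_alarm["timestamp"]}
        archive[raise_alarm["id"]] = raise_alarm
        return True
    return False

active = {"r1": {"id": "r1", "timestamp": 100}}
archive = {}
late = clear_if_valid(active["r1"], {"timestamp": 50}, active, archive)   # rejected
done = clear_if_valid(active["r1"], {"timestamp": 150}, active, archive)  # archived
```

The out-of-order clear (timestamp 50, before the raise at 100) is rejected, while the valid one moves the raise alarm, with its clearance metadata, from the active section to the archive.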
[0081] FIG. 5 is a flow chart (500) illustrating a method for processing an alarm, according to various embodiments of the present system.
[0082] At step 502, the method includes receiving the plurality of alarms from the plurality of network elements (106a). At step 504, the method includes segregating the plurality of alarms into the raise alarm category and the clear alarm category. At step 506, the method includes storing the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in the distributed IO cache (408). Each alarm from the plurality of alarms is provided with the identifier, where the identifier of each alarm is produced in a stream.
[0083] At step 508, the method includes scheduling to pull at least one batch from the stream to obtain corresponding data from the distributed IO cache (408). At step 510, the method includes consuming the identifier associated with the clear alarm from the stream. At step 512, the method includes detecting the consumed identifier associated with the clear alarm in the distributed cache for determining corresponding raise alarm category.
[0084] In an embodiment, at step 514, the method includes deleting the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm category in an archive section, when the corresponding raise alarm is determined. In another embodiment, at step 516, the method includes producing the raise alarm in the stream to retry the clearance of the raise alarm when the corresponding raise alarm is not determined.
[0085] FIG. 6 shows an example flow chart illustrating a method for processing the alarm, according to various embodiments of the present disclosure. [0086] At step 602, the collector module (216) receives alarms and streams them towards the fault manager module (218). At step 604, the fault manager module (218) stores them in the distributed IO cache (408) while the “IDs” of said alarms are produced in the stream and streamed towards the raise fault manager (406)/the clear fault manager (402) instead of the whole data. At step 606, the raise fault manager (406) and the clear fault manager (402) are scheduled to pull batches from the stream and to get corresponding data from the distributed IO cache (408). At step 608, the clear fault manager (402) consumes clear alarms’ IDs and checks for corresponding raise alarms. If the raise alarm is found, the raise alarm is deleted from the active section and inserted in the archived section. Additionally, the non-blocking retry mechanism frees the application threads from going into the time wait state so as to enable higher performance using the stream. If the raise is not found, then the clear alarms are produced in the stream for the retry FM to retry the clearance.
[0087] The flow chart further shows that, for each alarm, caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment instead of traversing through the DB (214) before inserting into the DB (214). The blocking communication between microservices is reduced, so application threads do not wait for another service to complete the request. Since multiple occurrences are being updated in the cache, DB hits are reduced and the performance of the system is increased.
[0088] At 612, once documents are received by the raise fault manager (406), the raise operation is initiated, wherein the alarm is stored or updated in the DB (214). At 614, additionally, the raise operation may involve enrichment and correlation of alarms prior to trouble ticket generation. At 616, regardless of correlation, alarms are checked for outage alarms. If the outage (Planned Maintenance) is absent, then the alarms are checked for TT eligibility.
[0089] Technically advanced solution of the invention: [0090] The present system (108), with each of its elements/components, offers a higher TPS for alarm processing and end-to-end lifecycle management. The system (108) and method of the present disclosure enable performing updates in the DB, thereby preventing double/repetitive storage of existing/prior stored documents to enable efficient data storage. Since multiple occurrences are being updated in the cache, hits are reduced and the performance of the system is increased.
[0091] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0092] Method steps: The method steps illustrated herein are set out to explain the exemplary embodiments shown, as already noted above with reference to the description and drawings (FIGS. 1-6). [0093] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0094] Environment - 100
[0095] UEs- 102, 102- 1 - 102-n
[0096] Server - 104
[0097] Communication network - 106
[0098] Plurality of network elements -106a
[0099] System - 108
[00100] Processor - 202
[00101] Memory - 204
[00102] Interface - 206
[00103] Display - 208
[00104] Input unit - 210
[00105] Centralized Database - 214
[00106] Collector module- 216
[00107] Fault manager module - 218
[00108] System - 300
[00109] Primary processors -305
[00110] Memory- 310
[00111] Kernel- 315
[00112] System architecture - 400 [00113] Clear fault manager - 402
[00114] Clear retry fault manager - 404
[00115] Raise fault manager - 406
[00116] Distributed IO cache - 408
[00117] FM callback module - 410
[00118] Enrichment engine (EE) - 412
[00119] Correlation Engine (CE) - 414
[00120] TT service module - 416

Claims

We Claim:
1. A method of processing an alarm, the method comprising the steps of: receiving, by one or more processors (202), a plurality of alarms from a plurality of network elements (106a); segregating, by the one or more processors (202), the plurality of alarms into a raise alarm category and a clear alarm category; storing, by the one or more processors (202), the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed IO cache (408), wherein each alarm from the plurality of alarms is provided with an identifier, and wherein the identifier of each alarm is produced in a stream; scheduling, by the one or more processors (202), to pull at least one batch from the stream to obtain corresponding data from the distributed IO cache (408); consuming, by the one or more processors (202), the identifier associated with the clear alarm from the stream; detecting, by the one or more processors (202), the consumed identifier associated with the clear alarm in the distributed cache for determining the corresponding raise alarm category; and performing, by the one or more processors (202), one of: deleting, by the one or more processors (202), the raise alarm corresponding to the clear alarm from the raise alarm category and storing the raise alarm category in an archive section, when the corresponding raise alarm is determined, and producing, by the one or more processors (202), the raise alarm in the stream to retry a clearance of the raise alarm when the corresponding raise alarm is not determined.
2. The method as claimed in claim 1, wherein caching is used for updating Geographic Information (GI) and Points of Interest (POI) enrichment for each alarm from the plurality of alarms.
3. The method as claimed in claim 2, wherein the enriched alarm is checked for correlation eligibility, wherein the enriched alarm is sent to a correlation service followed by a Trouble Ticket (TT) eligibility.
4. The method as claimed in claim 1, wherein the method further comprises the steps of: checking, by the one or more processors (202), each alarm from the plurality of alarms for a planned maintenance event; and checking, by the one or more processors (202), each alarm from the plurality of alarms for TT eligibility, when the planned maintenance event is unavailable.
5. A system (108) for processing an alarm, the system (108) comprising: a collector module (216) configured to: receive a plurality of alarms from a plurality of network elements (106a); a fault manager module (218) configured to: segregate the plurality of alarms into a raise alarm category and a clear alarm category; store the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache, wherein each alarm from the plurality of alarms is provided with an identifier, and wherein the identifier of each alarm is produced in a stream; schedule to pull at least one batch from the stream to obtain corresponding data from the distributed cache; consume the identifier associated with the clear alarm from the stream; detect the consumed identifier associated with the clear alarm in the distributed cache for determining a corresponding raise alarm category; and perform one of: delete the raise alarm corresponding to the clear alarm from the raise alarm category and store the raise alarm category in an archive section, when the corresponding raise alarm is determined, and produce the raise alarm in the stream to retry a clearance of the raise alarm when the corresponding raise alarm is not determined.
6. The system (108) as claimed in claim 5, wherein caching is used for updating GI and POI enrichment for each alarm from the plurality of alarms.
7. The system (108) as claimed in claim 6, wherein the enriched alarm is checked for correlation eligibility, wherein the enriched alarm is sent to a correlation service followed by a TT eligibility.
8. The system (108) as claimed in claim 5, wherein the fault manager module (218) is further configured to: check each alarm from the plurality of alarms for a planned maintenance event; and check each alarm from the plurality of alarms for TT eligibility, when the planned maintenance event is unavailable.
9. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: receive a plurality of alarms from a plurality of network elements (106a); segregate the plurality of alarms into a raise alarm category and a clear alarm category; store the plurality of alarms as segregated into one of the raise alarm category and the clear alarm category in a distributed cache, wherein each alarm from the plurality of alarms is provided with an identifier, and wherein the identifier of each alarm is produced in a stream; schedule to pull at least one batch from the stream to obtain corresponding data from the distributed cache; consume the identifier associated with the clear alarm from the stream; detect the consumed identifier associated with the clear alarm in the distributed cache for determining a corresponding raise alarm category; and perform one of: delete the raise alarm corresponding to the clear alarm from the raise alarm category and store the raise alarm category in an archive section, when the corresponding raise alarm is determined, and produce the raise alarm in the stream to retry a clearance of the raise alarm when the corresponding raise alarm is not determined.
10. A User Equipment (UE) (102-1), comprising: one or more primary processors (305) communicatively coupled to one or more processors (202) of a system (108), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) cause the UE (102-1) to: send a plurality of alarms to the one or more processors (202); wherein the one or more processors (202) are configured to perform the steps as claimed in claim 1.
PCT/IN2024/051277 2023-07-17 2024-07-17 Method and system for processing an alarm Pending WO2025017704A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321048150 2023-07-17
IN202321048150 2023-07-17

Publications (1)

Publication Number Publication Date
WO2025017704A1 true WO2025017704A1 (en) 2025-01-23

Family

ID=94281286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/051277 Pending WO2025017704A1 (en) 2023-07-17 2024-07-17 Method and system for processing an alarm

Country Status (1)

Country Link
WO (1) WO2025017704A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10637737B2 (en) * 2017-03-28 2020-04-28 Ca Technologies, Inc. Managing alarms from distributed applications
CN114430562A (en) * 2022-02-10 2022-05-03 中盈优创资讯科技有限公司 5G alarm real-time clearing and delay clearing method and device


Similar Documents

Publication Publication Date Title
US11805005B2 (en) Systems and methods for predictive assurance
US9811443B2 (en) Dynamic trace level control
JP6396887B2 (en) System, method, apparatus, and non-transitory computer readable storage medium for providing mobile device support services
CN104202201A (en) Log processing method and device and terminal
US20200252317A1 (en) Mitigating failure in request handling
US12159253B2 (en) Outage risk detection alerts
US11360745B2 (en) Code generation for log-based mashups
US9594622B2 (en) Contacting remote support (call home) and reporting a catastrophic event with supporting documentation
US8799460B2 (en) Method and system of providing a summary of web application performance monitoring
CN110727563A (en) Cloud service alarm method and device for preset customer
US20250080399A1 (en) Action Recommendations for Operational Issues
CN115190008A (en) Fault processing method, fault processing device, electronic device and storage medium
US12438766B2 (en) Service dependencies based on relationship network graph
WO2025017704A1 (en) Method and system for processing an alarm
JPWO2013161522A1 (en) Log collection server, log collection system, and log collection method
US20250138896A1 (en) Smart job generation for incident response
US20240202286A1 (en) Event pattern prediction
WO2019241199A1 (en) System and method for predictive maintenance of networked devices
WO2025017698A1 (en) System and method for determining root cause of one or more issues at a plurality of network nodes
US20250351043A1 (en) System and method for analyzing network performance based on cell id
WO2025062438A1 (en) Method and system for synchronizing an inventory database
WO2025057243A1 (en) System and method to manage routing of requests in network
WO2025052455A1 (en) System and method for identifying state of subscriber
WO2025052456A1 (en) System and method for managing data in database
WO2025013005A1 (en) System and method for ticket management of planned events in a network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24842695

Country of ref document: EP

Kind code of ref document: A1