
WO2025013005A1 - System and method for ticket management of planned events in a network - Google Patents


Info

Publication number
WO2025013005A1
Authority
WO
WIPO (PCT)
Prior art keywords
alarm
network
processor
alarms
creation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051039
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Sandeep Bisht
Rahul Mishra
Akash Verma
Somya Mishra
Deepanshu Singla
Namrata Rammurat KASHYAP
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd
Publication of WO2025013005A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5061 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the interaction between service providers and their network customers, e.g. customer relationship management
    • H04L41/5074 Handling of user complaints or trouble tickets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 Management of faults, events, alarms or notifications
    • H04L41/0604 Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
    • H04L41/0609 Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time based on severity or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 Management of faults, events, alarms or notifications
    • H04L41/0631 Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis

Definitions

  • the present invention generally relates to alarm management in a network, and in particular, to a system and a method of handling and managing alarms with regard to Planned Events (PE) in a network.
  • Network Alarm Monitoring Systems monitor equipment for notable events - otherwise referred to as "alarms".
  • An alarm may be indicative of a hardware, software, or network related issue.
  • in one scenario, an alarm may indicate that a network equipment is failing and a network outage is about to occur.
  • an Alarm Monitoring System should organize and interpret the severity of incoming alarms in real-time, which will help to coordinate maintenance and repair efforts throughout the network. It is an essential part of any Network Management System (NMS).
  • Alarms can be broadly divided into two types: 'critical alarms' and 'standard alarms'. Each has its own advantages but also comes with an associated set of challenges. Standard alarms are typically low-priority in nature and are often expected or even scheduled. A critical alarm draws attention to a serious emergency. These can range from things that simply impact planned production schedules (such as a broken machine) to incidents that threaten the physical safety of staff (such as a fire). Critical alarms are often linked to serious situations with multiple factors that need to be taken into account, often creating confusion amongst responders and managers alike.
  • during a PE, other nodes of the cluster may try to communicate with the node under maintenance and, upon failing, may create some alarms.
  • a system for ticket management of planned events in a network includes a fault processor configured to receive one or more alarms related to operation of one or more nodes or services operational in a network.
  • the fault processor is further configured to identify an alarm, from the one or more alarms, raised during a planned event (PE).
  • the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE.
  • the PE indicates a time window for performing maintenance on a node in the network.
  • the fault processor is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile.
  • the fault processor is further configured to send a request, corresponding to the alarm, to a TT processor for creation or termination of a TT.
  • the system further includes the TT processor configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE.
  • the data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key.
  • the operational task data comprises one of a Service Affecting Flag, PE identification (ID), impacted network element ID, and a unique key.
  • the fault processor sends the request for creation and termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or message stream.
  • the attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information.
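By way of a non-limiting illustration, the PE-window check recited in this aspect can be sketched in a few lines; the field names (ne_id, raise_time, impacted_ne_ids) are assumptions made for this sketch and are not defined by the disclosure.

```python
from datetime import datetime

def is_raised_during_pe(alarm, pe_profile):
    """True when the alarm's raise time falls inside the PE schedule and the
    alarm's network element is listed in the PE's operational task data."""
    in_window = pe_profile["pe_start"] <= alarm["raise_time"] <= pe_profile["pe_end"]
    return in_window and alarm["ne_id"] in pe_profile["impacted_ne_ids"]

alarm = {"ne_id": "NE-42", "raise_time": datetime(2024, 7, 1, 2, 30)}
pe_profile = {
    "pe_id": "CH-001",                       # change ID / PE ID
    "impacted_ne_ids": {"NE-42", "NE-43"},   # impacted network element IDs
    "pe_start": datetime(2024, 7, 1, 2, 0),  # PE start time
    "pe_end": datetime(2024, 7, 1, 4, 0),    # PE end time
}
print(is_raised_during_pe(alarm, pe_profile))  # True
```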
  • a method of ticket management of planned events includes receiving, by a fault processor, one or more alarms related to operation of one or more nodes or services operational in a network.
  • the method further includes identifying, by the fault processor, an alarm, from the one or more alarms, raised during a planned event (PE).
  • the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE.
  • the method further includes updating, by the fault processor, attributes of the alarm raised during the PE, using data present in a PE profile.
  • the method further includes sending, by the fault processor, a request, to a TT processor for creation or termination of a TT corresponding to the alarm.
  • the method further includes addressing, by the TT processor, the request for creation or termination of the TT, within a predefined time period after end of the PE.
  • the data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key.
  • the operational task data includes identities of nodes expected to become inactive during the PE.
  • the PE indicates a time window for performing maintenance on a node in the network.
  • the method further comprises creating, by the TT processor, the TT based on a user input provided through a user interface (UI).
  • the request for creation and termination of the TT is sent to the TT processor using a Hypertext Transfer Protocol (HTTP) stream or message stream.
  • the TT processor stores and retrieves the alarm from a persist storage for creation and termination of the TT.
  • the TT processor is further configured to obtain the alarm from the persist storage, determine that a status of the alarm is active, and change a service affecting attribute of the alarm to Service Affecting (SA) and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
  • the attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information.
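A minimal sketch of the TT processor behaviour recited above (obtain the alarm from persist storage, confirm it is active, and promote critical or major alarms to SA with a TT status of initiated), assuming a simple dict-based persist storage; all names are illustrative.

```python
persist_storage = {
    "ALM-1": {"status": "active", "severity": "critical",
              "service_affecting": "NSA", "tt_status": "NA"},
}

def process_tt_request(alarm_id):
    alarm = persist_storage[alarm_id]          # obtain the alarm from persist storage
    if alarm["status"] != "active":
        return
    if alarm["severity"] in ("critical", "major"):
        alarm["service_affecting"] = "SA"      # change the service affecting attribute to SA
        alarm["tt_status"] = "initiated"       # mark the TT status as initiated

process_tt_request("ALM-1")
print(persist_storage["ALM-1"]["service_affecting"],
      persist_storage["ALM-1"]["tt_status"])   # SA initiated
```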
  • FIG. 1 illustrates a network architecture of a system for ticket management of planned events in a network, according to one or more embodiments of the present disclosure
  • FIG. 2 illustrates a block diagram of the system for ticket management of planned events in a network, according to various embodiments of the present system
  • FIG. 3 illustrates a block diagram of the system communicating with a node for ticket management of planned events in a network, according to various embodiments of the present system
  • FIG. 4 illustrates a system operation architecture for Trouble Ticket (TT) management during a planned event, according to one or more embodiments of the present disclosure
  • FIG. 5 illustrates a flow chart of a method of creation of a Trouble Ticket (TT) during a planned event, according to one or more embodiments of the present disclosure
  • FIG. 6 illustrates a flow chart of a method of Trouble Ticket (TT) termination for alarm clearance, according to one or more embodiments of the present disclosure
  • FIG. 7 illustrates a snapshot of a User Interface (UI) for ticket management, according to one or more embodiments of the present disclosure.
  • FIG. 8A illustrates a pie chart representation of PE category data
  • FIG. 8B illustrates a pie chart representation of PE service data.
  • the present disclosure introduces an outage flag to identify if a resolution is needed for the alarm raised during PE (Planned Event).
  • the flag indicates that an explicit resolution is not needed for such alarms because of a known ongoing Planned Event (PE).
  • the invention checks for major and critical alarms for the particular node for which the PE was planned and addresses the same.
  • whenever an alarm is raised during the time of a PE, no TT will be raised or assigned.
  • TT creation may be done manually along with profile-based automation.
  • FIG. 1 illustrates a network architecture of a system for ticket management of planned events in a network.
  • the network architecture comprises a plurality of network nodes 102-1, 102-2, ..., 102-n. At least one of the network nodes 102-1 through 102-n may be configured to connect to a server 105.
  • a network node whose alarms are handled is referred to as node 102.
  • the node 102 may comprise a memory such as a volatile memory (e.g., RAM), a nonvolatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory.
  • the memory might be configured or designed to store data.
  • the node 102 may connect with the server 105 for sending alarms.
  • the node 102 may be configured to connect with the server 105 through a communication network 110.
  • the communication network 110 may use one or more communication interfaces/protocols such as, for example, VoIP, 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, cellular standards such as CDMA, CDMA2000, WCDMA, Radio Frequency (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
  • the server 105 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
  • the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides content.
  • the server 105 may be communicably connected to a system 125, via the communication network 110.
  • the system 125 may be configured to access services subscribed by enterprises, and additional services as mentioned above.
  • the plurality of nodes 102 may include end devices and intermediary devices.
  • the end devices serve as originator of data or information flowing through the communication network 110.
  • the end devices may include workstations, laptops, desktop computers, printers, scanners, servers (file servers, web Servers), mobile phones, tablets, and smart phones.
  • the intermediary devices are configured to forward data from one point to another in a communication network 110.
  • the intermediary devices may include hubs, modems, switches, routers, bridges, repeaters, security firewalls, and wireless access points.
  • the communication network 110 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
  • the communication network 110 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
  • the communication network 110 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
  • the network may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
  • the system 125 is communicably coupled to the server 105 and each of the first node 102-1, the second node 102-2, and the third node 102-n via the communication network 110.
  • the system 125 is configured for ticket management of planned events.
  • the system 125 is adapted to be embedded within the server 105 or is embedded as an individual entity. However, for the purpose of description, the system 125 is described as an integral part of the server 105, without deviating from the scope of the present disclosure.
  • the system 125 may be generic in nature and may be integrated with any application including a Session Management Function (SMF), an Access and Mobility Management Function (AMF), a Business Telephony Application Server (BTAS), a Converged Telephony Application Server (CTAS), any SIP (Session Initiation Protocol) Application Server which interacts with core Internet Protocol Multimedia Subsystem (IMS) on the IMS Service Control (ISC) interface as defined by Third Generation Partnership Project (3GPP) to host a wide array of cloud telephony enterprise services, System Information Blocks (SIBs), and a Mobility Management Entity (MME).
  • Session Management Function is a control function that manages user sessions including establishment, modification and release of sessions, and allocates IP addresses for IP PDU sessions.
  • the SMF communicates indirectly with the UE through the AMF that relays session-related messages between the devices and the SMF.
  • Access and Mobility Management Function is a key component in 5G mobile networks, responsible for managing access to the network and handling mobility-related functions for user equipment (UE), such as smartphones, tablets, and IoT devices. AMF works closely with other network functions to facilitate seamless connectivity, mobility, and quality of service for mobile users.
  • SIP (Session Initiation Protocol) application server is a server-based system that facilitates the establishment, management, and termination of communication sessions using the SIP protocol.
  • SIP application servers play a central role in IP-based telecommunications networks, enabling a wide range of real-time communication services, including voice calls, video calls, instant messaging, presence, and multimedia conferencing.
  • Cloud telephony enterprise services refer to communication solutions delivered over the cloud that cater specifically to the needs of businesses and organizations. These services leverage cloud technology to provide scalable, flexible, and cost-effective communication solutions, including voice calls, messaging, collaboration tools, and contact center capabilities.
  • System Information Blocks (SIBs) are broadcast by a base station (eNodeB in LTE, NodeB in UMTS, or gNB in 5G).
  • SIBs contain network-related information necessary for UEs to access and operate within the network efficiently. These blocks are periodically transmitted over broadcast channels, allowing UEs to receive and decode them even when they are not actively engaged in communication.
  • the Mobility Management Entity is a key network element responsible for managing mobility-related functions for user equipment (UE) or mobile devices.
  • the MME is part of the Evolved Packet Core (EPC) network in LTE, serving as a control plane entity that handles signaling and control procedures for mobility management; in the 5G Core (5GC) network, its role is largely taken over by the AMF.
  • FIG. 2 illustrates a block diagram of the system 125 for ticket management of planned events in a network, according to one or more embodiments of the present disclosure.
  • the system 125 includes one or more processors 205, a memory 210, and an input/output interface unit 215.
  • the one or more processors 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
  • the system 125 includes the processor 205.
  • the system 125 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
  • the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210.
  • the memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
  • the memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
  • the input/output (I/O) interface unit 215 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like.
  • the I/O interface unit 215 facilitates communication of the system 125.
  • the I/O interface unit 215 provides a communication pathway for one or more components of the system 125. Examples of such components include, but are not limited to, the nodes 102, a database 220, and a distributed cache 225.
  • the database 220 is one of, but is not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not-only-SQL (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth.
  • the foregoing examples of the database 220 types are non-limiting and may not be mutually exclusive, e.g., a given database may fall under more than one of the listed types.
  • the distributed cache 225 pools the Random-Access Memory (RAM) of multiple networked computers into a single in-memory data store for use as a data cache to provide fast access to data.
  • the distributed cache 225 is essential for applications that need to scale across multiple servers or are distributed geographically.
  • the distributed cache 225 ensures that data is available close to where it's needed, even if the original data source is remote or under heavy load.
  • the processor 205 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205.
  • programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions.
  • the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205.
  • the system 125 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 125 and the processing resource.
  • the processor 205 may be implemented by electronic circuitry.
  • the processor 205 implements a fault processor 230 communicably coupled to each of the first node 102-1, the second node 102-2, and the third node 102-n.
  • the fault processor 230 is configured to receive one or more alarms related to operation of one or more nodes or services operational in a network.
  • the alarms are notifications or alerts generated by the node 102 for indicating issues or anomalies within a computer network.
  • the alarms are crucial for network administrators as they provide early warnings about potential problems, allowing them to take proactive measures to prevent downtime or service disruptions.
  • the alarms can indicate various issues including hardware failures, performance degradation, security breaches, configuration issues, service outages, and capacity problems.
  • when an alarm is raised, certain attributes are added to the alarm. Such attributes include outage, whose value is set as False; planned maintenance, whose default value is set as NA; and alarm type (critical, major, Service Affecting (SA), Non-Service Affecting (NSA)). A minimal model of these defaults is sketched below.
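The default attributes attached to a newly raised alarm, as described above, could be modelled as follows; the class and field names are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AlarmAttributes:
    outage: bool = False             # outage flag set to False at raise time
    planned_maintenance: str = "NA"  # default NA until a PE match is found
    alarm_type: str = "major"        # one of critical / major / SA / NSA

print(AlarmAttributes())
# AlarmAttributes(outage=False, planned_maintenance='NA', alarm_type='major')
```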
  • the fault processor 230 is further configured to identify an alarm, from the one or more alarms, raised during a planned event (PE).
  • the PE indicates a time window for performing maintenance on a node in the network.
  • the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE.
  • the operational task data comprises one of a Service Affecting Flag, PE identification (ID), impacted network element ID, and a unique key.
  • the fault processor 230 is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile.
  • the attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information.
  • the data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key.
  • the fault processor 230 is further configured to send a request, corresponding to the alarm, to the TT processor 235 for creation or termination of a TT.
  • the fault processor 230 sends the request for creation and termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or message stream.
  • the processor 205 further implements the TT processor 235 configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE.
  • the TT processor 235 is configured to obtain the alarm from the persist storage, determine that a status of the alarm is active, and change a service affecting attribute of the alarm to SA and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
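As noted above, the request to the TT processor may travel over an HTTP stream or a message stream. A hypothetical HTTP sender is sketched below; the endpoint URL and JSON body are illustrative assumptions and are not part of the disclosure.

```python
import requests

def send_tt_request(action, alarm_id, pe_id):
    """Send a TT create/terminate request to the TT processor over HTTP."""
    payload = {"action": action, "alarm_id": alarm_id, "pe_id": pe_id}
    resp = requests.post("http://tt-processor.example/tt", json=payload, timeout=5)
    return resp.ok   # the TT processor addresses the request after the PE ends

# send_tt_request("create", "ALM-1", "CH-001")
```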
  • Referring to FIG. 3, which illustrates a block diagram of the system 125 communicating with a first node 102-1 for ticket management of planned events in a network, a preferred embodiment of the system 125 is described. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first node 102-1 for the purpose of description and illustration and should not be construed as limiting the scope of the present disclosure.
  • the first node 102-1 includes one or more primary processors 305 communicably coupled to the processor 205 of the system 125.
  • the one or more primary processors 305 are coupled with a memory unit 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first node 102-1 to provide an alarm corresponding to an event.
  • the first node 102-1 further includes a kernel 315 which is a core component serving as the primary interface between hardware components of the first node 102-1 and the plurality of services at the database 220.
  • the kernel 315 is configured to provide the plurality of services on the first node 102-1 to resources available in the communication network 110.
  • the resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
  • the fault processor 230 of the processor 205 is communicably connected to the kernel 315 of the first node 102-1.
  • the fault processor 230 is configured to receive one or more alarms related to operation of one or more nodes or services operational in a network.
  • the alarms are notifications or alerts generated by the first node 102-1 for indicating issues or anomalies within a computer network.
  • the alarms are crucial for network administrators as they provide early warnings about potential problems, allowing them to take proactive measures to prevent downtime or service disruptions.
  • the alarms can indicate various issues including hardware failures, performance degradation, security breaches, configuration issues, service outages, and capacity problems.
  • the fault processor 230 is further configured to identify an alarm, from the one or more alarms, raised during a planned event (PE).
  • the PE indicates a time window for performing maintenance on a node in the network.
  • the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE.
  • the operational task data comprises one of a Service Affecting Flag, PE identification (ID), impacted network element ID, and a unique key.
  • the fault processor 230 is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile.
  • the attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information.
  • the data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key.
  • the fault processor 230 is further configured to send a request, corresponding to the alarm, to the TT processor 235 for creation or termination of a TT.
  • the fault processor 230 sends the request for creation and termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or message stream.
  • the processor 205 further implements the TT processor 235 configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE.
  • the TT processor 235 is configured to obtain the alarm from the persist storage, determine that a status of the alarm is active, and change a service affecting attribute of the alarm to SA and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
  • FIG. 4 illustrates a system operation architecture for Trouble Ticket (TT) management during a planned event, according to one or more embodiments of the present disclosure.
  • a collector component 405 collects stream data from network elements, parses and transforms the stream data into alarms of standardized format, and pushes the alarms into an alarm stream 408. The alarms can be of two types, raise alarm and clear alarm.
  • An FM master module 410 consumes the alarms from the alarm stream 408 and stores them into the distributed cache 225. Upon identifying that an alarm is already stored, the FM master module 410 updates occurrence count and timestamp array of the alarm in the distributed cache 225.
  • the raise FM module 415 fetches the alarms from the distributed cache 225 based on their unique identifiers and performs various operations on the alarms, such as planned event processing, AI-based correlation to identify patterns or related events, and trouble ticketing to initiate incident management processes.
  • the raise FM module 415 also updates metadata associated with the alarms, enriches the alarms with additional information or context, and inserts them into the database 220.
  • the clear FM module 420 retrieves clear alarms corresponding to the unique alarm identifiers from the distributed cache 225 and checks the database 220 for presence of associated raise alarms.
  • the clear FM module 420 deletes the raise alarms from an active section when the associated raise alarms are identified to be present and streams the clear alarms for retrying when the associated raise alarms are identified to be absent. After deleting the raise alarms from the active section, the clear FM module 420 adds clearance metadata to the alarms, and stores them in an archived section of the database 220.
  • a retry FM module 425 checks the database 220 for presence of the raise alarms corresponding to retry alarm data and deletes the raise alarms from the active section when identified to be present. If the raise alarms are not found, the retry FM module 425 increments the retry count and reproduces the data into the retry stream for subsequent retries.
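A condensed sketch of the clear/retry handling described above, using plain dicts and lists in place of the database 220 sections and the retry stream; all names are illustrative.

```python
active_db = {"ALM-1": {"id": "ALM-1", "severity": "major"}}   # active section
archived_db = {}                                              # archived section
retry_stream = []                                             # retry stream

def handle_clear(clear_alarm):
    raise_alarm = active_db.pop(clear_alarm["id"], None)
    if raise_alarm is None:                                   # no matching raise alarm yet
        clear_alarm["retry_count"] = clear_alarm.get("retry_count", 0) + 1
        retry_stream.append(clear_alarm)                      # picked up by the retry FM module
        return
    raise_alarm["cleared_at"] = clear_alarm["timestamp"]      # clearance metadata
    archived_db[raise_alarm["id"]] = raise_alarm              # archive the cleared alarm

handle_clear({"id": "ALM-1", "timestamp": "2024-07-01T03:00:00Z"})
handle_clear({"id": "ALM-2", "timestamp": "2024-07-01T03:01:00Z"})
print(list(archived_db), len(retry_stream))  # ['ALM-1'] 1
```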
  • the enrichment engine 404 performs enrichment of the alarms. Enrichment means appending new attributes into the alarms, and may be of two types, physical enrichment and logical enrichment.
  • the correlation engine 402 classifies the alarms into parent alarms or child alarms associated with the parent alarms.
  • the correlation engine 402 classifies the alarms based on a policy configured for a network in which a node raising the alarms is present.
  • the correlation engine 402 may identify association between the parent alarms and the child alarms based on a point of interaction (POI) relationship, or intra, inter, cross domain relationships between the alarms.
  • the correlation engine 402 schedules a task with a specified time interval for handling the child alarms post identification of the parent alarms.
  • the Trouble Ticket (TT) processor 235 generates a trouble ticket with identities of the parent alarms and identities of the child alarms associated with the parent alarms.
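Purely as an illustration of the parent/child grouping recited above, a toy correlation policy might pick the critical alarm as the parent and raise one TT carrying the parent and child identities; the real policy is configurable and is not limited to this rule.

```python
def correlate(alarms):
    """Toy policy: the critical alarm becomes the parent; the rest are children."""
    parent = next(a for a in alarms if a["severity"] == "critical")
    children = [a["id"] for a in alarms if a["id"] != parent["id"]]
    return {"tt_parent": parent["id"], "tt_children": children}   # one TT for the group

alarms = [
    {"id": "ALM-10", "severity": "critical"},
    {"id": "ALM-11", "severity": "minor"},
    {"id": "ALM-12", "severity": "major"},
]
print(correlate(alarms))
# {'tt_parent': 'ALM-10', 'tt_children': ['ALM-11', 'ALM-12']}
```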
  • FIG. 5 illustrates a flow chart of a method 500 of creation of a Trouble Ticket (TT) during a planned event, according to one or more embodiments of the present disclosure.
  • the method 500 includes the step of receiving an alarm corresponding to a network event, by a fault processor (230).
  • the alarm may indicate a hardware, software, or network issue associated with a node or a node instance present in a network.
  • the method 500 includes the step of determining, by the fault processor (230), whether the alarm is received during a planned event. Receipt of the alarm during the planned event is determined on the basis of operational task data, start date and time, and end date and time of the planned event.
  • the operational task includes data of node identities (IDs) used for identification of nodes for which the alarm is raised and nodes which were expected to go down during the planned event.
  • the fault processor (230) updates attributes of the alarm using data present in a planned event profile.
  • the attributes of the alarm that are updated may include alarm Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, outage, and additional text, at step 515.
  • the method 500 includes the step of determining, by the fault processor (230), whether the alarm type is SA or if the alarm is automatic TT eligible.
  • An SA alarm refers to an alarm that directly impacts delivery or quality of a service provided to users of a network. These alarms are critical because they indicate issues that lead to service disruptions or degradation.
  • Automatic TT eligible refers to criteria where a trouble ticket is automatically raised without checking or fulfilment of any condition.
  • the method 500 includes the step of sending a request, by the fault processor (230), to a TT processor (235) for TT creation when it is determined, at step 510, that the alarm is raised during the planned event or it is determined, at step 520, that the alarm is SA or auto TT eligible.
  • the request for TT creation is sent using Hypertext Transfer Protocol (HTTP) or as a message stream. Sending of such requests to the TT Processor (235) can be configured by a user. The user may define a manner or protocol using which the request for TT creation can be sent.
  • the method 500 includes the step of checking presence of the alarm in the persist storage, checking whether auto TT is enabled, and checking, from the database, whether the TT number is not applicable or failed. When either of these conditions is satisfied, the method proceeds to step 540.
  • the method 500 includes the step of updating TT status to transit. Further, a timer task is initiated.
  • the method 500 includes the step of checking presence of alarm data in the persist storage based on incident ID, and checking status of the alarm and the TT status.
  • the method proceeds to step 550.
  • the method 500 includes the step of determining whether severity of the alarm is critical or major. When the severity is determined to be critical or major, service affecting value is changed from NSA to SA, at step 555. Alternatively, when the severity is not determined to be critical or major, the method proceeds to step 560.
  • the method 500 includes the step of updating TT status to initiated and sending a request for TT creation to the TT processor (235).
  • the method 500 includes the step of determining whether response of the TT processor (235) is OK and if status is successful. If not, the TT status is changed to failed in the persist storage, at step 570. If yes, the method proceeds to step 575.
  • the method 500 includes the step of sending the alarm for final TT creation and updating the alarm in the persist storage. Subsequently, at step 580, it is determined whether step 575 was performed successfully. If not, TT creation is retried, at step 585. If yes, the TT status and TT number are updated, at step 590.
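The TT-creation flow of method 500 can be condensed into the following hypothetical sketch; the helper names and the simplified step mapping in the comments are assumptions made for illustration.

```python
def in_planned_event(alarm, pe):                        # step 510
    return pe["start"] <= alarm["raise_time"] <= pe["end"]

def create_tt(alarm):                                   # stands in for steps 565-575
    return True                                         # pretend the TT processor responded OK

def handle_raise(alarm, pe, persist):
    if in_planned_event(alarm, pe):
        alarm["planned_maintenance"] = pe["pe_id"]      # step 515: update attributes from PE profile
    elif alarm["type"] != "SA" and not alarm["auto_tt"]:
        return                                          # step 520: neither SA nor auto TT eligible
    alarm["tt_status"] = "transit"                      # step 540
    if alarm["severity"] in ("critical", "major"):
        alarm["type"] = "SA"                            # steps 550/555: NSA changed to SA
    alarm["tt_status"] = "initiated"                    # step 560
    alarm["tt_status"] = "created" if create_tt(alarm) else "failed"  # steps 565-590
    persist[alarm["id"]] = alarm

persist = {}
handle_raise({"id": "ALM-1", "raise_time": 5, "type": "NSA",
              "auto_tt": False, "severity": "critical"},
             {"pe_id": "CH-001", "start": 0, "end": 10}, persist)
print(persist["ALM-1"]["tt_status"])  # created
```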
  • the invention further includes performing an audit at outage end and at application bootstrap.
  • the timer will trigger to audit the alarms raised during the planned event.
  • a search is performed for the operational task values of PE data in Network Element Identification (NE ID)/ Change Identification (CI Name) of alarms where a TT has not been created, and alarms are still active.
  • the search is only performed for the planned event duration i.e. raise timestamp is in between planned event outage start and planned event outage end.
  • auto TT is raised.
  • all the records are moved from an active planned event to a history planned event table.
  • the timer is associated on outage end data whenever there is any record in the active table, and subsequently the same steps are performed.
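A sketch of the outage-end audit described above: still-active alarms raised inside the PE window that never got a TT receive an auto TT, and the PE record then moves from the active table to the history table. All structures and names are assumed for illustration.

```python
def audit_pe(pe, alarms, active_pe, history_pe):
    for alarm in alarms:
        untracked = alarm["tt_number"] in ("NA", "failed")   # TT never created (or failed)
        in_window = pe["outage_start"] <= alarm["raise_time"] <= pe["outage_end"]
        if (alarm["active"] and untracked and in_window
                and alarm["ne_id"] in pe["operational_task"]):
            alarm["tt_number"] = "AUTO-TT"                   # auto TT is raised
    active_pe.remove(pe)                                     # move record: active -> history table
    history_pe.append(pe)

pe = {"pe_id": "CH-001", "outage_start": 0, "outage_end": 10,
      "operational_task": {"NE-42"}}
alarms = [{"ne_id": "NE-42", "active": True, "tt_number": "NA", "raise_time": 5}]
active_pe, history_pe = [pe], []
audit_pe(pe, alarms, active_pe, history_pe)
print(alarms[0]["tt_number"], len(history_pe))  # AUTO-TT 1
```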
  • the Application will fetch all the records from the persist storage whose planned event creation time is greater than the scheduler's last run-time and whose SMID starts with 'C'. For each record, the value of the operation task will be checked for Single CI / Bulk CI. For the Single CI case, the data is inserted as-is into an NMS. For the Bulk CI case, the Bulk CI file is fetched from the TT processor (235) using a Representational State Transfer (REST) web service. Subsequently, all attachments are fetched using a GET URL, the CIs related to the 'Bulk CI' file are collected from the output, and the attachment 'Bulkci.csv' is downloaded by including the respective CIs in the URL. A timer task is associated which will run on the planned event outage end date.
  • FIG. 6 illustrates a flow chart of a method 600 of Trouble Ticket (TT) termination for alarm clearance, according to one or more embodiments of the present disclosure.
  • the method 600 includes the step of receiving an alarm by a fault processor (230) for clearance.
  • the method 600 includes the step of determining, by the fault processor (230), whether, in the alarm data, the TT number is not NA, whether the alarm is active with the same TT number, and whether the TT status is true or the TT number is equal to initiated. When these conditions are identified to be true, the alarm is sent to the TT processor (235) for TT termination, on the basis of configuration, using HTTP or a message stream.
  • the method 600 includes the step of receiving the alarm by the TT processor (235) for TT termination.
  • the method 600 includes the step of fetching alarm data from a persist storage and determining if TT number is initiated. If the TT number is determined to be not initiated, TT creation is initiated and a predetermined waiting period is used for TT termination, at step 625. Alternatively, if the TT number is determined to be initiated, a TT termination request is sent to the TT processor (235), at step 630.
  • the method 600 includes the step of determining if response of the TT processor (235) is OK and if the status is successful. If these conditions are not found to be true, the alarm is sent for retrying of TT termination, at step 640. Alternatively, if these conditions are found to be true, the alarm is sent for update to terminate and the TT status is closed as resolved in the persist storage, at step 645.
  • the method 600 includes the step of updating the TT status and other parameters in the persist storage, based on different operations including CREATE, TERMINATE, ACKNOWLEDGE FROM UI, UNACKNOWLEDGE FROM UI, etc.
  • the method 600 includes the step of fetching TT document from the persist storage and sending a response to the fault processor (230).
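Similarly, the TT-termination flow of method 600 may be condensed as below; terminate_tt stands in for the call to the TT processor (235), and the step mapping in the comments is an approximation made for illustration.

```python
def terminate_tt(alarm):                      # stands in for steps 630-635
    return True                               # pretend the TT processor responded OK

def handle_clear_tt(alarm, persist):
    if alarm["tt_number"] == "NA" or not alarm["active"]:   # step 610: eligibility checks
        return
    if alarm["tt_status"] != "initiated":
        return                                # step 625: wait for TT creation, retry later
    if terminate_tt(alarm):                   # step 630
        alarm["tt_status"] = "resolved"       # step 645: closed as resolved in persist storage
    else:
        alarm["tt_status"] = "retry"          # step 640: retry TT termination
    persist[alarm["id"]] = alarm

persist = {}
handle_clear_tt({"id": "ALM-1", "tt_number": "TT-7", "active": True,
                 "tt_status": "initiated"}, persist)
print(persist["ALM-1"]["tt_status"])  # resolved
```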
  • FIG. 7 illustrates a snapshot of a User Interface (UI) for ticket management, according to one or more embodiments of the present disclosure.
  • the UI displays PE active/history data, service type, SMID with PE start and end time, category of PE, status, assignment group, and person responsible for the same. Filtering of data in a column-wise manner is also allowed, for example, filtering based on service name or category name.
  • FIG. 8A illustrates a pie chart representation of PE category data
  • FIG. 8B illustrates a pie chart representation of PE service data.
  • a single entry of dummy category change value, a single entry of network standard change, and eleven entries of network NORMAL change can be seen in pie chart shown in FIG. 8A, corresponding to the entries represented in FIG. 7.
  • a single entry of dummy category network value, four entries of MPLS network, and eight entries of LTE network can be seen in pie chart shown in FIG. 8B, corresponding to the entries represented in FIG. 7.
  • the present invention further discloses a network equipment comprising one or more processors coupled with a memory.
  • the memory stores instructions which when executed by the one or more processors causes the network equipment to transmit, to the system 125, one or more alarms related to operation of one or more nodes or services operational in a network.
  • the system 125 creates or terminates a trouble ticket, corresponding to the alarm, within a predefined time period after end of a planned event.
  • the present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions.
  • the computer-readable instructions are executed by the processor 205.
  • the processor 205 is configured to receive one or more alarms related to operation of one or more nodes or services operational in a network.
  • the processor 205 is further configured to identify an alarm from the one or more alarms raised during a planned event (PE).
  • the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE.
  • the PE indicates a time window for performing maintenance on a node in the network.
  • the processor 205 is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile.
  • the processor 205 is further configured to send a request, corresponding to the alarm, to a TT processor for creation or termination of a TT.
  • the processor 205 is further configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE.
  • the above described technique (of ticket management) of the present disclosure provides multiple advantages, including efficient management of alarms raised during a known outage, i.e., a planned event.
  • the invention reduces manual effort by helping identify critical and major alarms which need attention even when raised during planned events.
  • the present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize some of the advantageous features.
  • the listed advantages are to be read in a non-limiting manner.
  • a server may include or comprise, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
  • the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides content.
  • a network may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
  • the network may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet- switched network, a circuit- switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
  • a wireless device or a user equipment may include, but are not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
  • the UEs may communicate with the system via set of executable instructions residing on any operating system.
  • the UEs may include, but are not limited to, any electrical, electronic, electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the computing device may include one or more inbuilt or externally coupled accessories including, but not limited to, a visual aid device such as camera, audio aid, a microphone, a keyboard, input devices for receiving input from a user such as touch pad, touch enabled screen, electronic pen and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used.
  • a system may include one or more processors coupled with a memory, wherein the memory may store instructions which when executed by the one or more processors may cause the system to perform ticket management of planned events in a network.
  • the system may include one or more processor(s).
  • the one or more processor(s) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
  • the one or more processor(s) may be configured to fetch and execute computer-readable instructions stored in a memory of the system.
  • the memory may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service.
  • the memory may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
  • the system may include an interface(s).
  • the interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like.
  • the interface(s) may facilitate communication for the system.
  • the interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database.
  • the processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s). In examples described herein, such combinations of hardware and programming may be implemented in several different ways.
  • the programming for the processing engine(s) may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) may comprise a processing resource (for example, one or more processors), to execute such instructions.
  • the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s).
  • the system may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource.
  • the processing engine(s) may be implemented by electronic circuitry.
  • the database may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor or the processing engines.
  • a computer system may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor.
  • the communication port(s) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports.
  • the communication port(s) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.
  • the main memory may be random access memory (RAM), or any other dynamic storage device commonly known in the art.
  • the read-only memory may be any static storage device(s) including, but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor.
  • the mass storage device may be any current or future mass storage solution, which may be used to store information and/or instructions.
  • the bus communicatively couples the processor with the other memory, storage, and communication blocks.
  • the bus can be, e.g., a Peripheral Component Interconnect (PCI) or PCI Extended (PCI-X) bus, a Small Computer System Interface (SCSI) bus, a universal serial bus (USB), or the like.
  • operator and administrative interfaces, e.g., a display, a keyboard, and a cursor control device, may also be coupled to the bus to support direct operator interaction with the computer system.
  • Other operator and administrative interfaces may be provided through network connections connected through the communication port(s).
  • One or more processors 205 are included in the central processing unit 205.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A system and a method of ticket management of planned events is described. The method includes receiving alarms related to operation of nodes or services operational in a network. An alarm raised during a planned event (PE) is identified from the alarms. The alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE, the PE indicating a time window for performing maintenance on a node in the network. Attributes of the alarm raised during the PE are updated using data present in a PE profile. A request corresponding to the alarm is sent to a TT processor (235) for creation or termination of a TT, and the request is addressed within a predefined time period after end of the PE.

Description

SYSTEM AND METHOD FOR TICKET MANAGEMENT OF PLANNED EVENTS IN A NETWORK
FIELD OF THE INVENTION
[0001] The present invention generally relates to alarm management in a network, and in particular, to a system and a method of handling and managing alarms with regard to Planned Events (PE) in a network.
BACKGROUND OF THE INVENTION
[0002] Network Alarm Monitoring Systems monitor equipment for notable events - otherwise referred to as "alarms". An alarm may be indicative of a hardware, software, or network related issue. In one scenario, an alarm may indicate that a network equipment is failing and a network outage is about to occur. Typically, an Alarm Monitoring System should organize and interpret the severity of incoming alarms in real-time, which will help to coordinate maintenance and repair efforts throughout the network. It is an essential part of any Network Management System (NMS).
[0003] Alarms can be broadly divided into two types: 'critical alarms' and 'standard alarms'. Each has its own advantages but also comes with an associated set of challenges. Standard alarms are typically low-priority in nature and are often expected or even scheduled. A critical alarm draws attention to a serious emergency. These can range from things that simply impact planned production schedules (such as a broken machine) to incidents that threaten the physical safety of staff (such as a fire). Critical alarms are often linked to serious situations with multiple factors that need to be taken into account, often creating confusion amongst responders and managers alike.
[0004] Good and effective alarm management systems ensure that organisations can enjoy substantial long term cost savings due to less down time during significant incidents, whilst also allowing control room staff to concentrate on the day to day running of a facility. It also means that critical alarms can be picked up and managed in a more efficient manner, depending upon the priority level.
[0005] By automating the process by which alarms are dealt with, we can eliminate the issue of conflict when it comes to critical vs standard alarms. A critical alarm management system will both sift through notifications for high-priority and high-risk alarms and ensure that they get a swift and comprehensive response.
[0006] In the Network Management System, there are several possible events for raising alarms at any particular time. Some of the alarms raised may need external resolution, so these alarms have to be assigned to the concerned resources. This is done by assigning a Trouble Ticket (TT) and managing and tracking the progress of the TT. Ultimately, the objective is that the issue should be resolved and the specific TT is terminated once the issue is resolved. This lifecycle between creation and termination of TTs is called TT management. TT creation can also be done manually along with profile-based automation.
[0007] It is common in network management to schedule Planned Events (PE). For example, a Planned Event (PE) may support the maintenance activity of a node. Whenever any planned maintenance is going on at a node, the time window as per the schedule is communicated to all the teams concerned. That window is referred to as a Planned Event (PE). During the PE, other nodes of the cluster may try to communicate with the node and, upon failing, may create alarms. Even in systems where no tickets are assigned based upon alarms raised during a PE, some automated tickets may still be raised for nodes that are not aware of the planned event. These alarms may pile up and clog the system by raising TTs, which is not desired.
[0008] At the same time, not all alarms raised during the outage can be ignored; some alarms may be raised, such as critical alarms, which in normal times would need TT assignment.
[0009] Therefore, there is a need for systems and methods for managing alarms more efficiently, especially with regard to planned events. Alarms should be managed based upon priority, and false alarms should be identified so that false tickets are not raised and the system does not get clogged with piling false tickets.
SUMMARY OF THE INVENTION
[0010] One or more embodiments of the present disclosure provide a system and method of ticket management of planned events in a network.
[0011] In one aspect of the present invention, a system for ticket management of planned events in a network is disclosed. The system includes a fault processor configured to receive one or more alarms related to operation of one or more nodes or services operational in a network. The fault processor is further configured to identify an alarm, from the one or more alarms, raised during a planned event (PE). The alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE. The PE indicates a time window for performing maintenance on a node in the network. The fault processor is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile. The fault processor is further configured to send a request, corresponding to the alarm, to a TT processor for creation or termination of a TT. The system further includes the TT processor configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE.
[0012] In one aspect, the data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key. The operational task data comprises one of a Service Affecting Flag, PE identification (ID), impacted network element ID, and a unique key. The fault processor sends the request for creation and termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or message stream. The attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information.
[0013] In another aspect of the present invention, a method of ticket management of planned events is described. The method includes receiving, by a fault processor, one or more alarms related to operation of one or more nodes or services operational in a network. The method further includes identifying, by the fault processor, an alarm, from the one or more alarms, raised during a planned event (PE). The alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE. The method further includes updating, by the fault processor, attributes of the alarm raised during the PE, using data present in a PE profile. The method further includes sending, by the fault processor, a request to a TT processor for creation or termination of a TT corresponding to the alarm. The method further includes addressing, by the TT processor, the request for creation or termination of the TT, within a predefined time period after end of the PE.
[0014] In one aspect, the data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key. The operational task data includes identities of nodes expected to become inactive during the PE. The PE indicates a time window for performing maintenance on a node in the network. The method further comprises creating, by the TT processor, the TT based on a user input provided through a user interface (UI). The request for creation and termination of the TT is sent to the TT processor using a Hypertext Transfer Protocol (HTTP) stream or message stream. The TT processor stores and retrieves the alarm from a persist storage for creation and termination of the TT. The TT processor is further configured to obtain the alarm from the persist storage, determine that a status of the alarm is active, and change a service affecting attribute of the alarm to Service Affecting (SA) and a TT status of the alarm to initiated, when severity of the alarm is critical or major. The attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 illustrates a network architecture of a system for ticket management of planned events in a network, according to one or more embodiments of the present disclosure;
[0018] FIG. 2 illustrates a block diagram of the system for ticket management of planned events in a network, according to various embodiments of the present system;
[0019] FIG. 3 illustrates a block diagram of the system communicating with a node for ticket management of planned events in a network, according to various embodiments of the present system;
[0020] FIG. 4 illustrates a system operation architecture for Trouble Ticket (TT) management during a planned event, according to one or more embodiments of the present disclosure;
[0021] FIG. 5 illustrates a flow chart of a method of creation of a Trouble Ticket (TT) during a planned event, according to one or more embodiments of the present disclosure;
[0022] FIG. 6 illustrates a flow chart of a method of Trouble Ticket (TT) termination for alarm clearance, according to one or more embodiments of the present disclosure;
[0023] FIG. 7 illustrates a snapshot of a User Interface (UI) for ticket management, according to one or more embodiments of the present disclosure; and
[0024] FIG. 8A illustrates a pie chart representation of PE category data and FIG. 8B illustrates a pie chart representation of PE service data.
[0025] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0026] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0027] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0028] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0029] The present disclosure introduces an outage flag to identify if a resolution is needed for the alarm raised during PE (Planned Event). The flag indicates that an explicit resolution is not needed for such alarms because of a known ongoing Planned Event (PE). After the PE outage is over, the invention checks for major and critical alarms for the particular node for which the PE was planned and addresses the same.
[0030] In accordance with an embodiment, whenever an alarm is raised during the time of PE, no TT will be raised / assigned. For example, when a Planned Event (PE) is scheduled for maintenance activity of a node, TT creation may be done manually along with profile-based automation.
[0031] FIG. 1 illustrates a network architecture of a system for ticket management of planned events in a network. The network architecture comprises a plurality of network nodes 102-1, 102-2, ..., 102-n. At least one of the network nodes 102-1 through 102-n may be configured to connect to a server 105. For ease of disclosure, a network node whose alarms are handled is referred to as node 102.
[0032] The node 102 may comprise a memory such as a volatile memory (e.g., RAM), a nonvolatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory. In one implementation, the memory might be configured or designed to store data. The node 102 may connect with the server 105 for sending alarms. The node 102 may be configured to connect with the server 105 through a communication network 110. The communication network 110 may use one or more communication interfaces/protocols such as, for example, VoIP, 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as CDMA, CDMA2000, WCDMA, Radio Frequency (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0033] The server 105 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides content.
[0034] Further, the server 105 may be communicably connected to a system 125, via the communication network 110. The system 125 may be configured to access services subscribed by enterprises, and additional services as mentioned above.
[0035] A person skilled in the art will appreciate that the plurality of nodes 102 may include end devices and intermediary devices. The end devices serve as originators of data or information flowing through the communication network 110. For example, the end devices may include workstations, laptops, desktop computers, printers, scanners, servers (file servers, web servers), mobile phones, tablets, and smart phones. The intermediary devices are configured to forward data from one point to another in a communication network 110. For example, the intermediary devices may include hubs, modems, switches, routers, bridges, repeaters, security firewalls, and wireless access points.
[0036] The communication network 110 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network 110 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0037] The communication network 110 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0038] The system 125 is communicably coupled to the server 105 and each of the first node 102-1, the second node 102-2, and the third node 102-n via the communication network 110. The system 125 is configured for ticket management of planned events. The system 125 is adapted to be embedded within the server 105 or is embedded as an individual entity. However, for the purpose of description, the system 125 is described as an integral part of the server 105, without deviating from the scope of the present disclosure.
[0039] In various embodiments, the system 125 may be generic in nature and may be integrated with any application including a Session Management Function (SMF), an Access and Mobility Management Function (AMF), a Business Telephony Application Server (BTAS), a Converged Telephony Application Server (CTAS), any SIP (Session Initiation Protocol) Application Server which interacts with the core Internet Protocol Multimedia Subsystem (IMS) on the IMS Service Control (ISC) interface as defined by the Third Generation Partnership Project (3GPP) to host a wide array of cloud telephony enterprise services, a System Information Block (SIB) function, and a Mobility Management Entity (MME).
[0040] Session Management Function (SMF) is a control function that manages user sessions including establishment, modification and release of sessions, and allocates IP addresses for IP PDU sessions. The SMF communicates indirectly with the UE through the AMF that relays session-related messages between the devices and the SMF.
[0041] Access and Mobility Management Function (AMF) is a key component in 5G mobile networks, responsible for managing access to the network and handling mobility-related functions for user equipment (UE), such as smartphones, tablets, and IoT devices. AMF works closely with other network functions to facilitate seamless connectivity, mobility, and quality of service for mobile users.
[0042] Business Telephony Application Server (BTAS) is a server-based system that provides telephony services and applications for businesses. It serves as a central platform for managing and delivering various voice communication services, such as voice calls, voicemail, conferencing, and interactive voice response (IVR) systems.
[0043] Converged Telephony Application Server (CTAS) is a server-based system that integrates various telephony and communication services into a single platform, enabling businesses to streamline their communication infrastructure and offer a wide range of communication features. CTAS combines traditional telephony services with advanced IP-based communication capabilities to provide a unified and cohesive communication experience.
[0044] SIP (Session Initiation Protocol) application server is a server-based system that facilitates the establishment, management, and termination of communication sessions using the SIP protocol. SIP application servers play a central role in IP-based telecommunications networks, enabling a wide range of real-time communication services, including voice calls, video calls, instant messaging, presence, and multimedia conferencing.
[0045] Internet Protocol Multimedia Subsystem (IMS) is a standardized architecture that enables the delivery of multimedia communication services over IP networks, including voice, video, messaging, and presence services. IMS is designed to provide a framework for delivering real-time communication services in a flexible, scalable, and interoperable manner.
[0046] Cloud telephony enterprise services refer to communication solutions delivered over the cloud that cater specifically to the needs of businesses and organizations. These services leverage cloud technology to provide scalable, flexible, and cost-effective communication solutions, including voice calls, messaging, collaboration tools, and contact center capabilities.
[0047] System Information Blocks (SIBs) are messages broadcast by a base station (eNodeB in LTE, NodeB in UMTS, or gNB in 5G) to provide essential information to mobile devices (UEs) in a cellular network. SIBs contain network-related information necessary for UEs to access and operate within the network efficiently. These blocks are periodically transmitted over broadcast channels, allowing UEs to receive and decode them even when they are not actively engaged in communication.
[0048] In the context of mobile networks, specifically in the LTE (Long-Term Evolution) and 5G architectures, the Mobility Management Entity (MME) is a key network element responsible for managing mobility-related functions for user equipment (UE) or mobile devices. The MME is part of the Evolved Packet Core (EPC) network in LTE and the 5G Core (5GC) network in 5G, serving as a control plane entity that handles signaling and control procedures for mobility management.
[0049] Operational and construction features of the system 125 will be explained in detail successively with respect to different figures. FIG. 2 illustrates a block diagram of the system 125 for ticket management of planned events in a network, according to one or more embodiments of the present disclosure.
[0050] As per the illustrated embodiment, the system 125 includes one or more processors 205, a memory 210, and an input/output interface unit 215. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 125 includes the processor 205. However, it is to be noted that the system 125 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0051] In an embodiment, the input/output (I/O) interface unit 215 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. The I/O interface unit 215 facilitates communication of the system 125. In one embodiment, the I/O interface unit 215 provides a communication pathway for one or more components of the system 125. Examples of such components include, but are not limited to, the nodes 102, a database 220, and a distributed cache 225.
[0052] The database 220 is one of, but is not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of the database 220 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source, etc.
[0053] The distributed cache 225 is a pool of Random-Access Memory (RAM) of multiple networked computers into a single in-memory data store for use as a data cache to provide fast access to data. The distributed cache 225 is essential for applications that need to scale across multiple servers or are distributed geographically. The distributed cache 225 ensures that data is available close to where it's needed, even if the original data source is remote or under heavy load.
[0054] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 125 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 125 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0055] For the system 125 to manage tickets of planned events in a network, the processor 205 implements a fault processor 230 communicably coupled to each of the first node 102-1, the second node 102-2, and the third node 102-n. The fault processor 230 is configured to receive one or more alarms related to operation of one or more nodes or services operational in a network. The alarms are notifications or alerts generated by the node 102 indicating issues or anomalies within a computer network. The alarms are crucial for network administrators as they provide early warnings about potential problems, allowing them to take proactive measures to prevent downtime or service disruptions. The alarms can indicate various issues including hardware failures, performance degradation, security breaches, configuration issues, service outages, and capacity problems.
When an alarm is raised, certain attributes are added to the alarm. Such attributes include an outage flag whose value is set to False, a planned maintenance attribute whose default value is set to NA, and an alarm type (critical, major, Service Affecting (SA), Non-Service Affecting (NSA)).
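By way of illustration only, the default attributes described above may be sketched as the following data structure; the field names are hypothetical and not taken from the disclosure:

```python
# Minimal sketch of the default attributes attached to a newly raised alarm,
# as described above. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alarm:
    alarm_id: str
    node_id: str
    severity: str                    # e.g. "critical", "major", "minor"
    alarm_type: str = "NSA"          # Service Affecting (SA) or Non-Service Affecting (NSA)
    outage: bool = False             # default False; set True for alarms raised during a PE
    planned_maintenance: str = "NA"  # default NA until a matching PE profile is applied
    raise_timestamps: List[float] = field(default_factory=list)
```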
[0056] The fault processor 230 is further configured to identify an alarm, from the one or more alarms, raised during a planned event (PE). The PE indicates a time window for performing maintenance on a node in the network. The alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE. The operational task data comprises one of a Service Affecting Flag, PE identification (ID), impacted network element ID, and a unique key. The fault processor 230 is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile. The attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information. The data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key. The fault processor 230 is further configured to send a request, corresponding to the alarm, to a TT processor 235 for creation or termination of a TT. The fault processor 230 sends the request for creation and termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or message stream.
[0057] The processor 205 further implements the TT processor 235 configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE. For creation of the TT, the TT processor 235 is configured to obtain the alarm from the persist storage, determine that a status of the alarm is active, and change a service affecting attribute of the alarm to SA and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
[0058] Referring to FIG. 3 illustrating a block diagram of the system 125 communicating with a first node 102-1 for ticket management of planned events in a network, a preferred embodiment of the system 125 is described. It is to be noted that the embodiment with respect to FIG. 3 is explained with respect to the first node 102-1 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0059] The first node 102-1 includes one or more primary processors 305 communicably coupled to the processor 205 of the system 125. The one or more primary processors 305 are coupled with a memory unit 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first node 102-1 to provide an alarm corresponding to an event. The first node 102-1 further includes a kernel 315 which is a core component serving as the primary interface between hardware components of the first node 102-1 and the plurality of services at the database 220. The kernel 315 is configured to provide the plurality of services on the first node 102-1 to resources available in the communication network 110. The resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0060] In the preferred embodiment, the fault processor 230 of the processor 205 is communicably connected to the kernel 315 of the first node 102-1. The fault processor 230 is configured to receive one or more alarms related to operation of one or more nodes or services operational in a network. The alarms are notifications or alerts generated by the first node 102-1 indicating issues or anomalies within a computer network. The alarms are crucial for network administrators as they provide early warnings about potential problems, allowing them to take proactive measures to prevent downtime or service disruptions. The alarms can indicate various issues including hardware failures, performance degradation, security breaches, configuration issues, service outages, and capacity problems.
[0061] The fault processor 230 is further configured to identify an alarm, from the one or more alarms, raised during a planned event (PE). The PE indicates a time window for performing maintenance on a node in the network. The alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE. The operational task data comprises one of a Service Affecting Flag, PE identification (ID), impacted network element ID, and a unique key. The fault processor 230 is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile. The attributes of the alarm comprise one or more of Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, and outage information. The data present in the PE profile provides details of the PE and includes one or more of Service Affecting flag, change ID or PE ID, impacted network element ID, PE start time, PE end time, date when PE row was inserted in database, and a unique key. The fault processor 230 is further configured to send a request, corresponding to the alarm, to a TT processor 235 for creation or termination of a TT. The fault processor 230 sends the request for creation and termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or message stream.
[0062] The processor 205 further implements the TT processor 235 configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE. For creation of the TT, the TT processor 235 is configured to obtain the alarm from the persist storage, determine that a status of the alarm is active, and change a service affecting attribute of the alarm to SA and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
[0063] FIG. 4 illustrates a system operation architecture for Trouble Ticket (TT) management during a planned event, according to one or more embodiments of the present disclosure. A collector component 405 collects stream data from network elements, parses and transforms the stream data into alarms of standardized format, and pushes the alarms into an alarm stream 408. The alarms can be of two types: raise alarms and clear alarms. An FM master module 410 consumes the alarms from the alarm stream 408 and stores them into the distributed cache 225. Upon identifying that an alarm is already stored, the FM master module 410 updates the occurrence count and timestamp array of the alarm in the distributed cache 225.
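A non-limiting sketch of the FM master behaviour described above, with a plain dictionary standing in for the distributed cache 225 and the Alarm structure from the earlier sketch:

```python
# Sketch of the FM master module (410): store a new alarm in the cache, or
# update the occurrence count and timestamp array if it is already stored.
import time

cache = {}  # stand-in for the distributed cache (225), keyed by unique alarm ID

def consume_alarm(alarm: Alarm) -> None:
    entry = cache.get(alarm.alarm_id)
    if entry is None:
        alarm.raise_timestamps.append(time.time())
        cache[alarm.alarm_id] = {"alarm": alarm, "occurrence_count": 1, "tt_status": "NA"}
    else:
        entry["occurrence_count"] += 1
        entry["alarm"].raise_timestamps.append(time.time())
```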
[0064] The raise FM module 415 fetches the alarms from the distributed cache 225 based on their unique identifiers and performs various operations on the alarms, such as planned event processing, AI-based correlation to identify patterns or related events, and trouble ticketing to initiate incident management processes. The raise FM module 415 also updates metadata associated with the alarms, enriches the alarms with additional information or context, and inserts them into the database 220. The clear FM module 420 retrieves clear alarms corresponding to the unique alarm identifiers from the distributed cache 225 and checks the database 220 for presence of associated raise alarms. The clear FM module 420 deletes the raise alarms from an active section when the associated raise alarms are identified to be present and streams the clear alarms for retrying when the associated raise alarms are identified to be absent. After deleting the raise alarms from the active section, the clear FM module 420 adds clearance metadata to the alarms and stores them in an archived section of the database 220. A retry FM module 425 checks the database 220 for presence of the raise alarms corresponding to retry alarm data and deletes the raise alarms from the active section when identified to be present. If the raise alarms are not found, the retry FM module 425 increments the retry count and reproduces the data into the retry stream for subsequent retries.
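The clear and retry handling described above may be sketched as follows, with dictionaries standing in for the active and archived sections of the database 220 and a list for the retry stream; names are illustrative:

```python
# Sketch of the clear FM module (420) and retry FM module (425): a clear
# alarm deletes its matching raise alarm from the active section and archives
# it with clearance metadata; otherwise it is re-queued for a later retry.
import time

def handle_clear(alarm_id: str, active: dict, archived: dict, retry_stream: list,
                 retry_count: int = 0, max_retries: int = 3) -> None:
    raise_alarm = active.pop(alarm_id, None)
    if raise_alarm is not None:
        raise_alarm["cleared_at"] = time.time()  # clearance metadata
        archived[alarm_id] = raise_alarm
    elif retry_count < max_retries:
        retry_stream.append({"alarm_id": alarm_id, "retry_count": retry_count + 1})
```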
[0065] The enrichment engine 404 performs enrichment of the alarms. Enrichment means appending new attributes to the alarms, and may be of two types: physical enrichment and logical enrichment. Post enrichment of the alarms by the enrichment engine 404, the correlation engine 402 classifies the alarms into parent alarms or child alarms associated with the parent alarms. The correlation engine 402 classifies the alarms based on a policy configured for the network in which the node raising the alarms is present. The correlation engine 402 may identify association between the parent alarms and the child alarms based on a point of interaction (POI) relationship, or intra, inter, and cross domain relationships between the alarms.
[0066] The correlation engine 402 schedules a task with a specified time interval for handling the child alarms post identification of the parent alarms. The Trouble Ticket (TT) processor 235 generates a trouble ticket with identities of the parent alarms and identities of the child alarms associated with the parent alarms.
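As a rough illustration of this step, the following sketch schedules child-alarm handling after a configurable interval and builds a ticket payload carrying parent and child identities; the payload structure is an assumption, not the actual ticket format:

```python
# Sketch: schedule child-alarm handling at a configurable interval after the
# parent alarm is identified, then build a TT payload with parent and child IDs.
import threading
from typing import Callable, List

def schedule_child_handling(parent_id: str, interval_s: float,
                            handler: Callable[[str], None]) -> threading.Timer:
    timer = threading.Timer(interval_s, handler, args=(parent_id,))
    timer.start()
    return timer

def build_tt_payload(parent_id: str, child_ids: List[str]) -> dict:
    return {"parent_alarm": parent_id, "child_alarms": child_ids}
```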
[0067] FIG. 5 illustrates a flow chart of a method 500 of creation of a Trouble Ticket (TT) during a planned event, according to one or more embodiments of the present disclosure. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIGS. 1 to 4 and should nowhere be construed as limiting the scope of the present disclosure.
[0068] At step 505, the method 500 includes the step of receiving an alarm corresponding to a network event, by a fault processor (230). The alarm may indicate a hardware, software, or network issue associated with a node or a node instance present in a network.
[0069] At step 510, the method 500 includes the step of determining, by the fault processor (230), whether the alarm is received during a planned event. Receipt of the alarm during the planned event is determined on the basis of operational task data, start date and time, and end date and time of the planned event. The operational task data includes node identities (IDs) used for identification of nodes for which the alarm is raised and nodes which were expected to go down during the planned event. When it is determined that the alarm is received during the planned event, the fault processor (230) updates attributes of the alarm using data present in a planned event profile. The attributes of the alarm that are updated may include alarm Change ID (CID), Non-Service Affecting (NSA) CID, Service Affecting (SA) CID, planned maintenance, outage, and additional text, at step 515.
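A minimal sketch of steps 510 and 515, assuming a PE profile with the fields listed earlier (operational task node IDs, start/end times, change ID); all names are hypothetical:

```python
# Sketch of steps 510-515: an alarm is treated as raised during the PE when
# its node appears in the operational task data and its raise time falls
# inside the PE window; matching alarms get their attributes updated.
def raised_during_pe(alarm: Alarm, pe: dict) -> bool:
    raised_at = alarm.raise_timestamps[-1]
    return (pe["pe_start"] <= raised_at <= pe["pe_end"]
            and alarm.node_id in pe["operational_task_nodes"])

def apply_pe_profile(alarm: Alarm, pe: dict) -> None:
    alarm.planned_maintenance = pe["change_id"]  # e.g. the PE's change ID
    alarm.outage = True                          # flag a known, ongoing PE
```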
[0070] At step 520, the method 500 includes the step of determining, by the fault processor (230), whether the alarm type is SA or if the alarm is automatic TT eligible. An SA alarm refers to an alarm that directly impacts delivery or quality of a service provided to users of a network. These alarms are critical because they indicate issues that lead to service disruptions or degradation. Automatic TT eligible refers to criteria where a trouble ticket is automatically raised without checking or fulfilment of any condition. When it is determined that the alarm is neither SA nor auto TT eligible, the fault processor (230) does not send the alarm for TT creation, at step 525.
[0071] At step 530, the method 500 includes the step of sending a request, by the fault processor (230), to a TT processor (235) for TT creation when it is determined, at step 510, that the alarm is raised during the planned event or it is determined, at step 520, that the alarm is SA or auto TT eligible. The request for TT creation is sent using Hypertext Transfer Protocol (HTTP) or as a message stream. Sending of such requests to the TT Processor (235) can be configured by a user. The user may define a manner or protocol using which the request for TT creation can be sent.
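One way such an HTTP request could look, sketched with the Python requests library; the endpoint URL and JSON field names are assumptions for illustration only:

```python
# Sketch of step 530: send a TT creation request for the alarm over HTTP.
import requests

TT_PROCESSOR_URL = "http://tt-processor.example/tt"  # hypothetical endpoint

def request_tt_creation(alarm: Alarm) -> bool:
    resp = requests.post(TT_PROCESSOR_URL, json={
        "alarm_id": alarm.alarm_id,
        "node_id": alarm.node_id,
        "severity": alarm.severity,
        "service_affecting": alarm.alarm_type,
    }, timeout=10)
    return resp.ok  # steps 565 onward check for an OK response
```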
[0072] At step 535, the method 500 includes the step of checking presence of the alarm in persist storage, checking whether auto TT is enabled, and checking whether the TT number is not applicable or failed by checking the database. When any of these conditions is satisfied, the method proceeds to step 540.
[0073] At step 540, the method 500 includes the step of updating TT status to transit. Further, a timer task is initiated.
[0074] At step 545, the method 500 includes the step of checking presence of alarm data in the persist storage based on incident ID, and checking status of the alarm and the TT status. When the alarm data is found in the persist storage, the method proceeds to step 550.
[0075] At step 550, the method 500 includes the step of determining whether severity of the alarm is critical or major. When the severity is determined to be critical or major, service affecting value is changed from NSA to SA, at step 555. Alternatively, when the severity is not determined to be critical or major, the method proceeds to step 560.
[0076] At step 560, the method 500 includes the step of updating TT status to initiated and sending a request for TT creation to the TT processor (235).
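Steps 545 to 560 may be summarized in a short sketch under the same illustrative data model used in the earlier examples:

```python
# Sketch of steps 550-560: escalate NSA to SA for critical/major alarms,
# mark the TT status as initiated, and send the creation request.
def initiate_tt(entry: dict) -> None:
    alarm = entry["alarm"]
    if alarm.severity in ("critical", "major"):
        alarm.alarm_type = "SA"        # step 555: NSA -> SA
    entry["tt_status"] = "initiated"   # step 560
    request_tt_creation(alarm)         # from the earlier sketch
```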
[0077] At step 565, the method 500 includes the step of determining whether the response of the TT processor (235) is OK and whether the status is successful. If not, the TT status is changed to failed in the persist storage, at step 570. If yes, the method proceeds to step 575.
[0078] At step 575, the method 500 includes the step of sending the alarm for final TT creation and updating the alarm in the persist storage. Successively, at step 580, it is determined whether step 575 was performed successfully. If not, TT creation is retried, at step 585. If yes, the TT status and TT number are updated, at step 590.
[0079] During the method 500, for PE data ingestion from the TT processor (235), an active instance of the application will run a scheduler (with a configurable time) for pulling Planned Event (PE) profile data from a persist storage/database. For example, as shown in the following table:
[Table of example PE profile data — not reproduced in this text extraction.]
[0080] In accordance with an embodiment, the invention further includes performing an audit at outage end and at application bootstrap. When the outage ends, the timer triggers an audit of the alarms raised during the planned event. A search is performed for the operational task values of the PE data in the Network Element Identification (NE ID) / Change Identification (CI Name) of alarms where a TT has not been created and the alarms are still active. The search is only performed for the planned event duration, i.e., the raise timestamp is in between the planned event outage start and the planned event outage end. During the search, if active alarms are found, an auto TT is raised. Also, all the records are moved from the active planned event table to a history planned event table. On application bootstrap, the timer is associated with the outage end date whenever there is any record in the active table, and subsequently the same steps are performed.
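A hedged sketch of this audit, reusing the illustrative structures from the earlier sketches; the active and history tables are plain Python containers here:

```python
# Sketch of the audit at PE outage end: raise an auto TT for alarms that are
# still active inside the PE window and have no TT, then move the PE record
# from the active table to the history table.
def audit_on_outage_end(pe: dict, active_alarms: dict,
                        active_pe_table: list, history_pe_table: list) -> None:
    for entry in active_alarms.values():
        alarm = entry["alarm"]
        raised_at = alarm.raise_timestamps[-1]
        if (pe["pe_start"] <= raised_at <= pe["pe_end"]
                and alarm.node_id in pe["operational_task_nodes"]
                and entry.get("tt_status") in ("NA", "failed")):
            request_tt_creation(alarm)  # auto TT for still-active alarms
    active_pe_table.remove(pe)          # active PE table -> history PE table
    history_pe_table.append(pe)
```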
[0081] The application will fetch all the records from the persist storage whose planned event creation time is greater than the scheduler's last run-time and whose SMID starts with 'C'. For each record, the value of the operational task is checked for Single CI / Bulk CI. For the Single CI case, the data is inserted as-is into an NMS. For the Bulk CI case, the Bulk CI file is fetched from the TT processor (235) using a Representational State Transfer (REST) web service. Subsequently, all attachments are fetched using a GET URL, the CIs related to the 'Bulk CI' file are collected from the output, and the attachment 'Bulkci.csv' is downloaded by including the respective CIs in the URL. A timer task is associated which will run on the planned event outage end date.
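The Bulk CI retrieval could be sketched as below; the REST URL shapes are assumptions, since the description only names a GET URL and the 'Bulkci.csv' attachment:

```python
# Sketch of the Bulk CI case: list attachments via REST, collect the CIs
# related to the Bulk CI file, and download 'Bulkci.csv' with those CIs.
import requests

def fetch_bulk_ci(base_url: str, change_id: str) -> bytes:
    attachments = requests.get(
        f"{base_url}/changes/{change_id}/attachments", timeout=10).json()
    cis = [a["ci"] for a in attachments if a.get("name") == "Bulkci.csv"]
    resp = requests.get(f"{base_url}/attachments/Bulkci.csv",
                        params={"cis": ",".join(cis)}, timeout=10)
    return resp.content
```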
[0082] FIG. 6 illustrates a flow chart of a method 600 of Trouble Ticket (TT) termination for alarm clearance, according to one or more embodiments of the present disclosure. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIGS. 1 to 4 and should nowhere be construed as limiting the scope of the present disclosure.
[0083] At step 605, the method 600 includes the step of receiving an alarm by a fault processor (230) for clearance.
[0084] At step 610, the method 600 includes the step of determining, by the fault processor (230), whether in the alarm data the TT number is not NA, whether the alarm is active with the same TT number, and whether the TT status is true or equal to initiated. When such conditions are identified to be true, the alarm is sent to the TT processor (235) for TT termination on the basis of configuration, using HTTP or a message stream.
[0085] At step 615, the method 600 includes the step of receiving the alarm by the TT processor (235) for TT termination.
[0086] At step 620, the method 600 includes the step of fetching alarm data from a persist storage and determining whether the TT status is initiated. If the TT status is determined to be not initiated, TT creation is initiated and a predetermined waiting period is applied before TT termination, at step 625. Alternatively, if the TT status is determined to be initiated, a TT termination request is sent to the TT processor (235), at step 630.
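Steps 620 to 630 may be sketched under the same illustrative model; the waiting period and the termination endpoint are assumptions:

```python
# Sketch of steps 620-630: if the TT is not yet initiated, trigger creation
# and retry termination after a predetermined waiting period; otherwise send
# the termination request to the TT processor.
import threading
import requests

def terminate_tt(entry: dict, wait_s: float = 60.0) -> None:
    if entry.get("tt_status") != "initiated":
        request_tt_creation(entry["alarm"])  # step 625: create first
        threading.Timer(wait_s, terminate_tt, args=(entry,)).start()
        return
    resp = requests.delete(                  # step 630: hypothetical endpoint
        f"http://tt-processor.example/tt/{entry.get('tt_number')}", timeout=10)
    if resp.ok:
        entry["tt_status"] = "resolved"      # step 645
    # otherwise the alarm is sent for a termination retry (step 640)
```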
[0087] At step 635, the method 600 includes the step of determining whether the response of the TT processor (235) is OK and whether the status is successful. If these conditions are not found to be true, the alarm is sent for retrying of TT termination, at step 640. Alternatively, if these conditions are found to be true, the alarm is updated to terminated and the TT status is closed as resolved in the persist storage, at step 645.
[0088] At step 650, the method 600 includes the step of updating the TT status and other parameters in the persist storage, based on different operations including CREATE, TERMINATE, ACKNOWLEDGE FROM UI, UNACKNOWLEDGE FROM UI, etc.
[0089] At step 655, the method 600 includes the step of fetching TT document from the persist storage and sending a response to the fault processor (230).
[0090] FIG. 7 illustrates a snapshot of a User Interface (UI) for ticket management, according to one or more embodiments of the present disclosure. The UI displays PE active/history data, service type, SMID with PE start and end time, category of PE, status, assignment group, and person responsible for the same. Filtering of data in a column wise manner is also allowed, for example filtering based on service name or category name.
[0091] FIG. 8A illustrates a pie chart representation of PE category data and FIG. 8B illustrates a pie chart representation of PE service data. Specifically, a single entry of a dummy category change value, a single entry of network standard change, and eleven entries of network NORMAL change can be seen in the pie chart shown in FIG. 8A, corresponding to the entries represented in FIG. 7. Further, a single entry of a dummy category network value, four entries of MPLS network, and eight entries of LTE network can be seen in the pie chart shown in FIG. 8B, corresponding to the entries represented in FIG. 7.
[0092] The present invention further discloses a network equipment comprising one or more processors coupled with a memory. The memory stores instructions which, when executed by the one or more processors, cause the network equipment to transmit, to the system 125, one or more alarms related to operation of one or more nodes or services operational in a network. The system 125 creates or terminates a trouble ticket, corresponding to the alarm, within a predefined time period after end of a planned event.
[0093] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive one or more alarms related to operation of one or more nodes or services operational in a network. The processor 205 is further configured to identify an alarm from the one or more alarms raised during a planned event (PE). The alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE. The PE indicates a time window for performing maintenance on a node in the network. The processor 205 is further configured to update attributes of the alarm raised during the PE, using data present in a PE profile. The processor 205 is further configured to send a request, corresponding to the alarm, to a TT processor for creation or termination of a TT. The processor 205 is further configured to address the request for creation or termination of the TT, within a predefined time period after end of the PE.
[0094] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS.1-8) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0095] The above described technique (of ticket management) of the present disclosure provides multiple advantages, including efficient management of alarms raised during a known outage, i.e., a planned event. The invention reduces manual effort by helping identify critical and major alarms which need attention even when raised during planned events.
[0096] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
[0097] Server: A server may include or comprise, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides content.
[0098] Network: A network may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0099] UE/Wireless Device: A wireless device or a user equipment (UE) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the UEs may communicate with the system via a set of executable instructions residing on any operating system. In an embodiment, the UEs may include, but are not limited to, any electrical, electronic, electro-mechanical equipment or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as camera, audio aid, a microphone, a keyboard, input devices for receiving input from a user such as touch pad, touch enabled screen, electronic pen and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used.
[00100] System (for example, computing system): A system may include one or more processors coupled with a memory, wherein the memory may store instructions which when executed by the one or more processors may cause the system to perform offloading/onloading of broadcasting or multicasting content in networks. An exemplary representation of the system for such purpose is provided in accordance with embodiments of the present disclosure. In an embodiment, the system may include one or more processor(s). The one or more processor(s) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) may be configured to fetch and execute computer-readable instructions stored in a memory of the system. The memory may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-only Memory (EEPROM), flash memory, and the like. In an embodiment, the system may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and a database.
The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s). In such examples, the system may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) may be implemented by electronic circuitry. In an aspect, the database may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor or the processing engines.
[00101] Computer System: A computer system may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. The communication port(s) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects. The main memory may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory may be any static storage device(s) including, but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor. The mass storage device may be any current or future mass storage solution, which may be used to store information and/or instructions. The bus communicatively couples the processor with the other memory, storage, and communication blocks. The bus can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), universal serial bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor to the computer system. Optionally, operator and administrative interfaces, e.g., a display, keyboard, and a cursor control device, may also be coupled to the bus to support direct operator interaction with the computer system. Other operator and administrative interfaces may be provided through network connections connected through the communication port(s). In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
REFERENCE NUMERALS
[00102] Node - 102;
[00103] Server - 105;
[00104] Communication network - 110;
[00105] System - 125;
[00106] One or more processors - 205;
[00107] Memory - 210;
[00108] Input/output interface unit - 215;
[00109] Database - 220;
[00110] Distributed cache - 225;
[00111] Fault processor - 230;
[00112] Trouble Ticket (TT) processor - 235;
[00113] Primary processor of first node - 305;
[00114] Memory unit of first node - 310;
[00115] Kernel of the first node - 315;
[00116] Correlation engine - 402;
[00117] Enrichment engine - 404;
[00118] Collector component - 405;
[00119] Alarm stream - 408;
[00120] FM master - 410;
[00121] Raise FM module - 415;
[00122] Clear FM module - 420; and
[00123] Retry FM module - 425.
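For orientation, the sketch below wires the numbered fault-management components (402-425) into one plausible processing order; the disclosure only enumerates the parts, so the flow, function names, and alarm fields here are assumptions.

def collect(raw):      # 405: collector component ingests a raw alarm
    return dict(raw)

def enrich(alarm):     # 404: enrichment engine adds context to the alarm
    alarm.setdefault("node_id", "unknown")
    return alarm

def correlate(alarm):  # 402: correlation engine tags related alarms
    alarm.setdefault("correlation_id", alarm["node_id"])
    return alarm

def raise_fm(alarm):   # 415: raise FM module opens fault handling
    print("raise", alarm["id"])

def clear_fm(alarm):   # 420: clear FM module closes fault handling
    print("clear", alarm["id"])

def retry_fm(alarm):   # 425: retry FM module re-attempts failed handling
    print("retry", alarm["id"])

def fm_master(alarm):  # 410: FM master routes each alarm by its state
    {"raised": raise_fm, "cleared": clear_fm}.get(alarm.get("state"), retry_fm)(alarm)

def run(alarm_stream):  # 408: alarm stream feeding the pipeline
    for raw in alarm_stream:
        fm_master(correlate(enrich(collect(raw))))

run([{"id": "A1", "state": "raised"}, {"id": "A2", "state": "cleared"}])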

Claims

We Claim:
1. A method of ticket management of planned events in a network, the method comprising the steps of:
receiving, by a fault processor (230), one or more alarms related to operation of one or more nodes or services operational in a network;
identifying, by the fault processor (230), an alarm, from the one or more alarms, raised during a planned event (PE), wherein the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE;
updating, by the fault processor (230), attributes of the alarm raised during the PE, using data present in a PE profile;
sending, by the fault processor (230), a request to a trouble ticket (TT) processor (235) for creation or termination of a TT corresponding to the alarm; and
addressing, by the TT processor (235), the request for creation or termination of the TT within a predefined time period after the end of the PE.
2. The method as claimed in claim 1, wherein the data present in the PE profile provides details of the PE and includes one or more of a Service Affecting flag, a change ID or PE ID, an impacted network element ID, a PE start time, a PE end time, a date when a PE row was inserted in a database, and a unique key.
3. The method as claimed in claim 1, wherein the operational task data includes identities of nodes expected to become inactive during the PE.
4. The method as claimed in claim 1, wherein the PE indicates a time window for performing maintenance on a node in the network.
5. The method as claimed in claim 1, further comprising creating, by the TT processor (235), the TT based on a user input provided through a user interface (UI).
6. The method as claimed in claim 1, wherein the request for creation or termination of the TT is sent to the TT processor (235) using a Hypertext Transfer Protocol (HTTP) stream or a message stream.
7. The method as claimed in claim 1, wherein the TT processor (235) stores and retrieves the alarm from a persistent storage for creation or termination of the TT.
8. The method as claimed in claim 7, wherein for creation of the TT, the TT processor (235):
obtains the alarm from the persistent storage;
determines that a status of the alarm is active; and
changes a service affecting attribute of the alarm to Service Affecting (SA) and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
9. The method as claimed in claim 1, wherein the attributes of the alarm comprise one or more of a Change ID (CID), a Non-Service Affecting (NSA) CID, a Service Affecting (SA) CID, planned maintenance, and outage information.
10. A system for ticket management of planned events in a network, the system comprising:
a fault processor (230) configured to:
receive one or more alarms related to operation of one or more nodes or services operational in a network;
identify an alarm, from the one or more alarms, raised during a planned event (PE), wherein the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE;
update attributes of the alarm raised during the PE, using data present in a PE profile; and
send a request, corresponding to the alarm, to a trouble ticket (TT) processor (235) for creation or termination of a TT; and
the TT processor (235) configured to address the request for creation or termination of the TT within a predefined time period after the end of the PE.
11. The system as claimed in claim 10, wherein the data present in the PE profile provides details of the PE and includes one or more of a Service Affecting flag, a change ID or PE ID, an impacted network element ID, a PE start time, a PE end time, a date when a PE row was inserted in a database, and a unique key.
12. The system as claimed in claim 10, wherein the PE indicates a time window for performing maintenance on a node in the network.
13. The system as claimed in claim 10, wherein the operational task data comprises one of a Service Affecting flag, a PE identification (ID), an impacted network element ID, and a unique key.
14. The system as claimed in claim 10, wherein the fault processor (230) sends the request for creation or termination of the TT using a Hypertext Transfer Protocol (HTTP) stream or a message stream.
15. The system as claimed in claim 10, wherein the attributes of the alarm comprise one or more of a Change ID (CID), a Non-Service Affecting (NSA) CID, a Service Affecting (SA) CID, planned maintenance, and outage information.
16. The system as claimed in claim 10, wherein for creation of the TT, the TT processor (235) is configured to:
obtain the alarm from a persistent storage;
determine that a status of the alarm is active; and
change a service affecting attribute of the alarm to SA and a TT status of the alarm to initiated, when severity of the alarm is critical or major.
17. A network equipment (102) comprising:
one or more processors coupled with a memory, wherein the memory stores instructions which, when executed by the one or more processors, cause the network equipment (102) to:
transmit one or more alarms related to operation of one or more nodes or services operational in a network,
wherein the one or more processors are configured to perform the steps as claimed in claim 1.
18. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor (205), cause the processor (205) to:
receive one or more alarms related to operation of one or more nodes or services operational in a network;
identify an alarm, from the one or more alarms, raised during a planned event (PE), wherein the alarm is identified based on one or more of operational task data, a start schedule, and an end schedule of the PE, and wherein the PE indicates a time window for performing maintenance on a node in the network;
update attributes of the alarm raised during the PE, using data present in a PE profile;
send a request, corresponding to the alarm, to a trouble ticket (TT) processor (235) for creation or termination of a TT; and
address the request for creation or termination of the TT within a predefined time period after the end of the PE.
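Purely as a worked illustration of claims 1, 2, and 8 (not part of the claims), the sketch below models the PE profile fields of claim 2, the PE-window identification and attribute update of claims 1 and 9, and the severity-gated ticket creation of claim 8; all field names, the matching rule, and the 30-minute grace period are assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PEProfile:
    """PE profile fields per claim 2 (names are illustrative)."""
    pe_id: str                # change ID or PE ID
    impacted_ne_id: str       # impacted network element ID
    service_affecting: bool   # Service Affecting flag
    start_time: datetime      # PE start time
    end_time: datetime        # PE end time
    inserted_on: datetime     # date the PE row was inserted in the database
    unique_key: str

def in_planned_event(alarm: dict, pe: PEProfile) -> bool:
    """Claim 1: identify an alarm raised during the PE window on an
    impacted node (this exact matching rule is an assumption)."""
    return (alarm["node_id"] == pe.impacted_ne_id
            and pe.start_time <= alarm["raised_at"] <= pe.end_time)

def update_attributes(alarm: dict, pe: PEProfile) -> None:
    """Claims 1 and 9: update alarm attributes from the PE profile."""
    alarm["change_id"] = pe.pe_id
    alarm["planned_maintenance"] = True
    alarm["sa_cid" if pe.service_affecting else "nsa_cid"] = pe.pe_id

def create_tt(alarm: dict) -> None:
    """Claim 8: create a TT only for an active alarm whose severity
    is critical or major."""
    if alarm["status"] == "active" and alarm["severity"] in ("critical", "major"):
        alarm["service_affecting"] = "SA"
        alarm["tt_status"] = "initiated"

def address_after_pe(alarm: dict, pe: PEProfile, now: datetime,
                     grace: timedelta = timedelta(minutes=30)) -> None:
    """Claim 1: address the TT request within a predefined period after
    the PE ends (the 30-minute value is an assumed example)."""
    if pe.end_time <= now <= pe.end_time + grace:
        create_tt(alarm)

pe = PEProfile("CHG-001", "NE-42", True,
               datetime(2024, 7, 3, 1), datetime(2024, 7, 3, 3),
               datetime(2024, 7, 1), "key-1")
alarm = {"node_id": "NE-42", "raised_at": datetime(2024, 7, 3, 2),
         "status": "active", "severity": "critical"}
if in_planned_event(alarm, pe):
    update_attributes(alarm, pe)
    address_after_pe(alarm, pe, now=datetime(2024, 7, 3, 3, 15))
assert alarm["tt_status"] == "initiated"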
PCT/IN2024/051039 2023-07-09 2024-07-03 System and method for ticket management of planned events in a network Pending WO2025013005A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321046103 2023-07-09
IN202321046103 2023-07-09

Publications (1)

Publication Number Publication Date
WO2025013005A1

Family

ID=94215070

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/051039 Pending WO2025013005A1 (en) 2023-07-09 2024-07-03 System and method for ticket management of planned events in a network

Country Status (1)

Country Link
WO (1) WO2025013005A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11252052B1 (en) * 2020-11-13 2022-02-15 Accenture Global Solutions Limited Intelligent node failure prediction and ticket triage solution
US20230129123A1 (en) * 2021-10-26 2023-04-27 Dell Products L.P. Monitoring and Management System for Automatically Generating an Issue Prediction for a Trouble Ticket



Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24839118

Country of ref document: EP

Kind code of ref document: A1