
WO2025022439A1 - Method and system for generation of interconnected dashboards - Google Patents

Method and system for generation of interconnected dashboards

Info

Publication number
WO2025022439A1
Authority
WO
WIPO (PCT)
Prior art keywords
dashboard
request
module
data
ipm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051344
Other languages
French (fr)
Inventor
Ankit Murarka
Aayush Bhatnagar
Jugal Kishore
Gaurav Kumar
Kishan Sahu
Rahul Kumar
Sunil Meena
Gourav Gurbani
Sanjana Chaudhary
Chandra GANVEER
Supriya Kaushik DE
Debashish Kumar
Mehul Tilala
Dharmendra Kumar Vishwakarma
Yogesh Kumar
Niharika PATNAM
Harshita GARG
Avinash Kushwaha
Sajal Soni
Srinath KALKIVAYI
Vitap Pandey
Manasvi Rajani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025022439A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/50Business processes related to the communications industry

Definitions

  • Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to generation of one or more interconnected dashboards.
  • Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network.
  • Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/ grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
  • the telecommunication monitoring services face several challenges when it comes to incorporating results into other computations for KPIs or counter data.
  • network operators can gain a more comprehensive view of their network performance, but such analytics capabilities, as they currently exist, are inefficient and therefore not suitable for such instances.
  • An aspect of the present disclosure may relate to a method for generation of one or more interconnected dashboards.
  • the method comprises receiving, at a user interface module, a first request for generation of a first dashboard.
  • the method further comprises receiving, at the user interface module, a second request to treat the first dashboard as a waterfall dashboard.
  • the method further comprises receiving, at an integrated performance management (IPM) module, the second request from the user interface module.
  • the method further comprises saving, by a storage unit, at the IPM module, an associated information of the second request.
  • the method further comprises forwarding, by the IPM module to a computation module, the associated information of the second request for generating a report.
  • the method further comprises forwarding, by the computation module to the IPM module, the report for storing in the storage unit.
  • the method further comprises receiving, at the user interface module, a third request for generation of a second dashboard.
  • the method further comprises receiving, at the user interface module, a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report.
  • the method further comprises interconnecting, by the IPM module, the first dashboard and the second dashboard.
  • the method further comprises computing, by the computation module, a pre-computed data. It is to be noted that the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. Further, the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard.
  • the method further comprises sending, at the IPM module, an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
  • the method uses the first dashboard which is treated as the waterfall dashboard, as the supporting dashboard for an existing dashboard.
  • the method comprises receiving, at the user interface module, the one or more key performance indicators (KPIs) and one or more aggregations in the third request for the first dashboard; and receiving, at the user interface module, the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
  • the method further comprises setting, by the computation module, a time range for the associated information of the second request.
  • the pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module.
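  • For illustration only, the sequence of requests recited above can be mirrored in a minimal, in-memory Python sketch. All class names, fields, and the toy report format below are assumptions and not part of the disclosure; the sketch merely follows the order of operations (first to fourth request, report generation, interconnection, precomputation).

```python
# Illustrative, in-memory sketch of the claimed request flow; names are assumptions.
from dataclasses import dataclass, field


@dataclass
class Dashboard:
    name: str
    kpis: list
    is_waterfall: bool = False
    supporting: list = field(default_factory=list)  # interconnected (supporting) dashboards


class IPMModule:
    def __init__(self):
        self.storage = {}                 # stands in for the storage unit

    def save_request(self, key, info):
        self.storage[key] = info          # save associated information of a request

    def store_report(self, key, report):
        self.storage[key] = report        # save the report forwarded by computation

    def interconnect(self, first: Dashboard, second: Dashboard):
        second.supporting.append(first)   # first dashboard now supports the second


class ComputationModule:
    def generate_report(self, info):
        # placeholder computation over the associated information
        return {"kpi": info["kpi"], "values": [42]}


# First request: create the first dashboard
first = Dashboard("Network Traffic Dashboard", ["throughput"])
# Second request: treat it as a waterfall dashboard
first.is_waterfall = True
ipm, comp = IPMModule(), ComputationModule()
ipm.save_request("second_request", {"kpi": "throughput", "range_days": 90})
report = comp.generate_report(ipm.storage["second_request"])
ipm.store_report("first_dashboard_report", report)
# Third request: create the second dashboard
second = Dashboard("Call Quality Dashboard", ["success_call_ratio"])
# Fourth request: add the first dashboard as supporting, via the stored report
ipm.interconnect(first, second)
print(second.supporting[0].name, second.supporting[0].is_waterfall)
```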
  • the system comprises a user interface module which is configured to receive, a first request for generation of a first dashboard.
  • the user interface module is further configured to receive, a second request to treat the first dashboard as a waterfall dashboard.
  • the system further comprises an integrated performance management (IPM) module connected with at least the user interface module.
  • the IPM module is configured to receive, the second request from the user interface module.
  • the IPM module is further configured to save, in a storage unit, an associated information of the second request.
  • the IPM module is further configured to forward, to a computation module, the associated information of the second request for generating a report.
  • the computation module is connected at least to the IPM module, and the computation module is configured to forward, the report to the IPM module, for storing at the storage unit.
  • the user interface module is further configured to receive, a third request, for generation of a second dashboard.
  • the user interface module is further configured to receive, a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report.
  • the IPM module is further configured to interconnect the first dashboard with the second dashboard.
  • the computation module is further configured to compute a precomputed data.
  • the pre-computed data comprises one or more values for one or more KPIs based on one or more operations.
  • the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
  • a user equipment for generation of one or more interconnected dashboards.
  • the UE comprising: a processor configured to: transmit a first request for generation of a first dashboard; transmit a second request to treat the first dashboard as a waterfall dashboard; transmit a third request for generation of a second dashboard; transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein for generation of the one or more interconnected dashboards, the process comprises: receiving, at an integrated performance management (IPM) module, the second request from the user interface module; saving, by a storage unit, an associated information of the second request; forwarding, by the IPM module to a computation module, the associated information of the second request for generating a report; forwarding, by the computation module to the IPM module, the report for storing in the storage unit; interconnecting, by the IPM module, the first dashboard and the second dashboard; and computing, by the computation module, a pre-computed data, the pre-computed data comprising one or more values for one or more KPIs based on one or more operations, wherein the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard.
  • Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module to receive: a first request for generation of a first dashboard; a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module connected with at least the user interface module, the IPM module to: receive, the second request from the user interface module; save, in a storage unit, an associated information of the second request; forward, to a computation module, the associated information of the second request for generating a report; the computation module connected at least to the IPM module, the computation module to: forward, the report to the IPM module, for storing at the storage unit; the user interface module to further receive: a third request, for generation of a second dashboard; a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; the IPM module to further: interconnect the first dashboard with the second dashboard; and the computation module to further: compute a pre-computed data, the pre-computed data comprising one or more values for one or more KPIs based on one or more operations, wherein the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard.
  • FIG. 1 illustrates an exemplary block diagram of an integrated performance management system, in accordance with the exemplary embodiments of the present disclosure.
  • FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • FIG. 3 illustrates an exemplary block diagram of a system for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
  • FIG. 4 illustrates a method flow diagram for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
  • FIG. 5 illustrates an exemplary system architecture for implementing interlinked dashboard, in accordance with the exemplary embodiments of the present disclosure.
  • FIG. 6 illustrates an exemplary sequence flow diagram illustrating a process for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
  • any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
  • to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
  • a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
  • a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
  • a user equipment may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure.
  • the user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure.
  • the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
  • storage unit or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
  • a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
  • the storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
  • interface refers to a shared boundary across which two or more separate components of a system exchange information or data.
  • the interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
  • All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
  • the user interface module may include an in-built transceiver unit that has at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/ components within the system and/or connected with the system.
  • the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for generating one or more interconnected dashboards.
  • FIG. 1 illustrates an exemplary block diagram of an integrated performance management system [100], in accordance with the exemplary embodiments of the present disclosure.
  • the network performance management system [100] comprises various sub-systems such as: Integrated performance management module [100a], normalization layer [100b], computation layer (CL) [100d], anomaly detection layer [100o], streaming engine [100l], load balancer (LB) [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system, mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n].
  • the various components may include:
  • Integrated performance management module [100a] comprises a 5G performance engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100w].
  • 5G Performance Engine [100v]: The 5G Performance Engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network.
  • the gathered data includes metrics such as connection speed, latency, data transfer rates, and many others.
  • This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance.
  • the processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis.
  • the 5G Performance engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
  • 5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance.
  • the processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
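  • As an illustration of the kind of derivation the KPI engine [100w] performs, the Python sketch below computes throughput, packet loss rate, and latency KPIs from a set of hypothetical performance counters. The counter names, sample values, and formulas are assumptions chosen for clarity, not the engine's actual schema.

```python
# Minimal sketch: deriving KPIs from raw performance counters (illustrative only).
counters = {
    "dl_bytes": 4_200_000_000,          # downlink bytes collected in the interval
    "interval_seconds": 900,            # 15-minute collection interval
    "packets_sent": 1_000_000,
    "packets_lost": 1_200,
    "rtt_ms_samples": [18.0, 22.5, 19.3],
}

kpis = {
    # throughput in Mbit/s over the collection interval
    "throughput_mbps": counters["dl_bytes"] * 8 / counters["interval_seconds"] / 1e6,
    # packet loss rate as a percentage
    "packet_loss_pct": 100 * counters["packets_lost"] / counters["packets_sent"],
    # mean latency from sampled round-trip times
    "latency_ms": sum(counters["rtt_ms_samples"]) / len(counters["rtt_ms_samples"]),
}
print(kpis)
```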
  • the Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance.
  • the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use.
  • the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
  • Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages.
  • the Normalizer Layer then inserts this normalized data into various databases.
  • One such database is the Caching Layer [100c]
  • the Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance.
  • the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine.
  • the Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
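  • A toy time-to-live cache, sketched below in Python, illustrates the role the Caching Layer [100c] plays as a high-speed intermediate store for frequently accessed data. The API, key format, and eviction policy are assumptions for illustration only; the actual layer may differ.

```python
# Toy TTL cache illustrating the Caching Layer's role (illustrative assumptions only).
import time


class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}                        # key -> (expiry_timestamp, value)

    def put(self, key, value):
        self._store[key] = (time.time() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.time() > expiry:                # expired entry: drop it and report a miss
            del self._store[key]
            return None
        return value


cache = TTLCache(ttl_seconds=300)
cache.put("kpi:throughput:cell-42", 812.5)      # hypothetical KPI key
print(cache.get("kpi:throughput:cell-42"))
```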
  • Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalizer Layer then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer, and also feeds it to the Message Broker. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data.
  • the Correlation Engine [100n] identifies and understands the relations and patterns within the data.
  • the Service Quality Manager assesses and ensures the quality of the services.
  • the Streaming Engine processes and analyses the real-time data feeds.
  • the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
  • Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
  • the Modeler should be adept at processing steps provided in the model and delivering the results to the system requested, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
  • Scheduling layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences.
  • a task might be an activity performing a service call, an API call to another microservice, the execution of an Elastic Search query, and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another microservice.
  • the versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance.
  • the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
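  • The Python sketch below shows one way such periodic task execution could look: a task runs at a fixed interval without manual intervention. The threading-based loop, the run count, and the task body are illustrative assumptions, not the actual implementation of the Scheduling Layer [100g].

```python
# Minimal interval scheduler sketch (illustrative assumptions only).
import threading
import time


def schedule(interval_seconds, task, runs=3):
    """Run `task` every `interval_seconds`, a fixed number of times."""
    def loop():
        for _ in range(runs):
            task()
            time.sleep(interval_seconds)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t


def export_busy_hour_report():
    # stand-in for a service call, a query execution, or a hand-off to storage
    print("busy-hour report exported at", time.strftime("%H:%M:%S"))


worker = schedule(interval_seconds=2, task=export_busy_hour_report)
worker.join()   # wait for the scheduled runs to finish in this sketch
```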
  • Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows.
  • users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data.
  • the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
  • Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework highlights both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices.
  • the framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks.
  • the Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
  • Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly.
  • This file system is designed to manage data files that are partitioned into numerous segments known as chunks.
  • the DFS [100j] effectively allows for the distribution of data across multiple nodes.
  • This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets.
  • DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
  • Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
  • the LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header and context-based dispatching allow for more intelligent, request-specific routing.
  • Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests.
  • Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
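  • The three dispatch strategies described above can be sketched as simple routing functions in Python. The backend names, header keys, and event types below are assumptions for illustration only, not the LB's [100k] actual configuration.

```python
# Sketch of round-robin, header-based, and context-based dispatch (illustrative only).
import itertools

backends = ["pm-svc-1", "pm-svc-2", "pm-svc-3"]   # hypothetical backend microservices
_rr = itertools.cycle(backends)


def round_robin(_request):
    return next(_rr)                               # rotate requests evenly across backends


def header_based(request):
    # route on an HTTP header, e.g. a hypothetical tenant identifier
    return "pm-svc-1" if request["headers"].get("X-Tenant") == "gold" else "pm-svc-2"


def context_based(request):
    # route on contextual information, e.g. the event type carried by the request
    if request.get("event") == "kpi.threshold.breach":
        return "pm-svc-3"
    return round_robin(request)


req = {"headers": {"X-Tenant": "gold"}, "event": "kpi.threshold.breach"}
print(round_robin(req), header_based(req), context_based(req))
```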
  • Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow.
  • Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time.
  • the streaming engine's [100l] goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
  • Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System.
  • the fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine.
  • the Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard.
  • These custom dashboards created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces.
  • the main output of the Reporting Engine [100m] is a detailed report generated in Excel format.
  • the Reporting Engine’s [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system.
  • the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
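  • As an illustration only, the snippet below turns a few rows of KPI data into an Excel file with pandas (an Excel writer backend such as openpyxl is assumed to be installed). The column names and values are assumptions; the actual Reporting Engine [100m] and its Notification Engine integration are not limited to this approach.

```python
# Sketch: compiling dashboard/API data into an Excel report (illustrative only).
import pandas as pd

rows = [
    {"kpi": "throughput_mbps", "hour": "19:00", "value": 812.5},
    {"kpi": "success_call_ratio", "hour": "19:00", "value": 0.987},
]
report = pd.DataFrame(rows)
# the generated workbook would be handed to a notification/e-mail step downstream
report.to_excel("daily_kpi_report.xlsx", index=False)
```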
  • FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also referred to herein as a computer system [200]) upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • the computing device [200] may also implement a method for generation of one or more interconnected dashboards, utilising the system.
  • the computing device [200] itself implements the method for generation of one or more interconnected dashboards using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
  • the computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of computing device [200] include, but are not limited only to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, highlighting their versatility in various technological applications.
  • the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a processor [204] coupled with bus [202] for processing information.
  • the processor [204] may be, for example, a general purpose microprocessor.
  • the computing device [200] may also include a main memory [206], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
  • the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
  • a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
  • the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user.
  • An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204].
  • a cursor controller [216] such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
  • the computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
  • the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
  • the computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222].
  • the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • the computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218].
  • a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], host [224] and the communication interface [218].
  • the received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
  • Referring to FIG. 3, an exemplary block diagram of a system [300] for generation of one or more interconnected dashboards is shown, in accordance with the exemplary implementations of the present disclosure.
  • the system [300] comprises at least one user interface module [302], at least one Integrated performance management (IPM) module [100a], at least one storage unit [305], and at least one computation module [306]. Also, all of the components/ units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown, however, the system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure.
  • the system [300] is configured for generation of the one or more interconnected dashboards, with the help of the interconnection between the components/units of the system [300].
  • the user interface module [302] of the system [300] is configured to receive, a first request for generation of a first dashboard.
  • the first request in the specification refers to the initial user action to generate a dashboard within the network performance management system.
  • the first request includes several key components such as the dashboard's name, the type of data it will display, and the specific key performance indicators (KPIs) and metrics the user wants to monitor.
  • a user might issue a first request to create a "Network Traffic Dashboard," specifying that it should display KPIs like total data throughput, packet loss rate, and latency over selected time intervals.
  • the request can include parameters for data aggregation methods (e.g., hourly averages, daily totals) and any initial filter criteria (e.g., specific geographic regions or network nodes).
  • the user interface module [302] is further configured to receive, a second request to treat the first dashboard as a waterfall dashboard.
  • the second request refers to the user action of designating the first dashboard as a waterfall dashboard.
  • the second request includes specific information such as the type of dashboard being created, the parameters that need to be precomputed, and the logic for how these parameters should be processed. For example, if the first dashboard tracks network throughput, the second request might specify that the busiest hour for each day should be calculated and stored. This precomputed data can then be used in the second dashboard to analyse success call ratios during those busy hours.
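  • The "busiest hour per day" precomputation mentioned in this example can be sketched as follows in Python. The record layout and sample values are assumptions for illustration; only the idea of storing one precomputed value per day for later reuse is taken from the description above.

```python
# Sketch: precompute the busiest hour of each day from hourly throughput samples.
from collections import defaultdict

# hourly throughput samples: (date, hour, throughput in Mbit/s) -- hypothetical data
samples = [
    ("2024-06-01", 9, 410.0), ("2024-06-01", 19, 880.0),
    ("2024-06-02", 12, 515.0), ("2024-06-02", 20, 902.5),
]

per_day = defaultdict(dict)
for day, hour, mbps in samples:
    # keep the highest observed throughput per (day, hour)
    per_day[day][hour] = max(mbps, per_day[day].get(hour, 0.0))

# precomputed data: the busiest hour of each day, stored for reuse by other dashboards
busiest_hour = {day: max(hours, key=hours.get) for day, hours in per_day.items()}
print(busiest_hour)   # e.g. {'2024-06-01': 19, '2024-06-02': 20}
```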
  • the integrated performance management (IPM) module [100a] is configured to receive, the second request from the user interface module [302]. The IPM module [100a] is further configured to send, an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
  • the first dashboard, treated as the waterfall dashboard may be used as the supporting dashboard for an existing dashboard.
  • the waterfall dashboard is a specialized type of dashboard within a system that is designated for precomputation. This means that the data and key performance indicators (KPIs) associated with a waterfall dashboard are calculated in advance, allowing this precomputed data to be used as a foundational input for other dashboards.
  • For example, if a dashboard tracks the busiest hour of network traffic each day (a Throughput KPI) over the past 90 days, this precomputed data can then be used to calculate and display related metrics, such as the Success Call Ratio KPI, within the same or another dashboard.
  • users can streamline complex sequential calculations, ensuring efficient and timely performance analysis without the need to recompute data repeatedly. This approach enhances the overall efficiency and effectiveness of network performance management by enabling interconnected and dependent dashboards to utilize precomputed outputs, thereby providing a more comprehensive and accurate understanding of network performance dynamics.
  • the Integrated performance management (IPM) module [100a] is further configured to save, in the storage unit [305], an associated information of the second request.
  • Associated information refers to the specific data and metadata required to process requests, generate reports, and perform computations within the dashboards in a network performance management system. This information can include details such as the time range for data analysis, the specific key performance indicators (KPIs) to be monitored, the type of computations to be performed on the KPIs, user preferences for data display, and configurations for integrating multiple dashboards.
  • For example, the associated information may include the selected KPI (network throughput), the specified time range (90 days), and any specific computation rules or operations (such as calculating the busiest hour of each day). Additionally, if the user wants to use this dashboard as a waterfall dashboard to support another dashboard that calculates the Success Call Ratio, the associated information will also include the necessary integration configurations and precomputed values needed to link the two dashboards.
  • the Integrated performance management (IPM) module [100a] is further configured to forward to the computation module [306], the associated information of the second request for generating a report.
  • the computation module [306] is configured to forward, the report to the IPM module [100a], for storing at the storage unit [305].
  • the report is created after the CL [100d] processes the necessary data retrieved from the Distributed File System (DFS) [100j].
  • the report includes detailed computations of key performance indicators (KPIs), aggregated data, and any other metrics specified by the user. For example, if the user has designated a Waterfall Dashboard to precompute the busiest hour of the day for network throughput over the past 90 days, the report will contain this computed data.
  • the user interface module [302] is further configured to receive, a third request, for generation of a second dashboard.
  • the third request is a step where the user interface module [302] receives a request for the generation of a second dashboard.
  • the third request includes specific details about the key performance indicators (KPIs) and aggregations that the user wants to incorporate into the second dashboard. Additionally, the third request may specify the operations to be applied to these selected KPIs and aggregations. For example, a user might request the generation of a second dashboard that includes a KPI for network latency and an aggregation of average latency over the last 30 days. The user might also specify an operation to filter this data to show only peak usage times.
  • the user interface module [302] is further configured to receive, a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report.
  • the fourth request involves adding the first dashboard, designated as a waterfall dashboard, as a supporting dashboard to the second dashboard.
  • the fourth request includes utilizing the stored report, referred to by a name or identifier, for interconnecting dashboards and enabling the sequential execution of precomputed data from one dashboard to influence another.
  • the third request which is for generating a second dashboard, involves setting up a new dashboard that could monitor different network parameters or KPIs. In this case, the user might want to incorporate insights from the first dashboard, such as the busiest hour of network usage, into the second dashboard's calculations.
  • the system links the first dashboard's precomputed data to the second dashboard, allowing for comprehensive analysis. For example, if the first dashboard calculates the busiest hour for network throughput, this data can then be used in the second dashboard to analyse Success Call Ratio during those busy hours.
  • the third request sets up the new monitoring parameters, while the fourth request integrates previously computed data to enhance the new dashboard's analytical capabilities.
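  • A minimal Python sketch of this interconnection is shown below: the second dashboard's Success Call Ratio is filtered by the busiest hours precomputed by the first (waterfall) dashboard. The data values and record layout are illustrative assumptions, not actual system output.

```python
# Sketch: filter the second dashboard's KPI by the waterfall dashboard's precomputed data.
busiest_hour = {"2024-06-01": 19, "2024-06-02": 20}   # precomputed by the first dashboard

# (date, hour, attempted calls, successful calls) -- hypothetical records
call_records = [
    ("2024-06-01", 19, 1200, 1174), ("2024-06-01", 10, 400, 399),
    ("2024-06-02", 20, 1350, 1301), ("2024-06-02", 11, 380, 377),
]

# keep only the records that fall in each day's precomputed busiest hour
busy = [r for r in call_records if busiest_hour.get(r[0]) == r[1]]

# Success Call Ratio over the filtered (busy-hour) records
success_call_ratio = sum(ok for *_, ok in busy) / sum(att for _, _, att, _ in busy)
print(round(success_call_ratio, 4))
```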
  • the IPM module [100a] is further configured to interconnect the first dashboard with the second dashboard.
  • the user interface module [302] is further configured to receive, selection of one or more key performance indicators (KPIs) and one or more aggregations in the first dashboard.
  • the user interface module [302] is further configured to receive, one or more operations to be applied on the selected one or more KPIs and the one or more aggregations.
  • the computation module [306] is further configured to compute a pre-computed data. It is important to note that the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. It is further noted that the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
  • the one or more modules, units, components may be software modules configured via hardware modules/processor, or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the computation module [306] is further configured to set a time range for the associated information of the second request.
  • the computation module [306] can define a specific period within which the data will be precomputed and analysed. For example, if a user specifies a time range of the last 30 days via the user interface module [302], the computation module [306] will use this time range to process and compute relevant KPIs for that period. This functionality ensures that the resulting analysis and insights are based on the user-defined timeframe, providing tailored and precise performance metrics for the specified duration.
  • the pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module [302].
  • the users can specify a particular time range through the user interface, such as the last 30 days or the previous quarter.
  • the system will then use this specified time range to calculate the pre-computed data, such as the busiest hour for network throughput during that period. For example, if a user specifies a time range of the last 30 days through the user interface, the system will calculate the busiest hour for network throughput during those 30 days.
  • This precomputed data can then be used to analyse other metrics, such as the Success Call Ratio, within the same 30-day period.
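  • The following sketch illustrates restricting the precomputation to a user-supplied time range such as the last 30 days. The reference date, field names, and sample values are assumptions chosen only to make the filtering step concrete.

```python
# Sketch: keep only the samples that fall inside a user-supplied time window.
from datetime import date


def within_range(day_str, days=30, today=date(2024, 7, 1)):
    """True if the ISO date string falls within the last `days` days of `today`."""
    return (today - date.fromisoformat(day_str)).days <= days


samples = [("2024-06-05", 19, 880.0), ("2024-03-01", 20, 990.0)]   # (date, hour, Mbit/s)
in_window = [s for s in samples if within_range(s[0])]
print(in_window)    # only the sample inside the 30-day window survives
```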
  • Referring to FIG. 4, an exemplary flow diagram of a method [400] for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure, is shown.
  • the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
  • the method [400] comprises receiving, at a user interface module [302], a first request for generation of a first dashboard.
  • the first request in the specification refers to the initial user action to generate a dashboard within the network performance management system.
  • the first request includes several key components such as the dashboard's name, the type of data it will display, and the specific key performance indicators (KPIs) and metrics the user wants to monitor.
  • a user might issue a first request to create a "Network Traffic Dashboard,” specifying that it should display KPIs like total data throughput, packet loss rate, and latency over selected time intervals.
  • the request can include parameters for data aggregation methods (e.g., hourly averages, daily totals) and any initial filter criteria (e.g., specific geographic regions or network nodes).
  • the method [400] comprises receiving, at the user interface module [302], a second request to treat the first dashboard as a waterfall dashboard.
  • the second request refers to the user action of designating the first dashboard as a waterfall dashboard.
  • the second request includes specific information such as the type of dashboard being created, the parameters that need to be precomputed, and the logic for how these parameters should be processed. For example, if the first dashboard tracks network throughput, the second request might specify that the busiest hour for each day should be calculated and stored. This precomputed data can then be used in the second dashboard to analyse success call ratios during those busy hours.
  • users ensure that the necessary computations are done in advance, streamlining subsequent analyses, and making the overall process more efficient and accurate.
  • the method [400] comprises receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302].
  • the method [400] further comprises sending, at the IPM module [100a], an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
  • the first dashboard treated as the waterfall dashboard may be used as the supporting dashboard for an existing dashboard.
  • the waterfall dashboard is a specialized type of dashboard within a system that is designated for precomputation.
  • For example, if a dashboard tracks the busiest hour of network traffic each day (a Throughput KPI) over the past 90 days, this precomputed data can then be used to calculate and display related metrics, such as the Success Call Ratio KPI, within the same or another dashboard.
  • users can streamline complex sequential calculations, ensuring efficient and timely performance analysis without the need to recompute data repeatedly. This approach enhances the overall efficiency and effectiveness of network performance management by enabling interconnected and dependent dashboards to utilize precomputed outputs, thereby providing a more comprehensive and accurate understanding of network performance dynamics.
  • the method [400] comprises saving, by a storage unit [305], at the IPM module [100a], an associated information of the second request.
  • Associated information refers to the specific data and metadata required to process requests, generate reports, and perform computations within the dashboards in a network performance management system. This information can include details such as the time range for data analysis, the specific key performance indicators (KPIs) to be monitored, the type of computations to be performed on the KPIs, user preferences for data display, and configurations for integrating multiple dashboards.
  • for example, the associated information will include the selected KPI (network throughput), the specified time range (90 days), and any specific computation rules or operations (such as calculating the busiest hour of each day). Additionally, if the user wants to use this dashboard as a waterfall dashboard to support another dashboard that calculates the Success Call Ratio, the associated information will also include the necessary integration configurations and precomputed values needed to link the two dashboards.
  • the method [400] comprises forwarding, by the IPM module [100a] to a computation module [306], the associated information of second request for generating a report.
  • the method [400] comprises forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305].
  • the report is created after the CL [100d] processes the necessary data retrieved from the Distributed File System (DFS) [100j].
  • the report includes detailed computations of key performance indicators (KPIs), aggregated data, and any other metrics specified by the user.
  • for example, if the first dashboard computes the busiest hour of network traffic for each day, the report will contain this computed data. Additionally, it might include calculations for success call ratios and other related KPIs over the same period.
  • the generated report is then sent to the Integrated Performance Management (IPM) module [100a], where it is saved and can be used for further analysis or to create interconnected dashboards.
  • the method comprises receiving, at the user interface module [302], a third request for generation of a second dashboard.
  • the third request is a step where the user interface module [302] receives a request for the generation of a second dashboard.
  • the third request includes specific details about the key performance indicators (KPIs) and aggregations that the user wants to incorporate into the second dashboard. Additionally, the third request may specify the operations to be applied to these selected KPIs and aggregations. For example, a user might request the generation of a second dashboard that includes a KPI for network latency and an aggregation of average latency over the last 30 days. The user might also specify an operation to filter this data to show only peak usage times.
  • the method comprises receiving, at the user interface module [302], a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report.
  • the fourth request involves adding the first dashboard, designated as a waterfall dashboard, as a supporting dashboard to the second dashboard.
  • the fourth request includes utilizing the stored report for interconnecting dashboards and enabling the sequential execution of precomputed data from one dashboard to influence another.
  • the third request, which is for generating a second dashboard, involves setting up a new dashboard that could monitor different network parameters or KPIs.
  • the user might want to incorporate insights from the first dashboard, such as the busiest hour of network usage, into the second dashboard's calculations.
  • the system links the first dashboard's precomputed data to the second dashboard, allowing for comprehensive analysis. For example, if the first dashboard calculates the busiest hour for network throughput, this data can then be used in the second dashboard to analyse Success Call Ratio during those busy hours.
  • the third request sets up the new monitoring parameters, while the fourth request integrates previously computed data to enhance the new dashboard's analytical capabilities.
  • the method [400] comprises interconnecting, by the IPM module [100a], the first dashboard and the second dashboard.
  • the method [400] comprises computing, by the computation module [306], a pre-computed data.
  • the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. Further, the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard (an illustrative sketch of these request and filtering structures follows this list).
  • UI Module to IPM: The connection between the User Interface (UI) [532] and the Integrated Performance Management (IPM) module [100a] is established using an HTTP connection.
  • IPM to DDL: The connection between the IPM module [100a] and the Distributed Data Lake (DDL) [535] is established using a TCP (Transmission Control Protocol) connection.
  • TCP is a reliable and connection-oriented protocol that ensures the integrity and ordered delivery of data packets.
  • the IPM module [100a] can save and retrieve relevant data from the DDL [535] for computations, ensuring data consistency and reliability.
  • IPM to CL: The connection between the IPM module [100a] and the Computation Layer (CL) [534] is also established using an HTTP connection. Similar to the UI [532] to IPM [100a] module connection, this HTTP connection allows the IPM module [100a] to forward requests and computations, including large computations and/or complex queries, to the CL [534]. The CL [534] processes the received instructions and returns the results or intermediate data to the IPM module [100a].
  • CL to DFS: The connection between the Computation Layer (CL) [534] and the Distributed File System (DFS) [536] is established using a File IO connection.
  • File IO typically refers to the operations performed on files, such as reading from or writing to files.
  • the CL [534] utilizes File IO operations to store and manage large files used in computations within the DFS [536].
  • the DFS [536] usually includes historical data, i.e., data is stored for longer time periods. This connection allows the CL [534] to efficiently access and manipulate the required files.
  • the plurality of modules includes a load balancer [537] for managing connections.
  • the load balancer [537] is adapted to distribute the incoming network traffic across multiple servers or components to ensure optimal resource utilization and high availability.
  • the load balancer [537] is commonly employed to evenly distribute incoming requests across multiple instances of the IPM module [100a] or CL [534], providing scalability and fault tolerance to the system [100]. Overall, these connections and the inclusion of the load balancer [537] help to facilitate effective communication, data transfer, and resource management within the system, enhancing its performance and reliability (a load-balancing sketch follows this list).
  • the user creates a dashboard request on the User Interface (UI) [532] and designates it as an interlinked Dashboard eligible for precomputation.
  • the Integrated performance management module [100a] processes the dashboard request and generates the computed output for each interlinked dashboard at the computation layer [534].
  • the computed output is stored in a suitable format for future reference and retrieval.
  • the user may add the created interlinked dashboard as a supporting dashboard to a newly created or existing dashboard. This request is delegated to the Computation Layer (CL) [534], which in turn processes the execution requests, accesses the stored data, and performs the necessary computations using the precomputed data from the interlinked Dashboard.
  • the precomputed output of these operations is stored and used to filter the values of KPIs in subsequent dashboard requests by the user for execution.
  • the user may filter the values of KPIs as per the requirement.
  • the present disclosure provides a technically advanced solution for generation of one or more interconnected dashboards.
  • the present solution particularly involves categorizing dashboards as interlinked and/or interconnected Dashboards, performing precomputations, and utilizing the precomputed data in associated dashboards.
  • Interlinked and/or interconnected dashboards provide a sequential and consolidated view of data, allowing for the precomputation of essential dashboards.
  • the need for parallel observation of multiple dashboards is eliminated. Instead, the focus shifts to analysing interconnected KPIs on a single consolidated dashboard, where computations are performed in advance, and the outcomes are readily available for subsequent computations. This approach reduces cognitive overload, simplifies data synchronization, improves processing time, and enhances scalability.
  • a network engineer uses the user interface [532] to designate a primary dashboard that monitors network throughput as a Waterfall Dashboard.
  • This dashboard includes KPIs like total data transferred, peak transfer rates, and times of peak activity.
  • the user interface [532] sends a request to the Integrated Performance Management (IPM) module [100a] to precompute data for the Waterfall Dashboard.
  • the IPM acknowledges the request and forwards it to the Computation Layer (CL) [534].
  • the CL [534] processes the request, performing computations to identify, for example, the busiest hours or days for network traffic in the past 90 days (a sketch of this busiest-hour computation follows this list).
  • the engineer then creates a new dashboard to monitor server response times and links this new dashboard to the precomputed data from the Waterfall Dashboard.
  • a request is sent from the user interface [532] to the IPM module [100a].
  • the IPM module [100a] then forwards this request to the CL [534], which retrieves the precomputed data from the DDL [535] and uses it to calculate server response times during the busiest network hours.
  • the CL [534] sends these calculated results back to the IPM module [100a], which saves the data in a predetermined format.
  • FIG. 6 illustrates an exemplary sequence flow diagram illustrating a process [600] for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
  • the process [600] includes the creation of a Waterfall Dashboard.
  • the user [602] initiates this process through the user interface (UI) [604] by selecting options to create a new dashboard and marking it as a Waterfall Dashboard, which indicates that it will be used for precomputation purposes.
  • UI user interface
  • the process [600] includes requesting resources from the Load Balancer (LB).
  • the UI [604] sends a request to the load balancer [100k] to identify an available instance of the Integrated Performance Management (IPM) module [100a] for the dashboard creation.
  • the process [600] includes the load balancer [100k] identifying an available instance of IPM module [100a]. Once an available instance of IPM module [100a] is identified, the load balancer [100k] forwards the request to the instance of IPM module [100a] to begin the dashboard creation process.
  • the process [600] includes saving the dashboard information.
  • the IPM module [100a] receives the request and saves the initial configuration and metadata of the Waterfall Dashboard to the Distributed Data Lake (DDL) [100u], such that all necessary information is saved.
  • the process [600] includes sending an acknowledgment.
  • the IPM module [100a] sends an acknowledgment back to the UI [604] indicating that the dashboard information has been successfully saved in the DDL [100u].
  • the process [600] includes notifying the user [602] of the successful save.
  • the UI [604] displays a message to the user [602] confirming that the Waterfall Dashboard has been saved successfully.
  • the process [600] includes computing the Waterfall Dashboards.
  • the IPM module [100a] initiates the computation of the Waterfall Dashboard by sending the computation request to the Computation Layer (CL) [100d], which is responsible for performing the intensive data processing tasks.
  • the process [600] includes bringing data from the DFS.
  • the CL [100d] retrieves the required data from the Distributed File System (DFS) [100j], which contains the raw and historical data needed for the computations.
  • the process [600] includes responding to the data request.
  • the DFS [100j] sends the requested data back to the CL [100d], enabling the CL to perform the necessary computations.
  • the process [600] includes performing the Waterfall computation.
  • the CL [100d] processes the data to compute the pre-defined KPIs and other metrics for the Waterfall Dashboard, ensuring the data is ready for further use.
  • the process [600] includes saving the generated output.
  • the CL [100d] sends the computed results back to the IPM module [100a], which then saves this output data for future reference and use in interconnected dashboards.
  • the process [600] includes creating and saving an Excel file.
  • the IPM module [100a] creates an Excel file containing the computed results and saves it for easy access and review by the user.
  • the process [600] includes the execution of an associated dashboard.
  • the user [602] initiates this process through the UI [604], requesting the generation of a second dashboard that will use the precomputed data from the Waterfall Dashboard.
  • the process [600] includes requesting resources from the Load Balancer (LB).
  • the UI [604] sends a request to the load balancer [100k] to identify an available instance of IPM module [100a] to handle the new dashboard generation.
  • the process [600] includes the load balancer [100k] identifying an available IPM instance.
  • the load balancer [100k] finds an available IPM module [100a] and forwards the dashboard generation request to it.
  • the process [600] includes forwarding the request.
  • the IPM module [100a] receives the request and forwards it to the CL [100d] to access the precomputed data and perform any additional computations required for the second dashboard.
  • the process [600] includes accessing stored data.
  • the CL [100d] retrieves the relevant precomputed data and any additional necessary data from the DFS [100j] to generate the second dashboard.
  • the process [600] includes sending the required data.
  • the DFS [100j] sends the necessary data back to the CL [100d], enabling it to complete the computations for the second dashboard.
  • the process [600] includes performing data computation.
  • the CL [100d] processes the retrieved data to compute the required KPIs and metrics for the second dashboard.
  • the process [600] includes sending the KPI data.
  • the CL [100d] sends the computed KPI data and other relevant results back to the IPM module [100a].
  • the process [600] includes finalizing the output.
  • the IPM module [100a] processes the received data, finalizes the output, and prepares it for presentation to the user.
  • the process [600] includes sending the computed data along with a notification.
  • the IPM module [100a] sends the final computed data and a notification to the load balancer [100k] to inform the user that the second dashboard is ready.
  • the process [600] includes forwarding the notification.
  • the load balancer [100k] forwards the notification and the computed data to the UI [604].
  • the process [600] includes presenting the output to the user.
  • the UI [604] displays the final output to the user [602], showing the results of the second dashboard.
  • the process [600] includes the user [602] clicking on the notification. This step is initiated when the user receives a notification about the availability of the new dashboard or the updated data. By clicking on this notification, the user signals their intent to view more detailed information, or results related to the dashboard.
  • the process [600] includes raising a request to show the result. This step involves the UI [604] responding to the user's interaction by displaying an initial summary or overview of the dashboard results. This gives the user a quick glance at the key metrics or highlights of the computed data.
  • the process [600] includes fetching the result from the UI [604] to the load balancer [100k].
  • the UI [604] sends a request to the load balancer [100k] to retrieve the detailed data necessary for a comprehensive view of the dashboard. This step ensures that the UI can present the most current and detailed data to the user.
  • the process [600] includes forwarding the request from the load balancer [100k] to the IPM module [100a].
  • the load balancer [100k], upon receiving the request from the UI [604], forwards this request to the IPM module [100a] to obtain the detailed results.
  • the process [600] includes finalizing the output at the IPM module [100a].
  • the IPM module [100a] processes the request and prepares the detailed results, ensuring that all relevant data is accurately compiled and ready for presentation.
  • the process [600] includes sending the computed KPI data from the IPM module [100a] to the load balancer [100k]. KPI data refers to the computed KPIs based on the request.
  • the IPM module [100a] sends this computed KPI data to the load balancer [100k], ensuring that the user receives the most recent information.
  • the process [600] includes forwarding the data from the load balancer [100k] to the UI [604].
  • the load balancer [100k] takes the detailed results and the computed KPI data from the IPM module [100a] and sends it to the UI [604] for display to the user.
  • the process [600] includes showing the result from the UI [604] to the user [602].
  • the UI [604] presents the final, detailed results to the user [602].
  • This step concludes the process, providing the user with a comprehensive view of the dashboard data, including any updated KPIs and detailed metrics, enabling effective analysis and decision-making.
  • Interlinked Dashboards provide a consolidated view of interconnected KPIs, allowing for a comprehensive understanding of network performance in a single dashboard.
  • Resource Optimization: By precomputing essential data, the disclosure optimizes computational resources and enhances scalability, making it suitable for large-scale networks and heavy computations.
  • the interlinked dashboard addresses the challenge of integrating the outcomes of computations for different KPIs by providing a visual representation of the relationships and dependencies between KPIs.
  • the cascading format of the dashboard allows the users to see how changes in one KPI affect other KPIs downstream. This approach provides a more comprehensive understanding of the network's performance, allowing network operators and stakeholders to make more informed decisions about how to optimize network performance and service quality.
  • another aspect of the present disclosure relates to a user equipment (UE) for generation of one or more interconnected dashboards.
  • the UE comprising: a processor configured to: transmit a first request for generation of a first dashboard; transmit a second request to treat the first dashboard as a waterfall dashboard; transmit a third request for generation of a second dashboard; transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein for generation of the one or more interconnected dashboards, process comprises: receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302]; saving, by a storage unit [305], an associated information of the second request; forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report; forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305]; interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
  • Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instruction for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module [302] to receive: a first request for generation of a first dashboard; a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module [100a] connected with at least the user interface module [302], the IPM module [100a] to: receive, the second request from the user interface module [302]; save, in a storage unit [305], an associated information of the second request; forward, to a computation module [306], the associated information of the second request for generating a report; the computation module [306] connected at least to the IPM module [100a], the computation module [306] to: forward, the report to the IPM module [100a], for storing at the storage unit [305]; the user interface module [302] to further receive: a third request for generation of a second dashboard; a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; the IPM module [100a] to further interconnect the first dashboard with the second dashboard; and the computation module [306] to further compute a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
  • the terms “first”, “second”, “primary”, “secondary”, “target” and the like, herein do not denote any order, ranking, quantity, or importance, but rather are used to distinguish one element from another.
  • the present disclosure provides a technically advanced solution for generating and interconnecting dashboards.
  • the present solution automates the precomputation of key performance indicators (KPIs) and their integration into various dashboards, allowing users to designate a dashboard as a waterfall dashboard, making it eligible for precomputation, and use its output as a base for other dashboards. This enables sequential execution of dashboards and facilitates the interconnection of multiple dashboards to provide a comprehensive understanding of network performance.
  • the present solution addresses the need for handling complex computations over extended intervals, ensuring that the results of these computations can be used in subsequent calculations for other KPIs or counters.
  • This feature allows users to define the importance of one dashboard's data for the computation of others, enhancing efficiency and accuracy in performance management.
  • users can set a time range for the associated information of a request, allowing the saved pre-computed values to be used for calculating the value of another KPI within a specified time range of up to 90 days in the past. This flexibility ensures that historical data can be effectively utilized for current performance evaluations.
  • the present solution enables users to receive and apply one or more key performance indicators (KPIs) and aggregations, as well as operations on selected KPIs and aggregations, in a user-friendly manner.
  • This comprehensive approach offers a robust and efficient method for managing and optimizing network performance through interconnected and precomputed dashboards.
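By way of a non-limiting illustration, the request structures referenced in the list above (the first, second and fourth requests) and the use of a waterfall dashboard's precomputed output to filter KPI values in a dependent dashboard may be sketched in Python as follows; every field, class and function name here is an assumption for illustration only, not part of the disclosure.

```python
# Illustrative sketch only: hypothetical request shapes and a filtering helper.
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class DashboardRequest:            # "first request" / "third request"
    name: str                      # e.g. "Network Traffic Dashboard"
    kpis: List[str]                # e.g. ["throughput", "packet_loss_rate", "latency"]
    aggregation: str               # e.g. "hourly_avg" or "daily_total"
    filters: Dict[str, str] = field(default_factory=dict)   # e.g. {"region": "west"}

@dataclass
class WaterfallRequest:            # "second request": designate a dashboard for precomputation
    dashboard: str
    precompute: str                # e.g. "busiest_hour_per_day"
    lookback_days: int = 90

@dataclass
class SupportingLinkRequest:       # "fourth request": interconnect via the stored report
    waterfall_dashboard: str
    dependent_dashboard: str
    stored_report: str

def filter_by_precomputed(rows: List[dict], busy_hours: Set[Tuple[str, int]]) -> List[dict]:
    """Keep only the dependent dashboard's KPI rows that fall in the precomputed busy hours."""
    return [r for r in rows if (r["day"], r["hour"]) in busy_hours]
```

In this sketch the fourth request simply names the waterfall dashboard, the dependent dashboard and the stored report that links them, mirroring the interconnection performed by the IPM module [100a].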
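Similarly, the load balancing and HTTP forwarding described above can be sketched, under the assumption of two hypothetical IPM endpoints, as a simple round-robin dispatcher; the actual load balancer [537]/[100k] is not limited to this scheme.

```python
# Illustrative sketch only: round-robin distribution of UI requests across IPM instances.
import itertools
import urllib.request

IPM_INSTANCES = ["http://ipm-1:8080", "http://ipm-2:8080"]   # assumed endpoints
_next_instance = itertools.cycle(IPM_INSTANCES)

def forward_to_ipm(path: str, body: bytes) -> bytes:
    """Pick the next IPM instance and forward the dashboard request over HTTP."""
    instance = next(_next_instance)
    request = urllib.request.Request(
        instance + path,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.read()
```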
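Finally, the running example above, precomputing the busiest hour of each day over a 90-day window, reusing it for the Success Call Ratio, and persisting the result as a report, may be sketched as below; the inputs are synthetic and the report is written as CSV purely for brevity, whereas the disclosure describes an Excel file saved by the IPM module [100a].

```python
# Illustrative sketch only: waterfall precomputation and its reuse by a dependent dashboard.
import csv
from collections import defaultdict

def busiest_hour_per_day(samples):
    """samples: iterable of (day, hour, throughput). Returns {day: busiest_hour}."""
    per_day = defaultdict(lambda: defaultdict(float))
    for day, hour, throughput in samples:
        per_day[day][hour] += throughput
    return {day: max(hours, key=hours.get) for day, hours in per_day.items()}

def success_call_ratio_in_busy_hours(call_stats, busiest):
    """call_stats: iterable of (day, hour, attempted, succeeded); restricted to busy hours."""
    attempted_total = succeeded_total = 0
    for day, hour, attempted, succeeded in call_stats:
        if busiest.get(day) == hour:
            attempted_total += attempted
            succeeded_total += succeeded
    return succeeded_total / attempted_total if attempted_total else None

def save_report(busiest, path="waterfall_report.csv"):
    """Persist the precomputed output so it can be stored and reused by other dashboards."""
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["day", "busiest_hour"])
        writer.writerows(sorted(busiest.items()))
```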

Abstract

The present disclosure relates to a method and a system for generation of one or more interconnected dashboards. The disclosure encompasses receiving a first request for generation of a first dashboard and a second request to treat the first dashboard as a waterfall dashboard; saving an associated information of the second request; forwarding the associated information for generating a report, and the report for storing; receiving a third request for generation of a second dashboard and a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; interconnecting the first dashboard and the second dashboard; receiving key performance indicators (KPIs) and aggregations, and operations to be applied on the selected KPIs and the aggregations; and computing a pre-computed data.

Description

METHOD AND SYSTEM FOR GENERATION OF INTERCONNECTED DASHBOARDS
TECHNICAL FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to generation of one or more interconnected dashboards.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0004] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/ grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0005] In network performance management systems, particularly in visualization sub-systems, the process of performing computations for KPIs and incorporating their outcomes into other computations can present significant challenges. One of the main challenges is the handling of longer intervals and large amounts of data. Network performance management systems deal with vast quantities of data collected over extended periods of time. Performing computations for KPIs within these longer intervals can be time-consuming and resource-intensive. The sheer volume of data can cause delays in processing, leading to inefficiencies in the overall analysis process.
[0006] Additionally, integrating the outcomes of these computations into subsequent computations for other KPIs can be problematic. The existing methods often lack the capability to efficiently connect and relate the different KPIs. This results in a fragmented understanding of the network performance, as the relationships and dependencies between KPIs may not be fully captured or taken into account. As a result, the insights gained from the computations may not provide a comprehensive view of the network's overall performance and may fail to identify critical issues or trends.
[0007] Moreover, the lack of efficiency in performing these computations and incorporating their outcomes limits the ability of network operators and stakeholders to make timely and informed decisions. Delays in data processing and analysis hinder the proactive management of the network, as potential issues or failures may go undetected or unaddressed until they become significant problems.
[0008] In some instances, this large amount of data has been visualized using multiple dashboards parallelly. However, parallel observation of multiple dashboards can pose several problems, especially when performing computations for Key Performance Indicators (KPIs) and incorporating their outcomes into subsequent computations. Additionally, the large amount of data involved in these computations can further complicate the process. The main challenges include:
[0009] When performing computations for specific KPIs and incorporating their outcomes into subsequent computations, data synchronization becomes crucial. Ensuring that the data from different dashboards is aligned and consistent in real-time can be challenging, especially when dealing with large datasets or disparate data sources.
[0010] Further, performing computations for KPIs and incorporating their outcomes can be time-consuming, especially when dealing with a large amount of data. Parallel computation of multiple KPIs in real-time may not be feasible within a reasonable time frame, as it can strain computational resources and impact overall system performance.
[0011] Complexity of Dependencies: The dependencies between different KPI computations can be complex, making it challenging to determine the correct order of computations and incorporate their outcomes accurately. Managing the dependencies and ensuring that the results are properly synchronized can be intricate, especially when there are interdependencies between different KPIs.
[0012] Moreover, dealing with a large amount of data and performing computations across multiple dashboards requires significant computational resources. Scaling the system to handle the increased workload and ensuring optimal performance can be a challenge, particularly when the amount of data and the complexity of computations grow.
[0013] Accordingly, it may be noted that telecommunication monitoring services face several challenges when it comes to incorporating results into other computations for KPIs or counters data. While network operators can gain a more comprehensive view of their network performance by leveraging advanced analytics capabilities and integrated performance management systems, such analytics capabilities, as they currently exist, are inefficient and therefore not suitable for such instances.
[0014] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
SUMMARY
[0015] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] An aspect of the present disclosure may relate to a method for generation of one or more interconnected dashboards. The method comprises receiving, at a user interface module, a first request for generation of a first dashboard. The method further comprises receiving, at the user interface module, a second request to treat the first dashboard as a waterfall dashboard. The method further comprises receiving, at an integrated performance management (IPM) module, the second request from the user interface module. The method further comprises saving, by a storage unit, at the IPM module, an associated information of the second request. The method further comprises forwarding, by the IPM module to a computation module, the associated information of second request for generating a report. The method further comprises forwarding, by the computation module to the IPM module, the report for storing in the storage unit. The method further comprises receiving, at the user interface module, a third request for generation of a second dashboard. The method further comprises receiving, at the user interface module, a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The method further comprises interconnecting, by the IPM module, the first dashboard and the second dashboard. The method further comprises computing, by the computation module, a pre-computed data. It is to be noted that the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. Further, the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard.
[0017] In an exemplary aspect of the present disclosure, the method further comprises sending, at the IPM module, an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
[0018] In an exemplary aspect of the present disclosure, the method uses the first dashboard, which is treated as the waterfall dashboard, as the supporting dashboard for an existing dashboard.
[0019] In an exemplary aspect of the present disclosure, the method comprises receiving, at the user interface module, the one or more key performance indicators (KPIs) and one or more aggregations in the third request for the first dashboard; and receiving, at the user interface module, the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
[0020] In an exemplary aspect of the present disclosure, the method further comprises setting, by the computation module, a time range for the associated information of the second request.
[0021] In an exemplary aspect of the present disclosure, the pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module.
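As a minimal sketch of this time-range constraint, assuming the 90-day lookback used in the examples of this disclosure and hypothetical helper names, the period received from the user interface module might be validated as follows:

```python
# Illustrative sketch only: clamp a requested period to the permitted lookback window.
from datetime import date, timedelta
from typing import Optional, Tuple

MAX_LOOKBACK_DAYS = 90   # the example lookback used elsewhere in this disclosure

def clamp_period(start: date, end: date, today: Optional[date] = None) -> Tuple[date, date]:
    """Return a (start, end) pair guaranteed to lie within the set time range."""
    today = today or date.today()
    earliest = today - timedelta(days=MAX_LOOKBACK_DAYS)
    end = min(end, today)
    start = max(start, earliest)
    if start > end:
        raise ValueError("requested period lies outside the set time range")
    return start, end
```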
[0022] Another aspect of the present disclosure may relate to a system for generation of one or more interconnected dashboards. The system comprises a user interface module which is configured to receive, a first request for generation of a first dashboard. The user interface module is further configured to receive, a second request to treat the first dashboard as a waterfall dashboard. The system further comprises an integrated performance management (IPM) module connected with at least the user interface module. The IPM module is configured to receive, the second request from the user interface module. The IPM module is further configured to save, in a storage unit, an associated information of the second request. The IPM module is further configured to forward, to a computation module, the associated information of the second request for generating a report. The computation module is connected at least to the IPM module, and the computation module is configured to forward, the report to the IPM module, for storing at the storage unit. The user interface module is further configured to receive, a third request, for generation of a second dashboard. The user interface module is further configured to receive, a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The IPM module is further configured to interconnect the first dashboard with the second dashboard. The computation module is further configured to compute a precomputed data. The pre-computed data comprises one or more values for one or more KPIs based on one or more operations. The pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
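A schematic sketch of the interplay recited above is given below; the class and method names are hypothetical and the computation is a placeholder, intended only to show the save-delegate-store-interconnect sequence between the user interface module, the IPM module, the computation module and the storage unit.

```python
# Illustrative sketch only: hypothetical wiring of the system modules.
class StorageUnit:                       # stands in for the storage unit [305]
    def __init__(self):
        self._store = {}
    def save(self, key, value):
        self._store[key] = value
    def load(self, key):
        return self._store[key]

class ComputationModule:                 # stands in for the computation module [306]
    def generate_report(self, associated_info):
        # placeholder; the real layer computes over DFS/DDL data
        return {"dashboard": associated_info["dashboard"], "values": []}

class IPMModule:                         # stands in for the IPM module [100a]
    def __init__(self, storage, computation):
        self.storage, self.computation = storage, computation
    def handle_waterfall_request(self, associated_info):
        self.storage.save(("info", associated_info["dashboard"]), associated_info)
        report = self.computation.generate_report(associated_info)
        self.storage.save(("report", associated_info["dashboard"]), report)
        return "acknowledged"
    def interconnect(self, waterfall_dashboard, dependent_dashboard):
        self.storage.save(("link", dependent_dashboard), waterfall_dashboard)

ipm = IPMModule(StorageUnit(), ComputationModule())
ipm.handle_waterfall_request({"dashboard": "Network Traffic Dashboard"})
ipm.interconnect("Network Traffic Dashboard", "Success Call Ratio Dashboard")
```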
[0023] Yet another aspect of the present disclosure may relate to a user equipment (UE) for generation of one or more interconnected dashboards. The UE comprising: a processor configured to: transmit a first request for generation of a first dashboard; transmit a second request to treat the first dashboard as a waterfall dashboard; transmit a third request for generation of a second dashboard; transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein for generation of the one or more interconnected dashboards, process comprises: receiving, at an integrated performance management (IPM) module, the second request from the user interface module; saving, by a storage unit, an associated information of the second request; forwarding, by the IPM module to a computation module, the associated information of the second request for generating a report; forwarding, by the computation module to the IPM module, the report for storing in the storage unit; interconnecting, by the IPM module, the first dashboard and the second dashboard; and computing, by the computation module, a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
[0024] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instruction for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module to receive: a first request for generation of a first dashboard; a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module connected with at least the user interface module, the IPM module to: receive, the second request from the user interface module; save, in a storage unit, an associated information of the second request; forward, to a computation module, the associated information of the second request for generating a report; the computation module connected at least to the IPM module, the computation module to: forward, the report to the IPM module, for storing at the storage unit; the user interface module to further receive: a third request, for generation of a second dashboard; a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; the IPM module to further interconnect the first dashboard with the second dashboard; and the computation module to further to compute a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
OBJECTS OF THE DISCLOSURE
[0025] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0026] It is an object of the present disclosure to provide a system for efficiently processing and computing data, allowing users to create interconnected dashboards and perform computations on the precomputed data.
[0027] It is another object of the present disclosure to facilitate the creation, computation, and visualization of interconnecting dashboards.
[0028] It is another object of the present disclosure to provide a solution that eliminates the need to monitor multiple dashboards simultaneously.
[0029] It is another object of the present disclosure to provide a solution that works according to a sequential execution approach for reducing the overall computation time for associated dashboards and providing linked results effortlessly.
[0030] It is yet another object of the present disclosure to provide a method of integrating the outcomes of computations for different KPIs by providing a visual representation of the relationships and dependencies between KPIs.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0032] FIG. 1 illustrates an exemplary block diagram of an integrated performance management system, in accordance with the exemplary embodiments of the present disclosure.
[0033] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0034] FIG. 3 illustrates an exemplary block diagram of a system for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0035] FIG. 4 illustrates a method flow diagram for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0036] FIG. 5 illustrates an exemplary system architecture for implementing interlinked dashboard, in accordance with the exemplary embodiments of the present disclosure.
[0037] FIG. 6 illustrates an exemplary sequence flow diagram illustrating a process for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0038] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0039] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0040] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0041] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0042] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0043] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0044] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0045] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smartdevice”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0046] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0047] As used herein “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also be referred to as a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0048] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0049] As used herein the user interface module may include an in-built transceiver unit that has at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/ components within the system and/or connected with the system.
[0050] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing method and system of generating one or more interconnected dashboards.
[0051] FIG. 1 illustrates an exemplary block diagram of an integrated performance management system [100], in accordance with the exemplary embodiments of the present disclosure. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: Integrated performance management module [100a], normalization layer [100b], computation layer (CL) [100d], anomaly detection layer [100o], streaming engine [100l], load balancer (LB) [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system, mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n]. Exemplary connections between these subsystems are also as shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0052] Following are the various components of the system [100], the various components may include:
[0053] Integrated performance management module [100a] comprises a 5G performance engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100w].
[0054] 5G Performance Engine [100v]: The 5G Performance engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
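For illustration only, the kind of hourly aggregation the 5G Performance engine [100v] performs on raw counters before they are written to the Distributed Data Lake [100u] might be sketched as follows (the sample shape and names are assumptions):

```python
# Illustrative sketch only: hourly aggregation of raw performance counters.
from collections import defaultdict
from statistics import mean

def aggregate_hourly(counter_samples):
    """counter_samples: iterable of (element_id, hour, metric, value) tuples."""
    buckets = defaultdict(list)
    for element_id, hour, metric, value in counter_samples:
        buckets[(element_id, hour, metric)].append(value)
    # one aggregated value per network element, hour and metric
    return {key: mean(values) for key, values in buckets.items()}
```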
[0055] 5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
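A few generic KPI derivations of the kind the KPI engine [100w] computes from such counters are sketched below; the formulas are textbook examples rather than the engine's actual definitions:

```python
# Illustrative sketch only: generic KPI formulas derived from aggregated counters.
def packet_loss_rate(packets_lost: int, packets_sent: int) -> float:
    return 100.0 * packets_lost / packets_sent if packets_sent else 0.0

def average_throughput_mbps(bytes_transferred: int, interval_seconds: int) -> float:
    return (bytes_transferred * 8) / (interval_seconds * 1_000_000) if interval_seconds else 0.0

def success_call_ratio(calls_succeeded: int, calls_attempted: int) -> float:
    return 100.0 * calls_succeeded / calls_attempted if calls_attempted else 0.0
```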
[0056] Ingestion layer: The Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
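The validate-then-route behaviour of the Ingestion layer can be sketched as follows, with the record fields and destination names being assumptions made purely for illustration:

```python
# Illustrative sketch only: validate an incoming record, then choose its destination.
REQUIRED_FIELDS = {"source", "type", "timestamp", "payload"}
ROUTES = {"counter": "normalization", "alarm": "streaming", "cdr": "message_broker"}

def ingest(record: dict) -> str:
    """Check integrity and correctness, then return the component the record is routed to."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"rejected record, missing fields: {sorted(missing)}")
    return ROUTES.get(record["type"], "normalization")
```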
[0057] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager for maintaining and improving the quality of services, and the Streaming Engine for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
[0058] Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalizer Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
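In the same illustrative spirit, normalization to a common standard and caching of frequently accessed data might be sketched as below, with assumed field aliases and an in-process cache standing in for the Caching Layer [100c]:

```python
# Illustrative sketch only: map vendor-specific fields to a common schema and cache lookups.
from functools import lru_cache

FIELD_ALIASES = {"ts": "timestamp", "time": "timestamp", "val": "value", "v": "value"}

def normalize(record: dict) -> dict:
    """Rename fields onto the common standard used by downstream sub-systems."""
    return {FIELD_ALIASES.get(key, key): value for key, value in record.items()}

@lru_cache(maxsize=4096)
def cached_lookup(element_id: str) -> str:
    # stand-in for a read that would otherwise go to the data lake
    return f"metadata-for-{element_id}"
```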
[0059] Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer, and also feeds it to the Message Broker. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services. The Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0060] Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
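The publish-subscribe behaviour of the Message Broker [100e] may be illustrated, purely by way of example, with the following minimal Python sketch; the topic name and the subscriber callbacks are hypothetical, and no particular broker implementation is implied by the present disclosure.

```python
from collections import defaultdict

class MessageBroker:
    """Toy publish-subscribe broker keyed by message-based topics."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of consumer callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every permanent or ad-hoc consumer of the topic receives the message.
        for callback in self._subscribers[topic]:
            callback(message)

broker = MessageBroker()
broker.subscribe("normalized.counters", lambda m: print("analysis engine got", m))
broker.subscribe("normalized.counters", lambda m: print("streaming engine got", m))
broker.publish("normalized.counters", {"kpi": "throughput", "value": 812.4})
```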
[0061] Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler should be adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
[0062] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another microservice. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.

[0063] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0064] Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework emphasizes both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time, as shown in the sketch below. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
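A minimal sketch of the chain-execution model described above is given below, assuming Python's standard concurrent.futures module; the task contents are hypothetical. Tasks within a chain run sequentially, feeding each result forward, while the chains themselves run in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain):
    """Execute the tasks of one chain sequentially, feeding each result forward."""
    result = None
    for task in chain:
        result = task(result)
    return result

# Hypothetical task chains; each task takes the previous task's output as input.
chain_a = [lambda _: list(range(10)), lambda xs: sum(xs)]
chain_b = [lambda _: list(range(5)),  lambda xs: max(xs)]

# Chains execute simultaneously, tasks within a chain execute in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_chain, [chain_a, chain_b]))
print(results)  # [45, 4]
```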
[0065] Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
[0066] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header-based and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
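The three routing strategies may be illustrated with the following Python sketch; the backend names, the X-Target-Service header, and the event_requested_by context field are assumptions of the example rather than features mandated by the present disclosure.

```python
import itertools

class LoadBalancer:
    """Sketch of round-robin, header-based, and context-based dispatch."""

    def __init__(self, backends):
        self.backends = backends
        self._round_robin = itertools.cycle(backends)

    def dispatch(self, request):
        headers = request.get("headers", {})
        context = request.get("context", {})
        # Header-based dispatch: an explicit header pins the target service.
        if "X-Target-Service" in headers:
            return headers["X-Target-Service"]
        # Context-based dispatch: e.g. an event acknowledgment is routed back
        # to the microservice that originally requested the event.
        if context.get("event_requested_by"):
            return context["event_requested_by"]
        # Otherwise fall back to simple round-robin rotation.
        return next(self._round_robin)

lb = LoadBalancer(["ipm-instance-1", "ipm-instance-2"])
print(lb.dispatch({"headers": {"X-Target-Service": "reporting-engine"}}))
print(lb.dispatch({"context": {"event_requested_by": "correlation-engine"}}))
print(lb.dispatch({}))  # round-robin
```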
[0067] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine's [100l] goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.

[0068] Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[0069] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also referred to herein as a computer system [200]) upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for generation of one or more interconnected dashboards, utilising the system. In another implementation, the computing device [200] itself implements the method for generation of one or more interconnected dashboards using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0070] The computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of the computing device [200] include, but are not limited to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, the computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, highlighting their versatility in various technological applications.
[0071] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a processor [204] coupled with the bus [202] for processing information. The processor [204] may be, for example, a general purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random access memory (RAM) or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0072] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0073] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.

[0074] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0075] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210] or other non-volatile storage for later execution.
[0076] Referring to FIG. 3, an exemplary block diagram of a system [300] for generation of one or more interconnected dashboards is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one user interface module [302], at least one Integrated Performance Management (IPM) module [100a], at least one storage unit [305], and at least one computation module [306]. All of the components/units of the system [300], including all units shown within the system in the figures, are assumed to be connected to each other unless otherwise indicated below. Also, while only a few units are shown in FIG. 3, the system [300] may comprise multiple such units, or any number of said units, as required to implement the features of the present disclosure.
[0077] The system [300] is configured for generation of the one or more interconnected dashboards, with the help of the interconnection between the components/units of the system [300].

[0078] For generation of the one or more interconnected dashboards, the user interface module [302] of the system [300] is configured to receive, a first request for generation of a first dashboard. The first request in the specification refers to the initial user action to generate a dashboard within the network performance management system. The first request includes several key components such as the dashboard's name, the type of data it will display, and the specific key performance indicators (KPIs) and metrics the user wants to monitor. For example, a user might issue a first request to create a "Network Traffic Dashboard," specifying that it should display KPIs like total data throughput, packet loss rate, and latency over selected time intervals. Additionally, the request can include parameters for data aggregation methods (e.g., hourly averages, daily totals) and any initial filter criteria (e.g., specific geographic regions or network nodes). By defining these elements, the first request sets up the fundamental structure and purpose of the dashboard, enabling the system to gather and organize the necessary data for effective performance monitoring and analysis.
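Purely as an illustration of the kind of information a first request may carry, a hypothetical request payload is sketched below in Python; all field names and values are assumptions of the example and are not defined by the present disclosure.

```python
# Hypothetical payload for the "first request" sent from the user interface
# module [302]; field names are illustrative only.
first_request = {
    "dashboard_name": "Network Traffic Dashboard",
    "data_type": "counters",
    "kpis": ["total_data_throughput", "packet_loss_rate", "latency"],
    "aggregation": {"method": "hourly_average"},
    "filters": {"region": "Mumbai", "network_node": "gNB-1187"},
    "time_interval": {"last_days": 30},
}
```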
[0079] The user interface module [302] is further configured to receive, a second request to treat the first dashboard as a waterfall dashboard. The second request refers to the user action of designating the first dashboard as a waterfall dashboard. The second request includes specific information such as the type of dashboard being created, the parameters that need to be precomputed, and the logic for how these parameters should be processed. For example, if the first dashboard tracks network throughput, the second request might specify that the busiest hour for each day should be calculated and stored. This precomputed data can then be used in the second dashboard to analyse success call ratios during those busy hours. By setting these parameters in the second request, users ensure that the necessary computations are done in advance, streamlining subsequent analyses, and making the overall process more efficient and accurate.
[0080] The integrated performance management (IPM) module [100a] is configured to receive, the second request from the user interface module [302]. The IPM module [100a] is further configured to send, an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard. It is pertinent to note that the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard. The waterfall dashboard is a specialized type of dashboard within a system that is designated for precomputation. This means that the data and key performance indicators (KPIs) associated with a waterfall dashboard are calculated in advance, allowing this precomputed data to be used as a foundational input for other dashboards. For example, if a dashboard tracks the busiest hour of network traffic each day (a Throughput KPI) over the past 90 days, this precomputed data can then be used to calculate and display related metrics, such as the Success Call Ratio KPI, within the same or another dashboard. By designating a dashboard as a waterfall dashboard, users can streamline complex sequential calculations, ensuring efficient and timely performance analysis without the need to recompute data repeatedly. This approach enhances the overall efficiency and effectiveness of network performance management by enabling interconnected and dependent dashboards to utilize precomputed outputs, thereby providing a more comprehensive and accurate understanding of network performance dynamics.
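The precomputation performed for a waterfall dashboard, such as the busiest hour of each day for a Throughput KPI, may be sketched as follows; the function name and the sample format are assumptions of the example.

```python
from collections import defaultdict
from datetime import datetime

def busiest_hour_per_day(samples):
    """samples: iterable of (iso_timestamp, throughput_mbps).
    Returns {date: (hour, total_throughput)} -- the precomputed output a
    waterfall dashboard would store for reuse by dependent dashboards."""
    totals = defaultdict(float)  # (date, hour) -> summed throughput
    for ts, mbps in samples:
        t = datetime.fromisoformat(ts)
        totals[(t.date(), t.hour)] += mbps
    busiest = {}
    for (day, hour), total in totals.items():
        if day not in busiest or total > busiest[day][1]:
            busiest[day] = (hour, total)
    return busiest

samples = [
    ("2024-06-01T09:10:00", 420.0),
    ("2024-06-01T09:40:00", 510.0),
    ("2024-06-01T18:05:00", 700.0),
]
print(busiest_hour_per_day(samples))  # {datetime.date(2024, 6, 1): (9, 930.0)}
```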
[0081] The Integrated performance management (IPM) module [100a] is further configured to save, in the storage unit [305], an associated information of the second request. Associated information refers to the specific data and metadata required to process requests, generate reports, and perform computations within the dashboards in a network performance management system. This information can include details such as the time range for data analysis, the specific key performance indicators (KPIs) to be monitored, the type of computations to be performed on the KPIs, user preferences for data display, and configurations for integrating multiple dashboards. For example, if a user requests to generate a dashboard to monitor network throughput over the past 90 days, the associated information will include the selected KPI (network throughput), the specified time range (90 days), and any specific computation rules or operations (such as calculating the busiest hour of each day). Additionally, if the user wants to use this dashboard as a waterfall dashboard to support another dashboard that calculates the Success Call Ratio, the associated information will also include the necessary integration configurations and precomputed values needed to link the two dashboards.
[0082] The Integrated performance management (IPM) module [100a] is further configured to forward to the computation module [306], the associated information of the second request for generating a report. The computation module [306] is configured to forward, the report to the IPM module [100a], for storing at the storage unit [305]. In an exemplary aspect, the report is created after the CL [100d] processes the necessary data retrieved from the Distributed File System (DFS) [100j]. The report includes detailed computations of key performance indicators (KPIs), aggregated data, and any other metrics specified by the user. For example, if the user has designated a Waterfall Dashboard to precompute the busiest hour of the day for network throughput over the past 90 days, the report will contain this computed data. Additionally, it might include calculations for success call ratios and other related KPIs over the same period. The generated report is then sent to the Integrated Performance Management (IPM) module [100a], where it is saved and can be used for further analysis or to create interconnected dashboards. This comprehensive report provides users with detailed insights and enables efficient performance management by precomputing and aggregating critical network performance data.
[0083] The user interface module [302] is further configured to receive, a third request, for generation of a second dashboard. The third request is a step where the user interface module [302] receives a request for the generation of a second dashboard. The third request includes specific details about the key performance indicators (KPIs) and aggregations that the user wants to incorporate into the second dashboard. Additionally, the third request may specify the operations to be applied to these selected KPIs and aggregations. For example, a user might request the generation of a second dashboard that includes a KPI for network latency and an aggregation of average latency over the last 30 days. The user might also specify an operation to filter this data to show only peak usage times.
[0084] The user interface module [302] is further configured to receive, a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The fourth request involves adding the first dashboard, designated as a waterfall dashboard, as a supporting dashboard to the second dashboard. The fourth request includes utilizing the stored report, referred to by its name or identifier, for interconnecting dashboards and enabling the sequential execution of precomputed data from one dashboard to influence another. For example, the third request, which is for generating a second dashboard, involves setting up a new dashboard that could monitor different network parameters or KPIs. In this case, the user might want to incorporate insights from the first dashboard, such as the busiest hour of network usage, into the second dashboard's calculations. By making the fourth request, the system links the first dashboard's precomputed data to the second dashboard, allowing for comprehensive analysis. For example, if the first dashboard calculates the busiest hour for network throughput, this data can then be used in the second dashboard to analyse the Success Call Ratio during those busy hours. Thus, the third request sets up the new monitoring parameters, while the fourth request integrates previously computed data to enhance the new dashboard's analytical capabilities.
[0085] The IPM module [100a] is further configured to interconnect the first dashboard with the second dashboard. The user interface module [302] is further configured to receive, selection of one or more key performance indicators (KPIs) and one or more aggregations in the first dashboard. The user interface module [302] is further configured to receive, one or more operations to be applied on the selected one or more KPIs and the one or more aggregations. The computation module [306] is further configured to compute a pre-computed data. It is important to note that the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. It is further noted that the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
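A minimal sketch of how such pre-computed data may be used to filter the one or more values of a KPI in the second dashboard is given below, assuming the busiest-hour output of the earlier sketch; the row and field names are hypothetical.

```python
from datetime import datetime

def filter_by_busy_hours(kpi_rows, busiest_by_day):
    """Keep only the second dashboard's KPI rows that fall within the
    precomputed busiest hour of their day (the waterfall dashboard's output)."""
    kept = []
    for row in kpi_rows:
        t = datetime.fromisoformat(row["timestamp"])
        busy = busiest_by_day.get(t.date())
        if busy is not None and t.hour == busy[0]:
            kept.append(row)
    return kept

# Precomputed output from the waterfall dashboard (busiest hour of each day).
busiest_by_day = {datetime(2024, 6, 1).date(): (9, 930.0)}
success_ratio_rows = [
    {"timestamp": "2024-06-01T09:15:00", "success_call_ratio": 0.97},
    {"timestamp": "2024-06-01T14:15:00", "success_call_ratio": 0.99},
]
print(filter_by_busy_hours(success_ratio_rows, busiest_by_day))
```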
[0086] It is to be further noted that the one or more modules, units, and components (including but not limited to the user interface module [302], the Integrated performance management (IPM) module [100a], the storage unit [305], and the computation module [306]) used herein may be software modules configured via hardware modules/processors, or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
[0087] The computation module [306] is further configured to set a time range for the associated information of the second request. The computation module [306] can define a specific period within which the data will be precomputed and analysed. For example, if a user specifies a time range of the last 30 days via the user interface module [302], the computation module [306] will use this time range to process and compute relevant KPIs for that period. This functionality ensures that the resulting analysis and insights are based on the user-defined timeframe, providing tailored and precise performance metrics for the specified duration.
[0088] The pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module [302]. The users can specify a particular time range through the user interface, such as the last 30 days or the previous quarter. The system will then use this specified time range to calculate the pre-computed data, such as the busiest hour for network throughput during that period. For example, if a user specifies a time range of the last 30 days through the user interface, the system will calculate the busiest hour for network throughput during those 30 days. This precomputed data can then be used to analyse other metrics, such as the Success Call Ratio, within the same 30-day period.
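The application of a user-specified time range prior to precomputation may be sketched as follows; the helper name and the use of the local clock are assumptions of the example.

```python
from datetime import datetime, timedelta

def within_time_range(samples, last_days):
    """Restrict raw (iso_timestamp, value) samples to the user-specified
    window (e.g. last 30 days) before any precomputation is performed."""
    cutoff = datetime.now() - timedelta(days=last_days)
    return [(ts, v) for ts, v in samples if datetime.fromisoformat(ts) >= cutoff]

# Usage, reusing the earlier busiest-hour sketch:
#   recent  = within_time_range(samples, last_days=30)
#   busiest = busiest_hour_per_day(recent)
```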
[0089] Referring to FIG. 4, an exemplary flow diagram of a method [400] for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0090] At step [404], the method [400] comprises receiving, at a user interface module [302], a first request for generation of a first dashboard. The first request in the specification refers to the initial user action to generate a dashboard within the network performance management system. The first request includes several key components such as the dashboard's name, the type of data it will display, and the specific key performance indicators (KPIs) and metrics the user wants to monitor. For example, a user might issue a first request to create a "Network Traffic Dashboard," specifying that it should display KPIs like total data throughput, packet loss rate, and latency over selected time intervals. Additionally, the request can include parameters for data aggregation methods (e.g., hourly averages, daily totals) and any initial filter criteria (e.g., specific geographic regions or network nodes). By defining these elements, the first request sets up the fundamental structure and purpose of the dashboard, enabling the system to gather and organize the necessary data for effective performance monitoring and analysis.
[0091] At step [406], the method [400] comprises receiving, at the user interface module [302], a second request to treat the first dashboard as a waterfall dashboard. The second request refers to the user action of designating the first dashboard as a waterfall dashboard. The second request includes specific information such as the type of dashboard being created, the parameters that need to be precomputed, and the logic for how these parameters should be processed. For example, if the first dashboard tracks network throughput, the second request might specify that the busiest hour for each day should be calculated and stored. This precomputed data can then be used in the second dashboard to analyse success call ratios during those busy hours. By setting these parameters in the second request, users ensure that the necessary computations are done in advance, streamlining subsequent analyses, and making the overall process more efficient and accurate.
[0092] At step [408], the method [400] comprises receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302].
[0093] In an implementation of the present disclosure, the method [400] further comprises sending, at the IPM module [100a], an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.

[0094] In an implementation of the present disclosure, in the method [400], the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard.
[0095] The waterfall dashboard is a specialized type of dashboard within a system that is designated for precomputation. This means that the data and key performance indicators (KPIs) associated with a waterfall dashboard are calculated in advance, allowing this precomputed data to be used as a foundational input for other dashboards. For example, if a dashboard tracks the busiest hour of network traffic each day (a Throughput KPI) over the past 90 days, this precomputed data can then be used to calculate and display related metrics, such as the Success Call Ratio KPI, within the same or another dashboard. By designating a dashboard as a waterfall dashboard, users can streamline complex sequential calculations, ensuring efficient and timely performance analysis without the need to recompute data repeatedly. This approach enhances the overall efficiency and effectiveness of network performance management by enabling interconnected and dependent dashboards to utilize precomputed outputs, thereby providing a more comprehensive and accurate understanding of network performance dynamics.
[0096] At step [410], the method [400] comprises saving, by a storage unit [305], at the IPM module [100a], an associated information of the second request. Associated information refers to the specific data and metadata required to process requests, generate reports, and perform computations within the dashboards in a network performance management system. This information can include details such as the time range for data analysis, the specific key performance indicators (KPIs) to be monitored, the type of computations to be performed on the KPIs, user preferences for data display, and configurations for integrating multiple dashboards. For example, if a user requests to generate a dashboard to monitor network throughput over the past 90 days, the associated information will include the selected KPI (network throughput), the specified time range (90 days), and any specific computation rules or operations (such as calculating the busiest hour of each day). Additionally, if the user wants to use this dashboard as a waterfall dashboard to support another dashboard that calculates the Success Call Ratio, the associated information will also include the necessary integration configurations and precomputed values needed to link the two dashboards.
[0097] At step [412], the method [400] comprises forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report.

[0098] At step [414], the method [400] comprises forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305]. In an exemplary aspect, the report is created after the CL [100d] processes the necessary data retrieved from the Distributed File System (DFS) [100j]. The report includes detailed computations of key performance indicators (KPIs), aggregated data, and any other metrics specified by the user. For example, if the user has designated a Waterfall Dashboard to precompute the busiest hour of the day for network throughput over the past 90 days, the report will contain this computed data. Additionally, it might include calculations for success call ratios and other related KPIs over the same period. The generated report is then sent to the Integrated Performance Management (IPM) module [100a], where it is saved and can be used for further analysis or to create interconnected dashboards. This comprehensive report provides users with detailed insights and enables efficient performance management by precomputing and aggregating critical network performance data.
[0099] At step [416], the method comprises receiving, at the user interface module [302], a third request for generation of a second dashboard. The third request is a step where the user interface module [302] receives a request for the generation of a second dashboard. The third request includes specific details about the key performance indicators (KPIs) and aggregations that the user wants to incorporate into the second dashboard. Additionally, the third request may specify the operations to be applied to these selected KPIs and aggregations. For example, a user might request the generation of a second dashboard that includes a KPI for network latency and an aggregation of average latency over the last 30 days. The user might also specify an operation to filter this data to show only peak usage times.
[0100] At step [418], the method comprises receiving, at the user interface module [302], a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The fourth request involves adding the first dashboard, designated as a waterfall dashboard, as a supporting dashboard to the second dashboard. The fourth request includes utilizing the stored report for interconnecting dashboards and enabling the sequential execution of precomputed data from one dashboard to influence another. For example, the third request, which is for generating a second dashboard, involves setting up a new dashboard that could monitor different network parameters or KPIs. In this case, the user might want to incorporate insights from the first dashboard, such as the busiest hour of network usage, into the second dashboard's calculations. By making the fourth request, the system links the first dashboard's precomputed data to the second dashboard, allowing for comprehensive analysis. For example, if the first dashboard calculates the busiest hour for network throughput, this data can then be used in the second dashboard to analyse Success Call Ratio during those busy hours. Thus, the third request sets up the new monitoring parameters, while the fourth request integrates previously computed data to enhance the new dashboard's analytical capabilities.
[0101] At step [420], the method [400] comprises interconnecting, by the IPM module [100a], the first dashboard and the second dashboard.
[0102] At step [422], the method [400] comprises computing, by the computation module [306], a pre-computed data. It is to be noted that the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. Further, the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard.
[0103] Thereafter, the method [400] terminates at step [424].
[0104] In the preferred embodiment as illustrated in FIG. 5, the connections between the various components of the system [500] are established using different protocols and mechanisms, as well known in the art. For example:
[0105] UI Module to IPM: The connection between the User Interface (UI) [532] and the Integrated Performance Management (IPM) module [100a] is established using an HTTP connection. HTTP (Hypertext Transfer Protocol) is a widely used protocol for communication between web browsers and servers. It allows the UI [532] to send requests and configurations to the IPM module [100a], and also receive responses or acknowledgments.
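By way of illustration only, a UI-to-IPM exchange over HTTP might resemble the following Python sketch using the requests library; the endpoint URL and the payload fields are hypothetical, as the present disclosure does not define specific REST paths for the IPM module [100a].

```python
import requests

# Hypothetical endpoint; the actual paths exposed by the IPM module [100a]
# are not specified in the present disclosure.
IPM_URL = "http://ipm.example.internal/api/dashboards"

response = requests.post(
    IPM_URL,
    json={"dashboard_name": "Network Traffic Dashboard", "waterfall": True},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. an acknowledgement carrying a dashboard identifier
```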
[0106] IPM to DDL: The connection between the IPM module [100a] and the Distributed Data Lake (DDL) [535] is established using a TCP (Transmission Control Protocol) connection. TCP is a reliable and connection-oriented protocol that ensures the integrity and ordered delivery of data packets. By using TCP, the IPM module [100a] can save and retrieve relevant data from the DDL [535] for computations, ensuring data consistency and reliability.
[0107] IPM to CL: The connection between the IPM module [100a] and the Computation Layer (CL) [534] is also established using an HTTP connection. Similar to the connection between the UI [532] and the IPM module [100a], this HTTP connection allows the IPM module [100a] to forward requests and computations, which include large computations and/or complex queries, to the CL [534]. The CL [534] processes the received instructions and returns the results or intermediate data to the IPM module [100a].
[0108] CL to DFS: The connection between the Computation Layer (CL) [534] and the Distributed File System (DFS) [536] is established using a File IO connection. File IO typically refers to the operations performed on files, such as reading from or writing to files. In this case, the CL [534] utilizes File IO operations to store and manage large files used in computations within the DFS [536]. The DFS [536] usually includes historical data, i.e., data is stored for longer time periods. This connection allows the CL [534] to efficiently access and manipulate the required files.
[0109] In some embodiments, the plurality of modules includes a load balancer [537] for managing connections. The load balancer [537] is adapted to distribute the incoming network traffic across multiple servers or components to ensure optimal resource utilization and high availability. Particularly, the load balancer [537] is commonly employed to evenly distribute incoming requests across multiple instances of the IPM module [100a] or the CL [534], providing scalability and fault tolerance to the system [500]. Overall, these connections and the inclusion of the load balancer [537] help to facilitate effective communication, data transfer, and resource management within the system, enhancing its performance and reliability.
[0110] In operation, the user creates a dashboard request on the User Interface (UI) [532] and designates it as an interlinked Dashboard eligible for precomputation. Thereafter, the Integrated Performance Management module [100a] processes the dashboard request and generates the computed output for each interlinked dashboard at the Computation Layer [534]. The computed output is stored in a suitable format for future reference and retrieval. Thereafter, the user may add the created interlinked dashboard as a supporting dashboard to a newly created or existing dashboard, which is delegated to the Computation Layer (CL) [534] that in turn processes the execution requests, accesses the stored data, and performs the necessary computations using the precomputed data from the interlinked Dashboard. The precomputed output of these operations is stored and used to filter the values of KPIs in subsequent dashboard requests raised by the user for execution. The user may filter the values of KPIs as per the requirement.
[0111] As is evident from the above, the present disclosure provides a technically advanced solution for generation of one or more interconnected dashboards. The present solution particularly involves categorizing dashboards as interlinked and/or interconnected dashboards, performing precomputations, and utilizing the precomputed data in associated dashboards. Interlinked and/or interconnected dashboards provide a sequential and consolidated view of data, allowing for the precomputation of essential dashboards. By categorizing dashboards as interlinked dashboards, the need for parallel observation of multiple dashboards is eliminated. Instead, the focus is shifted to analysing interconnected KPIs on a single consolidated dashboard, where computations are performed in advance, and the outcomes are readily available for subsequent computations. This approach reduces cognitive overload, simplifies data synchronization, improves processing time, and enhances scalability.
[0112] It would be appreciated by the person skilled in the art that the technique of the present disclosure streamlines the process of performing complex computations across interconnected dashboards by precomputing data in a Waterfall Dashboard. This allows for quicker and more efficient data analysis as computations are reduced and data from one dashboard can directly influence another.
[0113] In an example, a network engineer uses the user interface [532] to designate a primary dashboard that monitors network throughput as a Waterfall Dashboard. This dashboard includes KPIs like total data transferred, peak transfer rates, and times of peak activity. The user interface [532] sends a request to the Integrated Performance Management (IPM) module [100a] to precompute data for the Waterfall Dashboard. The IPM module [100a] acknowledges the request and forwards it to the Computation Layer (CL) [534]. The CL [534] processes the request, performing computations to identify things like the busiest hours or days for network traffic in the past 90 days. These computed results are stored in a Distributed Data Lake (DDL) [535]. The engineer then creates a new dashboard to monitor server response times and links this new dashboard to the precomputed data from the Waterfall Dashboard. When the engineer wants to view the server response times during the busiest network hours, a request is sent from the user interface [532] to the IPM module [100a]. The IPM module [100a] then forwards this request to the CL [534], which retrieves the precomputed data from the DDL [535] and uses it to calculate server response times during the busiest network hours. The CL [534] sends these calculated results back to the IPM module [100a], which saves the data in a predetermined format. The results are then displayed on the user interface [532], showing server response times during the busiest network hours based on the data from the Waterfall Dashboard. By using this method, the engineer can understand how server performance is affected during peak network traffic times without having to manually calculate and correlate data between two separate dashboards.

[0114] FIG. 6 illustrates an exemplary sequence flow diagram illustrating a process [600] for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0115] At S_l, the process [600] includes the creation of a Waterfall Dashboard. The user [602] initiates this process through the user interface (UI) [604] by selecting options to create a new dashboard and marking it as a Waterfall Dashboard, which indicates that it will be used for precomputation purposes.
[0116] At S_2, the process [600] includes requesting resources from the Load Balancer (LB). The UI [604] sends a request to the load balancer [100k] to identify an available instance of the Integrated Performance Management (IPM) module [100a] for the dashboard creation.
[0117] At S_3, the process [600] includes the load balancer [100k] identifying an available instance of the IPM module [100a]. Once an available instance of the IPM module [100a] is identified, the load balancer [100k] forwards the request to that instance of the IPM module [100a] to begin the dashboard creation process.
[0118] At S_4, the process [600] includes saving the dashboard information. The IPM module [100a] receives the request and saves the initial configuration and metadata of the Waterfall Dashboard to the Distributed Data Lake (DDL) [lOOu], such that all necessary information is saved.
[0119] At S_5, the process [600] includes sending an acknowledgment. The IPM module [100a] sends an acknowledgment back to the UI [604] indicating that the dashboard information has been successfully saved in the DDL [100u].
[0120] At S_6, the process [600] includes notifying the user [602] of the successful save. The UI [604] displays a message to the user [602] confirming that the Waterfall Dashboard has been saved successfully.
[0121] At S_7, the process [600] includes computing the Waterfall Dashboards. The IPM module [100a] initiates the computation of the Waterfall Dashboard by sending the computation request to the Computation Layer (CL) [100d], which is responsible for performing the intensive data processing tasks.

[0122] At S_8, the process [600] includes bringing data from the DFS. The CL [100d] retrieves the required data from the Distributed File System (DFS) [100j], which contains the raw and historical data needed for the computations.
[0123] At S_9, the process [600] includes responding to the data request. The DFS [100j] sends the requested data back to the CL [100d], enabling the CL [100d] to perform the necessary computations.
[0124] At S_10, the process [600] includes performing the Waterfall computation. The CL [100d] processes the data to compute the pre-defined KPIs and other metrics for the Waterfall Dashboard, ensuring the data is ready for further use.
[0125] At S_11, the process [600] includes saving the generated output. The CL [100d] sends the computed results back to the IPM module [100a], which then saves this output data for future reference and use in interconnected dashboards.
[0126] At S_12, the process [600] includes creating and saving an Excel file. The IPM module [100a] creates an Excel file containing the computed results and saves it for easy access and review by the user.
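The creation of an Excel file containing the computed results at S_12 may be illustrated with the following Python sketch using the openpyxl library; the sheet layout, column names, and file name are assumptions of the example, as the specification states only that an Excel file is produced.

```python
from openpyxl import Workbook

# Illustrative only: the computed waterfall output written to a spreadsheet.
wb = Workbook()
ws = wb.active
ws.title = "Waterfall Dashboard"
ws.append(["date", "busiest_hour", "total_throughput_mbps"])  # header row
ws.append(["2024-06-01", 9, 930.0])
ws.append(["2024-06-02", 18, 1104.5])
wb.save("waterfall_dashboard_report.xlsx")
```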
[0127] At S_13, the process [600] includes the execution of an associated dashboard. The user [602] initiates this process through the UI [604], requesting the generation of a second dashboard that will use the precomputed data from the Waterfall Dashboard.
[0128] At S_14, the process [600] includes requesting resources from the Load Balancer (LB). The UI [604] sends a request to the load balancer [100k] to identify an available instance of IPM module [100a] to handle the new dashboard generation.
[0129] At S_15, the process [600] includes the load balancer [100k] identifying an available IPM instance. The load balancer [100k] finds an available IPM module [100a] and forwards the dashboard generation request to it.
[0130] At S_16, the process [600] includes forwarding the request. The IPM module [100a] receives the request and forwards it to the CL [100d] to access the precomputed data and perform any additional computations required for the second dashboard.

[0131] At S_17, the process [600] includes accessing stored data. The CL [100d] retrieves the relevant precomputed data and any additional necessary data from the DFS [100j] to generate the second dashboard.
[0132] At S_18, the process [600] includes sending the required data. The DFS [100j] sends the necessary data back to the CL [100d], enabling it to complete the computations for the second dashboard.
[0133] At S_19, the process [600] includes performing data computation. The CL [100d] processes the retrieved data to compute the required KPIs and metrics for the second dashboard.
[0134] At S_20, the process [600] includes sending the KPI data. The CL [100d] sends the computed KPI data and other relevant results back to the IPM module [100a].
[0135] At S_21, the process [600] includes finalizing the output. The IPM module [100a] processes the received data, finalizes the output, and prepares it for presentation to the user.
[0136] At S_22, the process [600] includes sending the computed data along with a notification. The IPM module [100a] sends the final computed data and a notification to the load balancer [100k] to inform the user that the second dashboard is ready.
[0137] At S_23, the process [600] includes forwarding the notification. The load balancer [100k] forwards the notification and the computed data to the UI [604].
[0138] At S_24, the process [600] includes presenting the output to the user. The UI [604] displays the final output to the user [602], showing the results of the second dashboard.
[0139] At S_24A, the process [600] includes the user [602] clicking on the notification. This step is initiated when the user receives a notification about the availability of the new dashboard or the updated data. By clicking on this notification, the user signals their intent to view more detailed information or results related to the dashboard.
[0140] At S_25, the process [600] includes raising a request to show the result. This step involves the UI [604] allowing the user's interaction by displaying an initial summary or overview of the dashboard results. This gives the user a quick glance at the key metrics or highlights of the computed data.
[0141] At S_26, the process [600] includes fetching the result from the UI [604] to the load balancer [100k]. Here, the UI [604] sends a request to the load balancer [100k] to retrieve the detailed data necessary for a comprehensive view of the dashboard. This step ensures that the UI can present the most current and detailed data to the user.
[0142] At S_27, the process [600] includes forwarding the request from the load balancer [100k] to the IPM module [100a]. The load balancer [100k], upon receiving the request from the UI [604], forwards this request to the IPM module [100a] to obtain the detailed results.
[0143] At S_28, the process [600] includes finalizing the output at the IPM module [100a]. The IPM module [100a] processes the request and prepares the detailed results, ensuring that all relevant data is accurately compiled and ready for presentation.
[0144] At S_29, the process [600] includes sending the computed KPI data from the IPM module [100a] to the load balancer [100k]. KPI data refers to the computed KPIs based on the request. The IPM module [100a] sends this computed KPI data to the load balancer [100k], ensuring that the user receives the most recent information.
[0145] At S_30, the process [600] includes forwarding the data from the load balancer [100k] to the UI [604]. The load balancer [100k] takes the detailed results and the computed KPI data from the IPM module [100a] and sends it to the UI [604] for display to the user.
[0146] At S_31, the process [600] includes showing the result from the UI [604] to the user [602]. The UI [604] presents the final, detailed results to the user [602]. This step concludes the process, providing the user with a comprehensive view of the dashboard data, including any updated KPIs and detailed metrics, enabling effective analysis and decision-making.
[0147] The present disclosure offers several advantages over existing methods. These include:
[0148] Efficiency: The use of interlinked Dashboards eliminates the need for parallel observation of multiple dashboards and streamlines the computation process, saving time and computational resources.

[0149] Data Consolidation: Interlinked Dashboards provide a consolidated view of interconnected KPIs, allowing for a comprehensive understanding of network performance in a single dashboard.
[0150] Resource Optimization: By precomputing essential data, the disclosure optimizes computational resources and enhances scalability, making it suitable for large-scale networks and heavy computations.
[0151] Improved Insights: The interconnected nature of interlinked Dashboards enables users to identify relationships and dependencies between different KPIs, leading to deeper insights and better decision-making.
[0152] Furthermore, the interlinked dashboard addresses the challenge of integrating the outcomes of computations for different KPIs by providing a visual representation of the relationships and dependencies between KPIs. The cascading format of the dashboard allows the users to see how changes in one KPI affect other KPIs downstream. This approach provides a more comprehensive understanding of the network's performance, allowing network operators and stakeholders to make more informed decisions about how to optimize network performance and service quality.
[0153] Yet another aspect of the present disclosure may relate to a user equipment (UE) for generation of one or more interconnected dashboards. The UE comprises a processor configured to: transmit a first request for generation of a first dashboard; transmit a second request to treat the first dashboard as a waterfall dashboard; transmit a third request for generation of a second dashboard; and transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein, for generation of the one or more interconnected dashboards, the process comprises: receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302]; saving, by a storage unit [305], an associated information of the second request; forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report; forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305]; interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
[0154] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module [302] to receive: a first request for generation of a first dashboard; a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module [100a] connected with at least the user interface module [302], the IPM module [100a] to: receive the second request from the user interface module [302]; save, in a storage unit [305], an associated information of the second request; forward, to a computation module [306], the associated information of the second request for generating a report; the computation module [306] connected at least to the IPM module [100a], the computation module [306] to: forward the report to the IPM module [100a] for storing at the storage unit [305]; the user interface module [302] to further receive: a third request for generation of a second dashboard; a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; the IPM module [100a] to further interconnect the first dashboard with the second dashboard; and the computation module [306] to further compute a pre-computed data, the pre-computed data comprising one or more values for one or more key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
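By way of a non-limiting illustration of the server side of this aspect, the following Python sketch traces the flow handled by the IPM module: the associated information of the second request is saved, forwarded to a computation module that generates a report, the report is stored, and the two dashboards are interconnected. The StorageUnit, ComputationModule, and IPMModule classes and their methods are hypothetical assumptions made only for illustration.

class StorageUnit:
    """Stands in for the storage unit [305]; keeps requests and reports in memory."""
    def __init__(self) -> None:
        self.records: dict = {}
    def save(self, key: str, value) -> None:
        self.records[key] = value

class ComputationModule:
    """Stands in for the computation module [306]; produces a report from the request."""
    def generate_report(self, associated_info: dict) -> dict:
        return {"dashboard": associated_info["dashboard"],
                "kpis": associated_info.get("kpis", [])}

class IPMModule:
    """Stands in for the IPM module [100a]; stores data and interconnects dashboards."""
    def __init__(self, storage: StorageUnit, computation: ComputationModule) -> None:
        self.storage, self.computation = storage, computation
        self.links: list = []
    def handle_waterfall_request(self, associated_info: dict) -> None:
        self.storage.save("request:" + associated_info["dashboard"], associated_info)
        report = self.computation.generate_report(associated_info)
        self.storage.save("report:" + associated_info["dashboard"], report)
    def interconnect(self, supporting: str, target: str) -> None:
        self.links.append((supporting, target))

ipm = IPMModule(StorageUnit(), ComputationModule())
ipm.handle_waterfall_request({"dashboard": "first", "kpis": ["failure_rate_pct"]})
ipm.interconnect("first", "second")
print(ipm.links)   # [('first', 'second')]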
[0155] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units, as disclosed in the disclosure, should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0156] It should be noted that the terms "first", "second", "primary", "secondary", "target" and the like, herein do not denote any order, ranking, quantity, or importance, but rather are used to distinguish one element from another.
[0157] As is evident from the above, the present disclosure provides a technically advanced solution for generating and interconnecting dashboards. The present solution automates the pre-computation of key performance indicators (KPIs) and their integration into various dashboards, allowing users to designate a dashboard as a waterfall dashboard, making it eligible for pre-computation, and to use its output as a base for other dashboards. This enables sequential execution of dashboards and facilitates the interconnection of multiple dashboards to provide a comprehensive understanding of network performance. Further, the present solution addresses the need for handling complex computations over extended intervals, ensuring that the results of these computations can be used in subsequent calculations for other KPIs or counters. This feature allows users to define the importance of one dashboard's data for the computation of others, enhancing efficiency and accuracy in performance management. By implementing the features of the present disclosure, users can set a time range for the associated information of a request, allowing the saved pre-computed values to be used for calculating the value of another KPI within a specified time range of up to 90 days in the past. This flexibility ensures that historical data can be effectively utilized for current performance evaluations. Additionally, the present solution enables users to receive and apply one or more key performance indicators (KPIs) and aggregations, as well as operations on selected KPIs and aggregations, in a user-friendly manner. This comprehensive approach offers a robust and efficient method for managing and optimizing network performance through interconnected and pre-computed dashboards.
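By way of a non-limiting illustration of the time-range behaviour mentioned above, the following Python sketch reuses pre-computed daily KPI values while clipping any request to a window of up to 90 days in the past. The dates, the KPI values, and the MAX_LOOKBACK_DAYS constant are illustrative assumptions; only the 90-day bound comes from the description.

from datetime import date, timedelta

MAX_LOOKBACK_DAYS = 90
today = date(2024, 7, 22)

# Pre-computed daily values for one KPI of the waterfall dashboard (illustrative).
precomputed = {today - timedelta(days=d): 1.0 + 0.01 * d for d in range(120)}

def usable_values(requested_start: date, requested_end: date) -> dict:
    """Return only the pre-computed values inside the allowed 90-day window."""
    earliest = today - timedelta(days=MAX_LOOKBACK_DAYS)
    start = max(requested_start, earliest)
    return {d: v for d, v in precomputed.items() if start <= d <= requested_end}

# A request covering the last 120 days is clipped to the permitted 90-day window.
values = usable_values(today - timedelta(days=120), today)
print(len(values), "daily values reused")   # 91 (today plus the 90 prior days)

Running the sketch prints 91, i.e. today plus the 90 prior days, showing that a 120-day request is transparently limited to the permitted window before the values are reused in another KPI's computation.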
[0158] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.

Claims

We Claim:
1. A method [400] for generation of one or more interconnected dashboards, the method [400] comprising: receiving, at a user interface module [302], a first request for generation of a first dashboard; receiving, at the user interface module [302], a second request to treat the first dashboard as a waterfall dashboard; receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302]; saving, by a storage unit [305], an associated information of the second request; forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report; forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305]; receiving, at the user interface module [302], a third request for generation of a second dashboard; receiving, at the user interface module [302], a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for one or more key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
2. The method [400] as claimed in claim 1, wherein the method [400] further comprises: sending, by the IPM module [100a], an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
3. The method [400] as claimed in claim 1, wherein the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard.
4. The method [400] as claimed in claim 1, further comprises: receiving, at the user interface module [302], the one or more KPIs and one or more aggregations in the third request for the first dashboard; and receiving, at the user interface module [302], the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
5. The method [400] as claimed in claim 1, further comprises setting, by the computation module [306], a time range for the associated information of the second request.
6. The method [400] as claimed in claim 5, wherein the pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module [302].
7. A system [300] for generation of one or more interconnected dashboards, the system [300] comprises: a user interface module [302] configured to receive: o a first request for generation of a first dashboard; o a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module [100a] connected with at least the user interface module [302], the IPM module [100a] is configured to: o receive, the second request from the user interface module [302]; o save, in a storage unit [305], an associated information of the second request; o forward, to a computation module [306], the associated information of the second request for generating a report;
- the computation module [306] connected at least to the IPM module [100a], the computation module [306] configured to: o forward, the report to the IPM module [100a], for storing at the storage unit [305];
- the user interface module [302] further configured to receive: o a third request, for generation of a second dashboard; o a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report;
- the IPM module [100a] further configured to interconnect the first dashboard with the second dashboard; and
- the computation module [306] further configured to compute a pre-computed data, the pre-computed data comprising one or more values for one or more key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
8. The system [300] as claimed in claim 7, wherein the IPM module [100a] is further configured to send an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
9. The system [300] as claimed in claim 7, wherein the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard.
10. The system [300] as claimed in claim 7, wherein the user interface module [302] is further configured to: receive the one or more KPIs and one or more aggregations in the third request for the first dashboard; and receive the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
11. The system [300] as claimed in claim 7, wherein the computation module [306] is further configured to set a time range for the associated information of the second request.
12. The system [300] as claimed in claim 11, wherein the pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module [302].
13. A user equipment (UE) for generation of one or more interconnected dashboards, the UE comprising: a processor configured to: o transmit a first request for generation of a first dashboard; o transmit a second request to treat the first dashboard as a waterfall dashboard; o transmit a third request for generation of a second dashboard; o transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein, for generation of the one or more interconnected dashboards, the process comprises:
■ receiving, at an integrated performance management (IPM) module [100a], the second request from a user interface module [302];
■ saving, by a storage unit [305], an associated information of the second request;
■ forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report;
■ forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305];
■ interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and
■ computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for one or more key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
14. A non-transitory computer-readable storage medium storing instructions for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module [302] to receive: o a first request for generation of a first dashboard; o a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module [100a] connected with at least the user interface module [302], the IPM module [100a] to: o receive the second request from the user interface module [302]; o save, in a storage unit [305], an associated information of the second request; o forward, to a computation module [306], the associated information of the second request for generating a report;
- the computation module [306] connected at least to the IPM module [100a], the computation module [306] to: o forward, the report to the IPM module [100a], for storing at the storage unit [305];
- the user interface module [302] to further receive: o a third request, for generation of a second dashboard; o a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report;
- the IPM module [100a] to further interconnect the first dashboard with the second dashboard; and
- the computation module [306] to further compute a pre-computed data, the pre-computed data comprising one or more values for one or more key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
PCT/IN2024/051344 2023-07-23 2024-07-22 Method and system for generation of interconneted dashboards Pending WO2025022439A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321049549 2023-07-23
IN202321049549 2023-07-23

Publications (1)

Publication Number Publication Date
WO2025022439A1 true WO2025022439A1 (en) 2025-01-30

Family

ID=94374386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/051344 Pending WO2025022439A1 (en) 2023-07-23 2024-07-22 Method and system for generation of interconneted dashboards

Country Status (1)

Country Link
WO (1) WO2025022439A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018603A1 (en) * 2013-08-23 2018-01-18 Appdynamics Llc Dashboard for dynamic display of distributed transaction data
WO2022137021A1 (en) * 2020-12-23 2022-06-30 Altice Labs, S.A. Key performance indicators computing plan generation


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24845047

Country of ref document: EP

Kind code of ref document: A1