
WO2025041165A1 - Method and system to automatically assign restricted data to a user - Google Patents


Info

Publication number
WO2025041165A1
Authority
WO
WIPO (PCT)
Prior art keywords
restricted data
request
restricted
unit
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051516
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Ankit Murarka
Jugal Kishore
Gaurav Kumar
Kishan Sahu
Rahul Kumar
Sunil Meena
Gourav Gurbani
Sanjana Chaudhary
Chandra GANVEER
Supriya Kaushik DE
Debashish Kumar
Mehul Tilala
Dharmendra Kumar Vishwakarma
Yogesh Kumar
Niharika PATNAM
Harshita GARG
Avinash Kushwaha
Sajal Soni
Kunal Telgote
Manasvi Rajani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1023 Server selection for load balancing based on a hash applied to IP addresses or costs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/20 Network management software packages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/28 Restricting access to network management systems or functions, e.g. using authorisation function to access network configuration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22 Parsing or analysis of headers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1017 Server selection for load balancing based on a round robin mechanism
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 Routing a service request depending on the request content or context

Definitions

  • Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to automatically assigning restricted data to a user.
  • Network performance management systems typically track network elements and data using network monitoring tools. Further, the network performance management systems combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operator can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
  • An aspect of the present disclosure may relate to a method to automatically assign restricted data to a user.
  • the method includes receiving, by a transceiver unit at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user.
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the method further includes transmitting, by the transceiver unit from the IPM to a trained model in the network, a hash code request associated with the restricted data request.
  • the method includes receiving, by a processing unit at the IPM unit from the trained model, a unique hash code based on the hash code request.
  • the method includes fetching, by the processing unit at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon receiving the unique hash code. Further, the method includes automatically assigning, by the processing unit from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
  • the method further includes generating, by the processing unit via a computational layer, a set of computed restricted data based on at least the restricted report execution request, wherein the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer.
  • the method further includes automatically assigning, by the processing unit from the IPM unit to the user via the load balancer in the network, the set of computed restricted data associated with the restricted data request.
  • the unique hash code associated with the restricted data request is generated via the trained model, wherein the model is trained using a machine learning technique.
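The receive–hash–fetch–assign flow described above can be sketched in Python. This is a minimal illustration only: the `HashModel` and `RestrictedDataService` names are hypothetical, and a deterministic digest stands in for the trained machine-learning model's hash-code generation.

```python
# Illustrative sketch of the claimed flow; all class and method names are
# hypothetical and not taken from the disclosure.
import hashlib


class HashModel:
    """Stands in for the trained model that returns a unique hash code."""

    def hash_code(self, request: dict) -> str:
        # A real deployment would use a trained ML model; a deterministic
        # digest over the request fields is used here for illustration.
        key = f"{request['user']}|{request['type']}|{request['resource']}"
        return hashlib.sha256(key.encode()).hexdigest()


class RestrictedDataService:
    """Models the IPM unit: cache lookup first, computation on a miss."""

    def __init__(self, model: HashModel, cache: dict):
        self.model = model
        self.cache = cache  # stands in for the caching layer

    def compute(self, request: dict) -> dict:
        # Stands in for the computational layer (cache-miss path).
        return {"resource": request["resource"], "computed": True}

    def assign(self, request: dict) -> dict:
        code = self.model.hash_code(request)   # hash code request/response
        data = self.cache.get(code)            # fetch from the caching layer
        if data is None:                       # miss: compute, then cache
            data = self.compute(request)
            self.cache[code] = data
        return data                            # assigned to the user via the LB
```

On a cache miss the computational-layer path is taken and the result is cached under the same hash code, so a repeated request for the same restricted data is served from the caching layer.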
  • Another aspect of the present disclosure may relate to a system to automatically assign restricted data to a user. The system includes a transceiver unit.
  • the transceiver unit is configured to receive, at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user.
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the transceiver unit is further configured to transmit, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request.
  • the system includes a processing unit connected to at least the transceiver unit.
  • the processing unit is configured to receive, at the IPM unit from the trained model, a unique hash code based on the hash code request.
  • the processing unit is further configured to fetch, at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon reception of the unique hash code.
  • the processing unit is further configured to automatically assign, from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
  • Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions to automatically assign restricted data to a user, the instructions including executable code which, when executed by one or more units of a system, cause a transceiver unit of the system to receive, at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user.
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the instructions when executed by the system further cause the transceiver unit to transmit, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request.
  • the instructions when executed by the system further cause a processing unit to receive, at the IPM unit from the trained model, a unique hash code based on the hash code request.
  • the instructions when executed by the system further cause the processing unit to fetch, at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon reception of the unique hash code.
  • the instructions when executed by the system further cause the processing unit to automatically assign, from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
  • FIG. 1 illustrates an exemplary block diagram of a network performance management system.
  • FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • FIG. 3 illustrates an exemplary block diagram of a system to automatically assign restricted data to a user, in accordance with exemplary implementations of the present disclosure.
  • FIG. 4 illustrates a method flow diagram to automatically assign restricted data to a user, in accordance with exemplary implementations of the present disclosure.
  • FIG. 5 illustrates an exemplary system architecture to automatically assign restricted data to a user, in accordance with exemplary implementations of the present disclosure.
  • FIG. 6 illustrates a sequence flow diagram to automatically assign restricted data to a user, in accordance with exemplary implementations of the present disclosure.
  • The terms “exemplary” and/or “demonstrative” are used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples.
  • any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
  • a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
  • a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
  • a user equipment may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure.
  • the user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device, or any other computing device which is capable of implementing the features of the present disclosure.
  • the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, and any other such unit(s) which are required to implement the features of the present disclosure.
  • A “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
  • a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
  • the storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
  • An “interface” refers to a shared boundary across which two or more separate components of a system exchange information or data.
  • the interface may also be referred to as a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
  • All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
  • DSP digital signal processor
  • ASIC Application Specific Integrated Circuits
  • FPGA Field Programmable Gate Array circuits
  • the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
  • The currently known solutions have several shortcomings. Assigning the same dashboard to different users with different permissions and access is a major issue during the monitoring and management of network parameters. Monitoring and managing multiple dashboards for different users with different access and permissions leads to problems such as delayed decision-making and inaccurate assessment, as the administrative user has to manage and monitor performance on multiple dashboards.
  • the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system to automatically assign restricted data to a user based on a request of the user.
  • FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention.
  • the network performance management system [100] comprises various sub-systems such as: an integrated performance management unit [100a], a normalization layer [100b], a computation layer [100d], an anomaly detection layer [100o], a streaming engine [100l], a load balancer [100k], an operations and management system [100p], an API gateway system [100r], an analysis engine [100h], a parallel computing framework [100i], a forecasting engine [100t], a distributed file system [100j], a mapping layer [100s], a distributed data lake [100u], a scheduling layer [100g], a reporting engine [100m], a message broker [100e], a graph layer [100f], a caching layer [100c], a service quality manager [100q], and a correlation engine [100n].
  • Performance Management Engine [100v] is a crucial component of the IPM system [100a], responsible for collecting, processing, and managing performance counter data from various data sources within the network (e.g., 5G network).
  • the counter data includes metrics such as connection speed, latency, data transfer rates, and many others.
  • the counter data is then processed and aggregated as required, forming a comprehensive overview of network performance.
  • the processed information is then stored in the Distributed Data Lake [100u].
  • the distributed data lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis.
  • the Performance Management engine [100v] also enables the reporting and visualization of the performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
  • An operator in the IPM system [100a] may be an individual, a device, an administrator, and the like who may interact with or manage the network.
  • the Key Performance Indicator (KPI) Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements.
  • the Key Performance Indicator (KPI) Engine [100w] uses the performance counters, which are collected and processed by the Performance Management engine [100v] from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs.
  • KPIs may include at least one of: data throughput, latency, packet loss rate, and more.
  • the KPIs are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of the network performance.
  • the processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine [100v], the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
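As an illustration of deriving a KPI from performance counters, the sketch below computes a packet loss rate; the counter names and the formula are examples for exposition, not the patent's KPI definitions.

```python
def packet_loss_rate(counters: dict) -> float:
    """Packet loss rate (%) derived from raw sent/lost packet counters."""
    sent = counters["packets_sent"]
    lost = counters["packets_lost"]
    if sent == 0:
        return 0.0  # no traffic observed in the aggregation window
    return 100.0 * lost / sent
```

The same pattern applies to throughput or latency KPIs: each is an aggregation over counters collected by the Performance Management engine.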
  • the Ingestion layer (not shown in FIG. 1) forms a key part of the IPM system [100a]. The ingestion layer primarily performs the function of establishing an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance.
  • the Ingestion layer processes the data by validating the data integrity and correctness to ensure that the data is fit for further use.
  • the data is routed to various components of the IPM system [100a], including the Normalization layer [100b], Streaming Engine [100l], Streaming Analytics, and Message Brokers [100e]. The destination is chosen based on where the data is required for further analytics and processing.
  • the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
  • Normalization layer [100b] The Normalization Layer [100b] serves to standardize, enrich, and store data in the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyze. This process of "normalization" reduces redundancy and improves data integrity.
  • Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the integrated performance management unit [100a] through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems.
  • Caching layer [100c] The Caching Layer [100c] in the IPM system [100a] plays a significant role in data management and optimization.
  • the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability.
  • the Normalizer Layer then inserts this normalized data into various databases.
  • One such database is the Caching Layer [100c].
  • the Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance.
  • the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine.
  • the Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
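A caching layer of this kind can be approximated by a small key-value store with expiry. The `TTLCache` below is an illustrative sketch, not the disclosed implementation; the entry lifetime and eviction-on-read policy are assumptions.

```python
# Minimal high-speed cache sketch with time-to-live expiry.
import time


class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value
```

Frequently accessed items (dashboards, KPI snapshots) are served from memory, and a miss falls through to the slower computation or storage path.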
  • Computation layer [100d] The Computation Layer [100d] in the IPM system [100a] serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e].
  • the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and Streaming Engine [100l] utilize the normalized data.
  • the Analysis Engine [100h] performs in-depth data analytics to generate insights from the data.
  • the Correlation Engine [100n] identifies and understands the relations and patterns within the data.
  • the Service Quality Manager [100q] assesses and ensures the quality of the services.
  • the Streaming Engine [100l] processes and analyses the real-time data feeds.
  • the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
  • Message broker [100e] The Message Broker [100e], an integral part of the IPM system [100a], operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centered around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
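The publish-subscribe pattern the Message Broker implements can be shown with a toy in-memory broker; real deployments add persistence, partitioning, and fault tolerance, none of which are modelled here, and the `Broker` class is purely illustrative.

```python
# Toy topic-based publish-subscribe broker for illustration only.
from collections import defaultdict


class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every consumer subscribed to the topic.
        for callback in self._subscribers[topic]:
            callback(message)
```

Producers (e.g. the Normalization Layer) publish to topics, and any number of permanent or ad-hoc consumers receive the messages without the producer knowing who they are.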
  • the Relationship Modeler should adapt to processing steps provided in the model and deliver the results to the system requested, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation Engine [100n], Performance Management Engine, or KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
  • Scheduling layer [100g] serves as a key element of the IPM System [100a], endowed with the ability to execute tasks at predetermined intervals set according to user preferences.
  • a task might be an activity performing a service call, an API call to another microservice, the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System, or sending it to another micro-service.
  • a microservice refers to an element of a system architecture in which a single system provides multiple functions through small, independent services. Among the mechanisms used for communication between microservices are API calls and remote procedure calls.
  • the versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks.
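Interval-based task execution of the kind the Scheduling Layer performs can be sketched as follows; the `Task`/`Scheduler` names and the tick-driven `run_due` loop are illustrative assumptions, not the patented design.

```python
# Sketch of executing tasks at predetermined intervals.
class Task:
    def __init__(self, name, interval, action):
        self.name = name
        self.interval = interval  # run every `interval` time units
        self.next_run = 0
        self.action = action      # e.g. a service call or query export


class Scheduler:
    def __init__(self):
        self.tasks = []

    def add(self, task):
        self.tasks.append(task)

    def run_due(self, now):
        # Execute every task whose next scheduled time has arrived.
        for task in self.tasks:
            if now >= task.next_run:
                task.action()
                task.next_run = now + task.interval
```

A driving loop would call `run_due` with the current time; each task (an API call, an Elastic Search query export, and so on) fires only when its interval has elapsed.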
  • Analysis Engine [100h] forms a crucial part of the IPM System [100a], designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows.
  • using the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues.
  • the system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action.
  • the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
  • Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management unit [100a], providing a user-friendly yet advanced platform for executing computing tasks in parallel.
  • the parallel computing framework [100i] highlights both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices.
  • the framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks.
  • Distributed File System [100j] The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management unit [100a], enabling multiple clients to access and interact with data seamlessly.
  • the Distributed File system [100j] is designed to manage data files that are partitioned into numerous segments known as chunks.
  • the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets.
  • DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
  • Load Balancer [100k] The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management unit [100a], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
  • the LB [100k] implements various routing strategies to manage traffic.
  • the LB [100k] includes round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header and context-based dispatching allow for more intelligent, request-specific routing.
  • Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests.
  • Context-based dispatching routes traffic based on the contextual information about the incoming requests.
  • the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
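The two dispatch strategies named above, round-robin rotation and header-based routing, can be sketched together. The class shape and the `X-Service` header are hypothetical examples, not the patented load balancer.

```python
# Illustrative load-balancer dispatch combining the two named strategies.
import itertools


class LoadBalancer:
    def __init__(self, servers, header_routes=None):
        self._rotation = itertools.cycle(servers)  # round-robin state
        self.header_routes = header_routes or {}   # header value -> server

    def dispatch(self, headers=None):
        headers = headers or {}
        # Header-based dispatch takes priority when a route matches.
        target = self.header_routes.get(headers.get("X-Service"))
        if target is not None:
            return target
        # Otherwise rotate requests evenly across the available servers.
        return next(self._rotation)
```

Plain requests rotate across the backend pool, while requests carrying a recognized header are routed to the server configured for that service, mirroring the request-specific routing described above.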
  • Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management unit [100a]. This engine is specifically designed for high-speed data pipelining to the User Interface (UI).
  • After processing by the Streaming Engine [100l], the data is streamed to the UI, fostering rapid decision-making and responses.
  • the Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow.
  • Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI.
  • this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time.
  • the streaming engine [100l] is configured to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the Integrated Performance Management unit [100a].
  • Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management unit [100a]. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, cater to individual client requirements, and deliver these reports via the Notification Engine.
  • the Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces.
  • the main output of the Reporting Engine [100m] is a detailed report generated in Excel format.
  • the Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
  • FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • the computing device [200] may also implement a method to automatically assign a restricted data to a user, utilizing the system.
  • the computing device [200] itself implements the method to automatically assign a restricted data to a user, using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
  • the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information.
  • the hardware processor [204] may be, for example, a general-purpose microprocessor.
  • the computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
  • the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
  • a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
  • the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user.
  • An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204].
  • Another type of user input device is a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
  • the input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
  • the computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
  • the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
  • hard-wired circuitry may be used in place of or in combination with software instructions.
  • the computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222].
  • the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • the computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220], and the communication interface [218].
  • a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224], and the communication interface [218].
  • the received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
  • the present disclosure is implemented by a system [300] (as shown in FIG. 3).
  • the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4).
  • Referring to FIG. 3, an exemplary block diagram of a system [300] to automatically assign a restricted data to a user is shown, in accordance with the exemplary implementations of the present disclosure.
  • the system [300] comprises at least one transceiver unit [302], at least one processing unit [304], and at least one storage unit [306]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure.
  • system [300] may be present in a user device to implement the features of the present disclosure.
  • the system [300] may be a part of the user device, or may be independent of, but in communication with, the user device (which may also be referred to herein as a UE).
  • the system [300] may reside in a server or a network entity.
  • the system [300] may reside partly in the server/ network entity and partly in the user device.
  • the system [300] is configured to automatically assign a restricted data to a user, with the help of the interconnection between the components/units of the system [300].
  • the system [300] includes a transceiver unit [302]. The transceiver unit [302] is configured to receive, at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user.
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the restricted dashboard request refers to a request received for the dashboard with specific/restricted data such as geographical region-specific data.
  • a call performance dashboard may aggregate the call performance data in each circle of a network.
  • the circle refers to a specific geographic area or region. If user A has restricted access to city X circle, user A may only get data for city X while the call performance dashboard is configured for the whole country in which city X is located. Therefore, the output for the restricted dashboard request or the restricted report execution request output will depend on the type of access the user has.
  • the restricted report execution request refers to the execution or implementation of the request to generate a report based on user requirements. For example, if the report execution request is for city X, the report may be executed for city X only comprising all the required information about the requested data such as call performance data.
  • the report may include graphs, charts, and tables to represent the requested data.
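The circle-restricted behaviour described above, where user A sees only city X data on a country-wide call performance dashboard, can be illustrated with a small sketch. The data shapes, user names, and KPI fields below are assumptions for demonstration only and do not appear in the disclosure:

```python
# Illustrative sketch of circle-based filtering for a restricted dashboard
# request; all data shapes and names are hypothetical.
CALL_PERFORMANCE = {          # country-wide dashboard data, keyed by circle
    "city_X": {"drop_rate": 0.4, "setup_success": 99.1},
    "city_Y": {"drop_rate": 0.7, "setup_success": 98.5},
}

USER_ACCESS = {"user_A": {"city_X"}}   # user A is restricted to the city X circle

def restricted_view(user, dashboard):
    """Return only the circles the given user is permitted to see."""
    allowed = USER_ACCESS.get(user, set())
    return {circle: kpis for circle, kpis in dashboard.items()
            if circle in allowed}

view = restricted_view("user_A", CALL_PERFORMANCE)
print(view)   # only the city_X entry is returned
```

The same dashboard definition serves every user; the output simply depends on the type of access the requesting user has.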
  • the transceiver unit [302] may further transmit, from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request.
  • the hash code request refers to the request for the assignment of a unique hash code to the restricted data request. The unique hash code may help to identify duplicate requests sent to the transceiver unit [302].
  • a unique hash code refers to a distinct identifier generated by the trained model.
  • the unique hash code such as a fixed-size string of characters is generated for the restricted data request.
  • the hash code is unique for each unique input, thereby differentiating requests based on different hash codes.
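In the disclosure the unique hash code is produced by a trained AI/ML model. As a hedged stand-in, the key property, that the same request always maps to the same fixed-size code so duplicates can be identified, can be illustrated with an ordinary cryptographic hash over a canonical form of the request:

```python
# Sketch of deriving a fixed-size, deterministic code for a restricted data
# request. SHA-256 over canonical JSON is purely an illustrative stand-in
# for the trained model's output; the request fields are assumptions.
import hashlib
import json

def request_hash(request: dict) -> str:
    """Return a fixed-size string of characters unique to the request content."""
    canonical = json.dumps(request, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

req = {"user": "user_A", "type": "restricted_dashboard", "circle": "city_X"}
dup = {"circle": "city_X", "type": "restricted_dashboard", "user": "user_A"}

# The same logical request always yields the same code, exposing duplicates.
assert request_hash(req) == request_hash(dup)
```

Because the keys are sorted before hashing, two requests that differ only in field order produce identical codes, while any change in content produces a different code.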
  • the system [300] further includes a processing unit [304] connected to at least the transceiver unit [302]. The processing unit [304] is configured to receive, at the IPM unit [100a] from the trained model, the unique hash code based on the hash code request.
  • the unique hash code is generated via the trained model.
  • the trained model is trained via the Artificial Intelligence (AI)/Machine Learning (ML) layer. More particularly, the trained model is trained using machine learning techniques.
  • the machine learning technique refers to a method that may create a model to generate unique integer values (or unique hash code) for every restricted data request.
  • the processing unit [304] is further configured to fetch, at the IPM unit [100a] from a caching layer [506], the restricted data associated with the restricted data request, upon reception of the unique hash code.
  • the restricted data request may be executed at the IPM unit [100a].
  • the IPM unit [100a] may use the unique hash code at the caching layer [506] to retrieve the restricted data associated with the restricted data request.
  • the transceiver unit [302] is further configured to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the restricted data associated with the restricted data request.
  • the transceiver unit [302] may automatically assign the restricted data to the user. For instance, the unique hash code assigned to the restricted data request for accessing call performance data of city X is XYZ.
  • the processing unit [304] may retrieve the restricted data associated with the call performance of city X instead of the complete data of the country where city X is also located.
  • After retrieving the restricted data, the processing unit [304] automatically assigns the restricted data to the user so that the user may check only the call performance data of a particular city (say X) via a user interface. Therefore, the user may not have access to city Y, which exists on the same call performance dashboard.
  • the user can access only the restricted data for which the unique hash code is assigned.
  • the user only has read-only access to the dashboard with restricted data (such as call performance data of city X). The read-only access only allows the user to view the requested restricted data; the user may not be able to make any changes to the dashboard.
  • the processing unit [304] is further configured to generate, via a computation layer [100d], a set of computed restricted data based on at least the restricted report execution request.
  • the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer [506] (also referred to as the caching engine).
  • the set of computed restricted data refers to the processed data based on the request of the user.
  • the requested data is first received at the computational layer in a raw format and then the computational layer performs computation or processing on the received data to provide the user with the processed or final output data in the form of computed restricted data.
  • the processing unit [304] is further configured to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the set of computed restricted data associated with the restricted data request.
  • the IPM unit [100a] may send the request to the computation layer [100d] using the unique hash code.
  • the computation layer [100d] may compute the data based on the unique hash code and send the computed restricted data to the IPM unit [100a]. Further, the system includes a storage unit [306]. The storage unit [306] is connected to at least the transceiver unit [302] and the processing unit [304]. The storage unit [306] is configured to store the data required for the implementation of the features of the present disclosure, such as but not limited to restricted data, training data, report data, and dashboard data.
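The fetch-or-compute path described above, in which the caching layer is consulted first and the computation layer is used only when the restricted data is not found there, follows the classic cache-aside pattern. A minimal sketch, with all component names and data shapes assumed:

```python
# Cache-aside sketch of the fetch path: try the caching layer first,
# fall back to the computation layer on a miss, then populate the cache.
# All names here are illustrative assumptions.
cache = {}                                     # stands in for the caching layer [506]

def compute_restricted_data(hash_code: str) -> dict:
    """Stand-in for the computation layer [100d] producing fresh results."""
    return {"hash": hash_code, "kpis": {"drop_rate": 0.4}}

def fetch_restricted_data(hash_code: str) -> dict:
    data = cache.get(hash_code)
    if data is None:                           # cache miss
        data = compute_restricted_data(hash_code)
        cache[hash_code] = data                # populate for later requests
    return data

first = fetch_restricted_data("XYZ")           # miss: computed, then cached
second = fetch_restricted_data("XYZ")          # hit: served from the cache
assert first is second
```

Subsequent requests carrying the same unique hash code are served directly from the cache, avoiding repeated computation.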
  • Referring to FIG. 4, an exemplary method flow diagram [400] to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure, is shown.
  • the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure.
  • For describing the method [400], reference to the components of the system [300] (e.g., the caching layer) is also taken. Also, as shown in FIG. 4, the method [400] starts at step [402].
  • the method includes receiving, by a transceiver unit [302] at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user.
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the restricted dashboard request refers to a dashboard with performance parameters of the network for a specific/restricted geographical area.
  • a call performance dashboard may aggregate the call performance in each circle of a network.
  • the circle refers to a specific geographic area.
  • the restricted report execution request refers to the execution or implementation of the request to generate a report based on the report execution request. For example, if the report execution request is for city X, the report may be executed and displayed to the user for city X only.
  • the method includes transmitting, by the transceiver unit [302] from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request.
  • the hash code request refers to the request for the assignment of a unique hash code to the restricted data request.
  • the unique hash code may help to identify duplicate requests sent to the transceiver unit [302].
  • the method includes receiving, by a processing unit [304] at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request.
  • the unique hash code associated with the restricted data request is generated via the trained model.
  • the trained model is trained using a machine learning technique.
  • the method includes fetching, by the processing unit [304] at the IPM unit [100a] from a caching layer [110c, 506], a restricted data associated with the restricted data request, upon receiving the unique hash code.
  • the restricted data request may be executed at the IPM unit [100a].
  • the IPM unit [100a] may search the restricted data using a unique hash code at the caching layer [110c, 506] and retrieve the restricted data for the user based on the unique hash code.
  • the restricted data request is to obtain data associated with the internet speed of a 5G network in an area of Y city.
  • the processing unit [304] may retrieve the restricted data, i.e., the internet speed data of the 5G network in the city Y, and display it to the user either via the dashboard or in the form of the report.
  • the report may be downloaded by the user based on a request of the user to download the report.
  • the method includes automatically assigning, by the processing unit [304] from the IPM unit [100a] to the user via the load balancer in the network, the restricted data associated with the restricted data request.
  • the transceiver unit [302] may automatically assign the restricted data to the user.
  • the method further includes generating, by the processing unit [304] via a computation layer [100d], a set of computed restricted data based on at least the restricted report execution request.
  • the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer [506].
  • the method further includes automatically assigning, by the processing unit [304] from the IPM unit [100a] to the user via the load balancer [100k] in the network, the set of computed restricted data associated with the restricted data request.
  • the IPM unit [100a] may send the request to the computation layer [100d] to compute the restricted data and assign it to the user.
  • Referring to FIG. 5, an exemplary system architecture to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure, is shown.
  • the exemplary system architecture [500] includes but is not limited to a User Interface (UI) [502], the load balancer [100k], the IPM unit [100a], a caching layer [506], an Artificial Intelligence/Machine Learning (AI/ML) model [508], the computation layer [100d], the distributed file system [100j], and the distributed data lake [100u].
  • the caching layer [506] is similar to the caching layer [100c].
  • the user may initiate a restricted data request at the UI [502].
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the restricted data request is sent to the load balancer [100k].
  • the load balancer [100k] may efficiently distribute incoming network traffic across backend servers or microservices.
  • the load balancer [100k] ensures the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
  • the load balancer [100k] may forward the restricted data request to the Integrated Performance Management (IPM) unit [100a].
  • the IPM unit [100a] may send a request for assigning a unique hash code to the restricted data request for identification of any duplicate request to the AI/ML model [508].
  • the AI/ML model [508] is trained using a machine learning technique.
  • the unique hash code is assigned to the restricted data request, and the unique hash code is shared with the IPM unit [100a]. In an implementation of the present solution, the unique hash code is assigned to the restricted data request. The user can access only the data for which the unique hash code is assigned. The user may only have access to execute the request on the dashboard but may not modify the dashboard.
  • the IPM unit [100a] may fetch the restricted data associated with the restricted data request from the caching layer [506], if the data requested is present in the caching layer [506]. The restricted data may be fetched based on the unique hash code.
  • the caching layer [506] may send the restricted data to the IPM unit [100a].
  • the restricted data request may be executed through the distributed data lake [100u].
  • the restricted data request may be sent to the distributed data lake [100u] and, based on the unique hash code, the restricted data associated with the restricted data request may be received at the IPM unit [100a].
  • the IPM unit [100a] may send the restricted data request to the computation layer [100d].
  • the computation layer [100d] may compute the data from the distributed file system [100j], based on the unique hash code, and send the computed restricted data to the IPM unit [100a].
  • the IPM unit [100a] may send restricted data to the load balancer [100k]. The load balancer [100k] may forward the restricted data to the UI [502] for the user.
  • the user may have access only to the restricted data.
  • Referring to FIG. 6, an exemplary sequence flow diagram to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure, is shown.
  • a restricted data request initiated by the user may be sent to the load balancer [100k] via the User Interface (UI) [502].
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the restricted dashboard request refers to a dashboard for a specific/restricted geographical area.
  • the restricted report execution request refers to the execution or implementation of the request to generate a report based on the report execution request.
  • the load balancer [100k] may efficiently distribute incoming network traffic across backend servers or microservices.
  • the load balancer [100k] ensures the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
  • the UI [502] may contain a dashboard comprising the set of information. The set of information is associated with a unique hash code.
  • At step 2, the load balancer [100k] may forward the restricted data request to the Integrated Performance Management (IPM) unit [100a].
  • the IPM unit [100a] may send a request for assigning a unique hash code to the restricted data request to identify duplicate requests to the AI/ML model [508].
  • the request for assigning the unique hash code refers to the request for the assignment of a unique integer value to the restricted data request.
  • At step 4, after the application of the AI/ML at the AI/ML model [508], the unique hash code is assigned and received at the IPM unit [100a]. In an implementation of the present solution, the unique hash code is assigned to the restricted data request. The user can access only the data for which the unique hash code is assigned. The user may only have access to execute the request on the dashboard but may not modify the dashboard.
  • the method includes the IPM unit [100a] fetching the restricted data from the caching layer [506], if the requested data is present in the caching layer [506]. The restricted data may be fetched based on the unique hash code.
  • the restricted data request may be executed at the IPM unit [100a].
  • the IPM unit [100a] may search the restricted data via a unique hash code at the caching layer [506] and retrieve the restricted data based on the unique hash code. For instance, the restricted data request is to obtain data for call performance in the circle of city X from the call performance dashboard, where the call performance dashboard is for the country.
  • the caching layer [506] may send the restricted data to the IPM unit [100a].
  • the IPM unit [100a] may receive the restricted data from the caching layer [506], i.e., the call performance data in the circle of city X from the dashboard.
  • At step 7, if the restricted data is not present in either the caching layer [506] or the distributed data lake [100u], then the IPM unit [100a] may send the request to the computation layer [100d].
  • the computation layer [100d] may compute the restricted data based on the unique hash code and send the computed restricted data to the IPM unit [100a].
  • the process of computation of the restricted data includes receiving the restricted data at the computation layer [100d] for analysis of the restricted data and generating the computed restricted data.
  • At step 9, if the requested data is present in the distributed data lake [100u], the restricted data request may be executed through the distributed data lake [100u] by sending the restricted data request to the distributed data lake [100u] and receiving the restricted data at the IPM unit [100a].
  • the IPM unit [100a] may send restricted data to the load balancer [100k].
  • the load balancer [100k] may forward the restricted data to the UI [502] for the user.
  • the user may have access to the restricted data, in this instance, the call performance data in a single dashboard, along with other performance management and monitoring data. The user may be able to analyse the restricted data and make accurate decisions for improving the call performance.
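The sequence flow above can be condensed into a single end-to-end sketch. Every component (caching layer, distributed data lake, computation layer) is modeled as a plain in-memory object, the hash derivation stands in for the trained AI/ML model [508], and all names and data shapes are illustrative assumptions:

```python
# End-to-end sketch of the FIG. 6 sequence; components and request fields
# are hypothetical stand-ins, not the actual implementation.
def handle_restricted_request(request, cache, data_lake, compute):
    # Steps 3-4: obtain a unique, deterministic code for the request.
    hash_code = str(abs(hash(frozenset(request.items()))))
    # Steps 5-6: serve from the caching layer when the data is present.
    if hash_code in cache:
        return cache[hash_code]
    # Step 9: otherwise try the distributed data lake.
    if request["circle"] in data_lake:
        data = data_lake[request["circle"]]
    else:
        # Steps 7-8: fall back to the computation layer.
        data = compute(request)
    cache[hash_code] = data                    # populate the cache
    # Step 10: the result is returned toward the load balancer / UI.
    return data

cache = {}
data_lake = {"city_X": {"drop_rate": 0.4}}
compute_calls = []

def compute(req):
    compute_calls.append(req)
    return {"computed": True}

out = handle_restricted_request(
    {"user": "user_A", "circle": "city_X"}, cache, data_lake, compute)
print(out)   # served from the data lake; the computation layer is not used
```

Repeating the same request afterwards hits the cache, so neither the data lake nor the computation layer is consulted again.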
  • the present disclosure further discloses a non-transitory computer readable storage medium storing instructions to automatically assign a restricted data to a user, the instructions including executable code which, when executed by one or more units of a system, causes a transceiver unit [302] of the system to receive, at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user.
  • the restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
  • the instructions when executed by the system further cause the transceiver unit [302] to transmit, from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request.
  • the instructions when executed by the system further cause a processing unit [304] to receive, at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request.
  • the instructions when executed by the system further cause the processing unit [304] to fetch, at the IPM unit [100a] from a caching layer [506], the restricted data associated with the restricted data request, upon reception of the unique hash code.
  • the instructions when executed by the system further cause the processing unit [304] to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the restricted data associated with the restricted data request.
  • the present disclosure provides a technically advanced solution to automatically assign a restricted data to a user.
  • the present solution allows the sharing of restrictive access to information on the dashboard with a group of users via assigning counters.
  • the present solution further allows a user to create a KPI (Key Performance Indicator) and track the performance of a network via the counters.
  • the present solution allows the user to debug and visualize the KPI data using the counters.

Abstract

The present disclosure relates to a method and a system to automatically assign a restricted data. The method includes receiving, at an Integrated Performance Management (IPM) unit from a load balancer, a restricted data request associated with the user. The method includes transmitting, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. The method includes receiving, at the IPM unit from the trained model, a unique hash code based on the hash code request. The method includes fetching, at the IPM unit from a caching layer, a restricted data upon receiving the unique hash code. The method includes automatically assigning, from the IPM unit to the user via the load balancer, the restricted data associated with the restricted data request.

Description

METHOD AND SYSTEM TO AUTOMATICALLY ASSIGN RESTRICTED DATA TO A USER
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to automatically assigning a restricted data to a user.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Network performance management systems typically track network elements and data using network monitoring tools. Further, the network performance management systems combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operator can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004] In network performance management systems, managing the network via multiple dashboards leads to problems such as delayed decision-making and inaccurate assessment, as the network operators have to manage and monitor performance on multiple dashboards. The problem arises when the same dashboard has to be assigned to different users, where the users have different permissions and access. One way to solve this issue is to create multiple dashboards, each with specific permissions assigned to the users. Another way is to send permission-specific data to fulfil the administrator requirement. However, both of these solutions are cumbersome, requiring assigning and keeping track of multiple dashboards while maintaining them. Further, creating and maintaining multiple dashboards may become labour-intensive and complex, especially with an increase in the number of users or their roles.
[0005] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method to automatically assign a restricted data to a user. The method includes receiving, by a transceiver unit at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The method further includes transmitting, by the transceiver unit from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. Furthermore, the method includes receiving, by a processing unit at the IPM unit from the trained model, a unique hash code based on the hash code request. Further, the method includes fetching, by the processing unit at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon receiving the unique hash code. Further, the method includes automatically assigning, by the processing unit from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
[0008] In an exemplary aspect of the present disclosure, the method further includes generating, by the processing unit via a computational layer, a set of computed restricted data based on at least the restricted report execution request, wherein the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer.
[0009] In an exemplary aspect of the present disclosure, the method further includes automatically assigning, by the processing unit from the IPM unit to the user via the load balancer in the network, the set of computed restricted data associated with the restricted data request.
[0010] In an exemplary aspect of the present disclosure, the unique hash code associated with the restricted data request is generated via the trained model, wherein the model is trained using a machine learning technique.
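By way of illustration only, the assignment flow described above (receive a restricted data request, obtain a unique hash code for it, try the caching layer, fall back to the computational layer on a cache miss, and return the restricted data to the user) may be sketched as follows. All class, method, and field names here are hypothetical, and the default hash function merely stands in for the trained model:

```python
import hashlib

class RestrictedDataAssigner:
    """Illustrative sketch of the assignment flow at the IPM unit.

    The `cache` object stands in for the caching layer, `compute_fn` for the
    computational layer, and `hash_fn` for the trained model that returns a
    unique hash code for a restricted data request.
    """

    def __init__(self, cache, compute_fn, hash_fn=None):
        self.cache = cache
        self.compute_fn = compute_fn
        # Placeholder for the trained model: derive a deterministic code
        # from the request's key/value pairs.
        self.hash_fn = hash_fn or (lambda req: hashlib.sha256(
            repr(sorted(req.items())).encode()).hexdigest())

    def assign(self, request):
        # Steps 1-2: obtain the unique hash code for this request.
        code = self.hash_fn(request)
        # Step 3: attempt to fetch the restricted data from the caching layer.
        data = self.cache.get(code)
        if data is None:
            # Cache miss: generate a set of computed restricted data via the
            # computational layer and store it for subsequent requests.
            data = self.compute_fn(request)
            self.cache[code] = data
        # Step 4: the restricted data is returned (assigned) to the user.
        return data
```

In use, the first request for a given restricted dashboard triggers computation, and later identical requests are served from the cache.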
[0011] Another aspect of the present disclosure may relate to a system to automatically assign a restricted data to a user. The system includes a transceiver unit. The transceiver unit is configured to receive, at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The transceiver unit is further configured to transmit, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. The system includes a processing unit connected to at least the transceiver unit. The processing unit is configured to receive, at the IPM unit from the trained model, a unique hash code based on the hash code request. The processing unit is further configured to fetch, at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon reception of the unique hash code. The processing unit is further configured to automatically assign, from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
[0012] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions to automatically assign a restricted data to a user, the instructions including executable code which, when executed by one or more units of a system, cause a transceiver unit of the system to receive, at an Integrated Performance Management (IPM) unit from a load balancer in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The instructions when executed by the system further cause the transceiver unit to transmit, from the IPM unit to a trained model in the network, a hash code request associated with the restricted data request. The instructions when executed by the system further cause a processing unit to receive, at the IPM unit from the trained model, a unique hash code based on the hash code request. The instructions when executed by the system further cause the processing unit to fetch, at the IPM unit from a caching layer, the restricted data associated with the restricted data request, upon reception of the unique hash code. The instructions when executed by the system further cause the processing unit to automatically assign, from the IPM unit to the user via the load balancer in the network, the restricted data associated with the restricted data request.
OBJECTS OF THE INVENTION
[0013] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0014] It is an object of the present disclosure to share restrictive access to information on the dashboard with at least one user via assigning counters.
[0015] It is another object of the present disclosure to create a KPI (Key Performance Indicator) and track the performance of a network via the counters.
[0016] It is yet another object of the present disclosure to debug and visualize the KPI data using the counters.
[0017] It is yet another object of the present disclosure to automatically assign the restricted data to the user based on a request received to assign the restricted data.
DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0019] FIG. 1 illustrates an exemplary block diagram of a network performance management system.
[0020] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0021] FIG. 3 illustrates an exemplary block diagram of a system to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure.
[0022] FIG. 4 illustrates a method flow diagram to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure.
[0023] FIG. 5 illustrates an exemplary system architecture to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure.
[0024] FIG. 6 illustrates a sequence flow diagram to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure.
[0025] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0026] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0027] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0028] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0029] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0030] The word “exemplary” and/or “demonstrative” are used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “include,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0031] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0032] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smartdevice”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device, or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, and any other such unit(s) which are required to implement the features of the present disclosure.
[0033] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0034] As used herein “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also be referred to as a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0035] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0036] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0037] As discussed in the background section, the current known solutions have several shortcomings. Assigning same dashboard for different users with different permissions and access is a major issue during the monitoring and management of the network parameters. Monitoring and managing multiple dashboards for different users with different access and permission leads to problems like delayed decision making, inaccurate assessment as the administrative user has to manage and monitor performance on multiple dashboards. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system to automatically assign a restricted data to a user based on a request of the user.
[0038] FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: an integrated performance management unit [100a], a normalization layer [100b], a computation layer [100d], an anomaly detection layer [100o], a streaming engine [100l], a load balancer [100k], an operations and management system [100p], an API gateway system [100r], an analysis engine [100h], a parallel computing framework [100i], a forecasting engine [100t], a distributed file system [100j], a mapping layer [100s], a distributed data lake [100u], a scheduling layer [100g], a reporting engine [100m], a message broker [100e], a graph layer [100f], a caching layer [100c], a service quality manager [100q] and a correlation engine [100n]. Exemplary connections between these subsystems are also shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0039] Following are the various components of the system [100], as shown in FIG. 1:
[0040] Integrated Performance Management (IPM) unit [100a] is associated with a performance management engine [100v] and a Key Performance Indicator (KPI) Engine [100w].
[0041] Performance Management Engine [100v]: The Performance Management engine [100v] is a crucial component of the IPM system [100a], responsible for collecting, processing, and managing performance counter data from various data sources within the network (e.g., 5G network). As used herein, the counter data includes metrics such as connection speed, latency, data transfer rates, and many others. The counter data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in the Distributed Data Lake [100u]. The distributed data lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The Performance Management engine [100v] also enables the reporting and visualization of the performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability. An operator in the IPM system [100a] may be an individual, a device, an administrator, and the like who may interact with or manage the network.
[0042] Key Performance Indicator (KPI) Engine [100w]: The Key Performance Indicator (KPI) Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements. The Key Performance Indicator (KPI) Engine [100w] uses the performance counters, which are collected and processed by the Performance Management engine [100v] from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs may include at least one of: data throughput, latency, packet loss rate, and more. Once the KPIs are computed, the KPIs are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of the network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine [100v], the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
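For illustration only, deriving KPIs such as packet loss rate and throughput from raw performance counters may be sketched as below. The counter names and formulas are examples chosen for the sketch, not the KPI set of the disclosure:

```python
def compute_kpis(counters):
    """Derive a few illustrative KPIs from raw performance counters.

    `counters` is assumed to be a dict of hypothetical counter names;
    the formulas are standard textbook definitions, shown as examples.
    """
    sent = counters["packets_sent"]
    lost = counters["packets_lost"]
    return {
        # Packet loss rate as a percentage of packets sent.
        "packet_loss_rate_pct": 100.0 * lost / sent if sent else 0.0,
        # Average throughput in megabits per second over the interval.
        "throughput_mbps": counters["bytes_transferred"] * 8
                           / counters["interval_s"] / 1e6,
        # Mean latency over the collection interval.
        "avg_latency_ms": counters["latency_sum_ms"]
                          / counters["latency_samples"],
    }
```

The computed KPIs could then be aggregated per element or per region before being stored, mirroring the segregation step described above.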
[0043] Ingestion layer: The Ingestion layer (not shown in FIG. 1) forms a key part of the IPM system [100a]. The ingestion layer primarily performs the function to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes the data by validating the data integrity and correctness to ensure that the data is fit for further use. Following the validation, the data is routed to various components of the IPM system [100a], including the Normalization layer [100b], Streaming Engine [100l], Streaming Analytics, and Message Brokers [100e]. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
[0044] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data in the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyze. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use.
Additionally, the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the integrated performance management unit [100a] through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine [100h] for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
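As a minimal sketch of the normalization step, vendor-specific raw fields can be mapped onto one common schema so that records from different sources become comparable. All field names below are hypothetical:

```python
def normalize(record):
    """Map a raw ingested record onto a common schema (illustrative only).

    Different sources may label the same field differently (e.g. a
    hypothetical "elementId" vs "ne_id"); normalization picks whichever
    is present and coerces types so downstream layers see uniform data.
    """
    return {
        "element_id": str(record.get("elementId") or record.get("ne_id") or ""),
        # Lower-case the metric name so comparisons are case-insensitive.
        "metric": (record.get("metric") or record.get("counterName") or "").lower(),
        "value": float(record.get("value", 0.0)),
        "timestamp": int(record.get("ts") or record.get("timestamp") or 0),
    }
```

The normalized record could then be written to the data lake, caching layer, or graph layer, as the paragraph above describes.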
[0045] Caching layer [100c]: The Caching Layer [100c] in the IPM system [100a] plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalizer Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
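A caching layer that temporarily holds reusable data can be sketched as a small time-to-live (TTL) store; the TTL policy and class name are illustrative assumptions, not the disclosed implementation:

```python
import time

class CachingLayer:
    """Minimal TTL cache sketch: entries expire after `ttl_seconds`, so
    frequently reused data is served from memory while stale entries fall
    through to the backing store.

    `clock` is injectable so the behaviour can be tested deterministically.
    """

    def __init__(self, ttl_seconds=300.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (expiry_timestamp, value)

    def put(self, key, value):
        self._store[key] = (self.clock() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if self.clock() >= expiry:
            # Evict the expired entry; the caller falls back to the source.
            del self._store[key]
            return None
        return value
```

A production caching layer would add eviction by size and concurrency control; the sketch shows only the reuse-within-a-window behaviour described above.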
[0046] Computation layer [100d]: The Computation Layer [100d] in the IPM system [100a] serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalizer Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and the Streaming Engine [100l], utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine [100h] performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager [100q] assesses and ensures the quality of the services. The Streaming Engine [100l] processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0047] Message broker [100e]: The Message Broker [100e], an integral part of the IPM system [100a], operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centered around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
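The topic-based publish-subscribe pattern described above can be illustrated with a toy broker; persistence, fault tolerance, and filesystem-backed storage are deliberately omitted, and all names are hypothetical:

```python
from collections import defaultdict

class MessageBroker:
    """Toy publish-subscribe broker illustrating topic-based fan-out.

    Producers publish a message to a topic; every consumer subscribed to
    that topic receives it. Durability and fault tolerance are omitted.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # Register a consumer callback for a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every consumer of this topic only.
        for callback in self._subscribers[topic]:
            callback(message)
```

Consumers of other topics are unaffected by a publish, which is the property that lets many ad-hoc consumers share one data stream.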
[0048] Graph layer [100f]: The Graph Layer [100f] plays a pivotal role in the IPM system [100a]. It can model a variety of data types, including alarm, counter, configuration, CDR data, Inframetric data, Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Graph Layer [100f] acts as a Relationship Modeler that offers extensive modeling capabilities. For instance, it can model Alarm and Counter data, Vprobe, and Alarm data, elucidating their interrelationships. Moreover, the Relationship Modeler should adapt to processing steps provided in the model and delivering the results to the system requested, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation Engine [100n], Performance Management Engine, or KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
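As a minimal sketch of such relationship modeling, diverse data items (e.g. an alarm and the counter it relates to) can be stored as nodes of an undirected graph; node labels and the class name are illustrative assumptions:

```python
from collections import defaultdict

class RelationshipModeler:
    """Toy graph of relationships among network data items.

    Nodes are plain string labels (e.g. a hypothetical "alarm:42" or
    "counter:cpu_util"); an edge records that two items are related.
    """

    def __init__(self):
        self._edges = defaultdict(set)

    def relate(self, a, b):
        # Record an undirected relationship between two data elements.
        self._edges[a].add(b)
        self._edges[b].add(a)

    def related_to(self, node):
        # Return the related items in a stable, sorted order.
        return sorted(self._edges[node])
```

Traversing such a graph is what would let, for example, a correlation engine find every counter tied to a given alarm.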
[0049] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the IPM System [100a], endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, the execution of an Elastic Search query, and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another micro-service. The microservice refers to a single system architecture to provide multiple functions. Some of the microservices in communication are API calls and remote procedure calls. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
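Executing tasks at predetermined intervals can be sketched with a priority queue keyed by each task's next run time. To keep the example deterministic, simulated time is advanced explicitly instead of using wall-clock threads; all names are illustrative:

```python
import heapq

class SchedulingLayer:
    """Sketch of periodic task execution over simulated time.

    Each scheduled task re-enqueues itself `interval` seconds after it
    runs; run_until() drains every due task up to a time horizon.
    """

    def __init__(self):
        self._queue = []  # heap of (next_run_time, sequence, interval, task)
        self._seq = 0     # tie-breaker so tasks never compare directly

    def schedule(self, task, interval, first_run=0.0):
        heapq.heappush(self._queue, (first_run, self._seq, interval, task))
        self._seq += 1

    def run_until(self, end_time):
        # Execute every task whose next run time falls within the horizon.
        while self._queue and self._queue[0][0] <= end_time:
            when, seq, interval, task = heapq.heappop(self._queue)
            task(when)  # the task receives its scheduled run time
            heapq.heappush(self._queue, (when + interval, seq, interval, task))
```

A real scheduler would drive this loop from a clock or timer service; the queue discipline shown is the part that guarantees periodic, ordered execution.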
[0050] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the IPM System [100a], designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0051] Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management unit [100a], providing a user-friendly yet advanced platform for executing computing tasks in parallel. The parallel computing framework [100i] highlights both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
[0052] Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management unit [100a], enabling multiple clients to access and interact with data seamlessly. The Distributed File system [100j] is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
[0053] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management unit [100a], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. The LB [100k] includes round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
[0054] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management unit [100a]. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes.
Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform the required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine [100l] is configured to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the Integrated Performance Management unit [100a].
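Returning to the Load Balancer [100k] of paragraph [0053], its round-robin and header-based dispatch strategies can be illustrated with a minimal sketch. This is illustrative only: the backend names and the "X-Request-Type" header below are hypothetical, as the disclosure does not specify concrete header fields or backend identifiers.

```python
from itertools import cycle

class MiniLoadBalancer:
    """Illustrative dispatcher combining round-robin and header-based routing.

    The backend names and the 'X-Request-Type' header are hypothetical
    stand-ins; the disclosure does not name specific fields or backends.
    """

    def __init__(self, backends):
        self._round_robin = cycle(backends)
        # Header value -> dedicated backend (header-based dispatch rules).
        self._header_routes = {}

    def add_header_route(self, header_value, backend):
        self._header_routes[header_value] = backend

    def dispatch(self, request):
        # Header-based dispatch takes priority when a rule matches.
        route = self._header_routes.get(request.get("X-Request-Type"))
        if route is not None:
            return route
        # Otherwise fall back to simple round-robin rotation.
        return next(self._round_robin)

lb = MiniLoadBalancer(["ipm-1", "ipm-2", "ipm-3"])
lb.add_header_route("report", "report-service")

routed = lb.dispatch({"X-Request-Type": "report"})  # header-based routing
first = lb.dispatch({})                             # round-robin rotation
second = lb.dispatch({})
```

Context-based dispatch would extend the same pattern, keying routing decisions on request context (such as a pending event subscription) rather than on a header field.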
[0055] Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management unit [100a]. The fundamental purpose of the Reporting Engine [100m] is to dynamically create report layouts of API data, cater to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
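As a minimal sketch of the report-compilation step described above, the snippet below assembles client-selected dashboard rows into a tabular report. It is illustrative only: the disclosure specifies Excel output delivered via the Notification Engine, whereas this sketch emits CSV using only the standard library, and the column names and sample row are assumptions.

```python
import csv
import io

def generate_report(rows, columns):
    """Compile dashboard data rows into a simple tabular report.

    Simplified stand-in for the Reporting Engine [100m]: CSV is used in
    place of the Excel format named in the disclosure, and the column
    names below are hypothetical.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    for row in rows:
        # Missing fields are left blank rather than failing the report.
        writer.writerow({c: row.get(c, "") for c in columns})
    return buf.getvalue()

report = generate_report(
    [{"circle": "City X", "dropped_calls": 12, "total_calls": 4800}],
    ["circle", "dropped_calls", "total_calls"],
)
```

In a full implementation, the resulting report would then be handed to the Notification Engine for email delivery as the disclosure describes.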
[0056] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method to automatically assign a restricted data to a user, utilizing the system. In another implementation, the computing device [200] itself implements the method to automatically assign a restricted data to a user, using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0057] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0058] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0059] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which, in combination with the computing device [200], causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0060] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0061] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220], and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224], and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210] or other non-volatile storage for later execution.
[0062] The present disclosure is implemented by a system [300] (as shown in FIG. 3). In an implementation, the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4).
[0063] Referring to FIG. 3, an exemplary block diagram of a system [300] to automatically assign a restricted data to a user is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], at least one processing unit [304], and at least one storage unit [306]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device, or may be independent of but in communication with the user device (also referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0064] The system [300] is configured to automatically assign a restricted data to a user, with the help of the interconnection between the components/units of the system [300].
[0065] The system [300] includes a transceiver unit [302]. The transceiver unit [302] is configured to receive, at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request.
[0066] As used herein, the restricted dashboard request refers to a request received for the dashboard with specific/restricted data, such as geographical region-specific data. For instance, a call performance dashboard may aggregate the call performance data in each circle of a network. The circle refers to a specific geographic area or region. If user A has restricted access to the city X circle, user A may only get data for city X while the call performance dashboard is configured for the whole country in which city X is located. Therefore, the output for the restricted dashboard request or the restricted report execution request will depend on the type of access the user has.
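The circle-level restriction described in this paragraph can be sketched as a simple filter over dashboard rows. The row layout, circle names, and metric below are hypothetical; the disclosure only states that a user restricted to one circle (e.g., city X) receives that circle's data even though the dashboard covers the whole country.

```python
def restrict_dashboard(dashboard_rows, allowed_circles):
    """Keep only the rows whose circle the user is entitled to see."""
    return [row for row in dashboard_rows if row["circle"] in allowed_circles]

# Country-wide call performance dashboard; field names are illustrative.
country_dashboard = [
    {"circle": "City X", "call_success_rate": 98.2},
    {"circle": "City Y", "call_success_rate": 97.5},
]

# User A has restricted access to the City X circle only.
user_a_view = restrict_dashboard(country_dashboard, {"City X"})
```

The same dashboard definition thus yields different outputs per user, which is the behaviour the paragraph describes.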
[0067] As used herein, the restricted report execution request refers to the execution or implementation of the request to generate a report based on user requirements. For example, if the report execution request is for city X, the report may be executed for city X only, comprising all the required information about the requested data, such as call performance data. The report may include graphs, charts, and tables to represent the requested data. The transceiver unit [302] may further transmit, from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request. The hash code request refers to the request for the assignment of a unique hash code to the restricted data request. The unique hash code may help to identify duplicate requests sent to the transceiver unit [302].
[0068] As used herein, a unique hash code refers to a distinct identifier generated by the trained model. The unique hash code, such as a fixed-size string of characters, is generated for the restricted data request. The hash code is unique for each unique input, thereby differentiating requests based on different hash codes.
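A minimal sketch of such a fixed-size identifier follows. Note the assumption: the disclosure generates the unique hash code via a trained AI/ML model, whereas this sketch uses SHA-256 over a canonical serialization purely to demonstrate the stated property, namely that identical requests map to the same code so duplicates are detectable.

```python
import hashlib
import json

def hash_code_for_request(request):
    """Derive a fixed-size string identifier for a restricted data request.

    Stand-in only: the disclosure assigns the unique hash code via a
    trained AI/ML model; SHA-256 over a canonical JSON form is used here
    to illustrate the behaviour, not the claimed mechanism.
    """
    canonical = json.dumps(request, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

req = {"user": "A", "scope": "City X", "type": "restricted_dashboard"}
dup = {"type": "restricted_dashboard", "scope": "City X", "user": "A"}

# The same logical request yields the same code, exposing the duplicate.
assert hash_code_for_request(req) == hash_code_for_request(dup)
```

Because the code is deterministic and fixed-size, it can also serve as the cache key used in the fetch step described later.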
[0069] The system [300] further includes a processing unit [304] connected to at least the transceiver unit [302]. The processing unit [304] is configured to receive, at the IPM unit [100a] from the trained model, the unique hash code based on the hash code request. The unique hash code is generated via the trained model. The trained model is trained via the Artificial Intelligence (AI)/Machine Learning (ML) layer. More particularly, the trained model is trained using machine learning techniques. The machine learning technique refers to a method that may create a model to generate unique integer values (or unique hash codes) for every restricted data request.
[0070] The processing unit [304] is further configured to fetch, at the IPM unit [100a] from a caching layer [506], the restricted data associated with the restricted data request, upon reception of the unique hash code. In an implementation of the present disclosure, once the unique hash code is assigned to the restricted data request, the restricted data request may be executed at the IPM unit [100a]. The IPM unit [100a] may use the unique hash code at the caching layer [506] to retrieve the restricted data associated with the restricted data request.
[0071] The transceiver unit [302] is further configured to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the restricted data associated with the restricted data request. In an implementation of the present disclosure, once the restricted data is retrieved from the caching layer [506], the transceiver unit [302] may automatically assign the restricted data to the user. For instance, the unique hash code assigned to the restricted data request for accessing call performance data of city X is XYZ. The processing unit [304] may retrieve the restricted data associated with the call performance of city X instead of the complete data of the country where city X is located. After retrieving the restricted data, the processing unit automatically assigns the restricted data to the user so that the user may check only the call performance data of a particular city (say X) via a user interface. Therefore, the user may not have access to city Y, which exists on the same call performance dashboard. The user can access only the restricted data for which the unique hash code is assigned. In an embodiment of the present disclosure, the user only has read-only access to the dashboard with restricted data (such as call performance data of city X). The read-only access only allows the user to view the requested restricted data; the user may not be able to make any changes to the dashboard.
[0072] The processing unit [304] is further configured to generate, via a computation layer [100d], a set of computed restricted data based on at least the restricted report execution request. The set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer [506] (also referred to as the caching engine). The set of computed restricted data refers to the processed data based on the request of the user. The requested data is first received at the computation layer in a raw format, and then the computation layer performs computation or processing on the received data to provide the user with the processed or final output data in the form of computed restricted data. The processing unit [304] is further configured to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the set of computed restricted data associated with the restricted data request. In an implementation of the present disclosure, if the restricted data associated with the restricted data request is not present in the caching layer, the IPM unit [100a] may send the request to the computation layer [100d] using the unique hash code. The computation layer [100d] may compute the data based on the unique hash code and send the computed restricted data to the IPM unit [100a]. Further, the system includes a storage unit [306]. The storage unit [306] is connected to at least the transceiver unit [302] and the processing unit [304]. The storage unit [306] is configured to store the data required for the implementation of the features of the present invention, such as but not limited to restricted data, training data, report data, and dashboard data.
[0073] Referring to FIG. 4, an exemplary method flow diagram [400] to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. To explain FIG. 4, reference to the components (e.g., the caching layer) is also taken from subsequent FIG. 5 for a better understanding of the invention. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0074] At step [404], the method includes receiving, by a transceiver unit [302] at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. For example, the restricted dashboard request refers to a dashboard with performance parameters of the network for a specific/restricted geographical area. For instance, a call performance dashboard may aggregate the call performance in each circle of a network. The circle refers to a specific geographic area. The restricted report execution request refers to the execution or implementation of the request to generate a report based on the report execution request. For example, if the report execution request is for city X, the report may be executed and displayed to the user for city X only.
[0075] Next, at step [406], the method includes transmitting, by the transceiver unit [302] from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request. The hash code request refers to the request for the assignment of a unique hash code to the restricted data request. The unique hash code may help to identify duplicate requests sent to the transceiver unit [302].
[0076] Next, at step [408], the method includes receiving, by a processing unit [304] at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request. The unique hash code associated with the restricted data request is generated via the trained model. The trained model is trained using a machine learning technique.
[0077] Next, at step [410], the method includes fetching, by the processing unit [304] at the IPM unit [100a] from a caching layer [100c, 506], a restricted data associated with the restricted data request, upon receiving the unique hash code. In an implementation of the present disclosure, once the unique hash code is assigned to the restricted data request, the restricted data request may be executed at the IPM unit [100a]. The IPM unit [100a] may search the restricted data using the unique hash code at the caching layer [100c, 506] and retrieve the restricted data for the user based on the unique hash code. For instance, the restricted data request is to obtain data associated with the internet speed of a 5G network in an area of city Y. The processing unit [304] may retrieve the restricted data, i.e., the internet speed data of the 5G network in city Y, and display it to the user either via the dashboard or in the form of the report. The report may be downloaded by the user based on a request of the user to download the report.
[0078] At step [412], the method includes automatically assigning, by the processing unit [304] from the IPM unit [100a] to the user via the load balancer in the network, the restricted data associated with the restricted data request. In an implementation of the present disclosure, once the restricted data is retrieved from the caching layer [100c], the transceiver unit [302] may automatically assign the restricted data to the user.
[0079] The method further includes generating, by the processing unit [304] via a computation layer [100d], a set of computed restricted data based on at least the restricted report execution request. The set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer [506]. The method further includes automatically assigning, by the processing unit [304] from the IPM unit [100a] to the user via the load balancer [100k] in the network, the set of computed restricted data associated with the restricted data request. In an implementation of the present disclosure, if the restricted data associated with the restricted data request is not present in the caching layer, the IPM unit [100a] may send the request to the computation layer [100d] to compute the restricted data and assign it to the user.
[0080] The method terminates at step [414].
[0081] Referring to FIG. 5, an exemplary system architecture to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure is shown.
[0082] The exemplary system architecture [500] includes but is not limited to a User Interface (UI) [502], the load balancer [100k], the IPM unit [100a], a caching layer [506], an Artificial Intelligence/Machine Learning (AI/ML) model [508], the computation layer [100d], the distributed file system [100j], and the distributed data lake [100u]. In an implementation of the present disclosure, the caching layer [506] is similar to the caching layer [100c].
[0083] To automatically assign a restricted data to the user, the user may initiate a restricted data request at the UI [502]. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The restricted data request is sent to the load balancer [100k].
[0084] The load balancer [100k] may efficiently distribute incoming network traffic across backend servers or microservices. The load balancer [100k] ensures the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. [0085] The load balancer [100k] may forward the restricted data request to the Integrated Performance Management (IPM) unit [100a]. The IPM unit [100a] may send a request for assigning a unique hash code to the restricted data request, for identification of any duplicate request, to the AI/ML model [508]. The AI/ML model [508] is trained using a machine learning technique.
[0086] After the application of the AI/ML at the AI/ML model [508] to generate the unique hash code, the unique hash code is assigned to the restricted data request, and the unique hash code is shared with the IPM unit [100a]. In an implementation of the present solution, the unique hash code is assigned to the restricted data request. The user can access only the data for which the unique hash code is assigned. The user may only have access to execute the request on the dashboard but may not modify the dashboard.
[0087] Further, the IPM unit [100a] may fetch the restricted data associated with the restricted data request from the caching layer [506], if the data requested is present in the caching layer [506]. The restricted data may be fetched based on the unique hash code. The caching layer [506] may send the restricted data to the IPM unit [100a].
[0088] If the restricted data is not present in the caching layer [506], but is present in the distributed data lake [100u], the restricted data request may be executed through the distributed data lake [100u]. The restricted data request may be sent to the distributed data lake [100u] and, based on the unique hash code, the restricted data associated with the restricted data request may be received at the IPM unit [100a].
[0089] If the restricted data is not present in either the caching layer [506] or the distributed data lake [100u], then the IPM unit [100a] may send the restricted data request to the computation layer [100d]. The computation layer [100d] may compute the data from the distributed file system [100j], based on the unique hash code, and send the computed restricted data to the IPM unit [100a].
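The tiered lookup described in paragraphs [0087] to [0089] (caching layer first, then the distributed data lake, and finally the computation layer) can be sketched as follows. The dict-based tiers and the compute callable are illustrative stand-ins for the actual subsystems.

```python
def resolve_restricted_data(hash_code, cache, data_lake, compute):
    """Resolve a restricted data request through successive tiers.

    `cache` and `data_lake` are plain dicts standing in for the caching
    layer [506] and distributed data lake [100u]; `compute` stands in for
    the computation layer [100d] reading the distributed file system [100j].
    """
    if hash_code in cache:        # tier 1: caching layer
        return cache[hash_code]
    if hash_code in data_lake:    # tier 2: distributed data lake
        return data_lake[hash_code]
    return compute(hash_code)     # tier 3: compute on demand

def compute_from_dfs(hash_code):
    # Hypothetical computation over raw files keyed by the hash code.
    return f"computed:{hash_code}"

cache = {"h1": "cached data"}
data_lake = {"h2": "lake data"}

cache_hit = resolve_restricted_data("h1", cache, data_lake, compute_from_dfs)
lake_hit = resolve_restricted_data("h2", cache, data_lake, compute_from_dfs)
computed = resolve_restricted_data("h3", cache, data_lake, compute_from_dfs)
```

Each tier is consulted only when the previous one misses, so cached requests avoid the cost of data lake access and recomputation.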
[0090] The IPM unit [100a] may send the restricted data to the load balancer [100k]. The load balancer [100k] may forward the restricted data to the UI [502] for the user. In an implementation of the present disclosure, based on the computed restricted data received at the IPM unit [100a], the user may have access only to the restricted data. [0091] Referring to FIG. 6, an exemplary sequence flow diagram to automatically assign a restricted data to a user, in accordance with exemplary implementations of the present disclosure, is shown.
[0092] In step 1, a restricted data request initiated by the user may be sent to the load balancer [100k] via the User Interface (UI) [502]. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The restricted dashboard request refers to a dashboard for a specific/restricted geographical area. The restricted report execution request refers to the execution or implementation of the request to generate a report based on the report execution request. The load balancer [100k] may efficiently distribute incoming network traffic across backend servers or microservices. The load balancer [100k] ensures the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. In an implementation of the present solution, the UI [502] may contain a dashboard comprising the set of information. The set of information is associated with a unique hash code.
[0093] In step 2, the load balancer [100k] may forward the restricted data request to the Integrated Performance Management (IPM) unit [100a].
[0094] Next, in step 3, the IPM unit [100a] may send a request to the AI/ML model [508] for assigning a unique hash code to the restricted data request, to identify duplicate requests. The request for assigning the unique hash code refers to the request for the assignment of a unique integer value to the restricted data request.
[0095] Further, in step 4, after the application of the AI/ML at the AI/ML model [508], the unique hash code is assigned and received at the IPM unit [100a]. In an implementation of the present solution, the unique hash code is assigned to the restricted data request. The user can access only the data for which the unique hash code is assigned. The user may only have access to execute the request on the dashboard but may not modify the dashboard.
[0096] Next, in step 5, the method includes the IPM unit [100a] fetching the restricted data from the caching layer [506], if the requested data is present in the caching layer [506]. The restricted data may be fetched based on the unique hash code. In an implementation of the present disclosure, once the unique hash code is assigned to the restricted data request, the restricted data request may be executed at the IPM unit [100a]. The IPM unit [100a] may search the restricted data via the unique hash code at the caching layer [506] and retrieve the restricted data based on the unique hash code. For instance, the restricted data request is to obtain data for call performance in the circle of city X from the call performance dashboard, where the call performance dashboard is for the country.
[0097] Next, in step 6, the caching layer [506] may send the restricted data to the IPM unit [100a]. For instance, the IPM unit [100a] may receive the restricted data from the caching layer [506], i.e., the call performance data in the circle of city X from the dashboard.
[0098] Further, in step 7, if the restricted data is not present in either the caching layer [506] or the distributed data lake [100u], then the IPM unit [100a] may send the request to the computation layer [100d].
[0099] In step 8, the computation layer [100d] may compute the restricted data based on the unique hash code and send the computed restricted data to the IPM unit [100a]. The process of computation of the restricted data includes receiving the restricted data at the computation layer [100d] for analysis of the restricted data and generating the computed restricted data.
[0100] In step 9, if the requested data is present in the distributed data lake [100u], the restricted data request may be executed through the distributed data lake [100u] by sending the restricted data request to the distributed data lake [100u] and receiving the restricted data at the IPM unit [100a].
[0101] Next, in step 10, the IPM unit [100a] may send the restricted data to the load balancer [100k].
[0102] In step 11, the load balancer [100k] may forward the restricted data to the UI [502] for the user. In an implementation of the present disclosure, based on the computed data received at the IPM unit [100a], the user may have access to the restricted data, in this instance, the call performance data in a single dashboard, along with other performance management and monitoring data. The user may be able to analyse the restricted data and make accurate decisions for improving the call performance.
[0103] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions to automatically assign a restricted data to a user, the instructions including executable code which, when executed by one or more units of a system, causes a transceiver unit [302] of the system to receive, at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user. The restricted data request is at least one of a restricted dashboard request and a restricted report execution request. The instructions, when executed by the system, further cause the transceiver unit [302] to transmit, from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request. The instructions, when executed by the system, further cause a processing unit [304] to receive, at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request. The instructions, when executed by the system, further cause the processing unit [304] to fetch, at the IPM unit [100a] from a caching layer [506], the restricted data associated with the restricted data request, upon reception of the unique hash code. The instructions, when executed by the system, further cause the processing unit [304] to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the restricted data associated with the restricted data request.
[0104] As is evident from the above, the present disclosure provides a technically advanced solution to automatically assign a restricted data to a user. The present solution allows the sharing of restrictive access to information on the dashboard with a group of users via assigning counters. The present solution further allows a user to create a KPI (Key Performance Indicator) and track the performance of a network via the counters. Furthermore, the present solution allows the user to debug and visualize the KPI data using the counters.
[0105] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0106] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
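The cache-miss path recited in claims 2-3 and 6-7 below (generating a set of computed restricted data via a computation layer [100d] when the restricted data is not detected at the caching layer [506]) can be sketched as follows. This is a hedged sketch: the function and parameter names are illustrative, the computation layer is modelled as a plain callable, and writing the computed result back to the cache is an assumption not stated in the application.

```python
def resolve_restricted_data(hash_code, cache, compute, restricted_data_request):
    """Return restricted data for `hash_code`, computing it on a cache miss."""
    data = cache.get(hash_code)
    if data is not None:
        # Cache hit: assign the cached restricted data directly.
        return data
    # Cache miss: generate a set of computed restricted data via the
    # computation layer (here, the `compute` callable) based on the
    # restricted report execution request.
    computed = compute(restricted_data_request)
    # Write-back is an assumption; the application does not state whether
    # computed restricted data is stored in the caching layer afterwards.
    cache[hash_code] = computed
    return computed
```

With this shape, a second request bearing the same hash code is served from the cache without re-invoking the computation layer.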

Claims

We Claim:
1. A method to automatically assign a restricted data to a user, the method comprising: receiving, by a transceiver unit [302] at an Integrated Performance Management
(IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user, wherein the restricted data request is at least one of a restricted dashboard request and a restricted report execution request; transmitting, by the transceiver unit [302] from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request; receiving, by a processing unit [304] at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request; fetching, by the processing unit [304] at the IPM unit [100a] from a caching layer [506], a restricted data associated with the restricted data request, upon receiving the unique hash code; and automatically assigning, by the processing unit [304] from the IPM unit [100a] to the user via the load balancer in the network, the restricted data associated with the restricted data request.
2. The method as claimed in claim 1 further comprises generating, by the processing unit [304] via a computation layer [100d], a set of computed restricted data based on at least the restricted report execution request, wherein the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer [506].
3. The method as claimed in claim 2 further comprises automatically assigning, by the processing unit [304] from the IPM unit [100a] to the user via the load balancer [100k] in the network, the set of computed restricted data associated with the restricted data request.
4. The method as claimed in claim 1, wherein the unique hash code associated with the restricted data request is generated via the trained model, wherein the model is trained using a machine learning technique.
5. A system to automatically assign a restricted data to a user, the system comprises: a transceiver unit [302], wherein the transceiver unit [302] is configured to: receive, at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user, wherein the restricted data request is at least one of a restricted dashboard request and a restricted report execution request; transmit, from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request; a processing unit [304] connected to at least the transceiver unit, wherein the processing unit is configured to: receive, at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request; fetch, at the IPM unit [100a] from a caching layer [506], the restricted data associated with the restricted data request, upon reception of the unique hash code; and wherein the transceiver unit [302] is further configured to: automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the restricted data associated with the restricted data request.
6. The system as claimed in claim 5, wherein the processing unit [304] is further configured to generate, via a computation layer [100d], a set of computed restricted data based on at least the restricted report execution request, wherein the set of computed restricted data is generated in an event the restricted data associated with the restricted data request is not detected at the caching layer [506].
7. The system as claimed in claim 6, wherein the processing unit [304] is further configured to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the set of computed restricted data associated with the restricted data request.
8. The system as claimed in claim 5, wherein the unique hash code associated with the restricted data request is generated via the trained model, wherein the trained model is trained using a machine learning technique.
9. A non-transitory computer-readable storage medium, storing instructions to automatically assign a restricted data to a user, the instructions comprising executable code which, when executed by one or more units of a system, causes: a transceiver unit [302] of the system to: receive, at an Integrated Performance Management (IPM) unit [100a] from a load balancer [100k] in a network, a restricted data request associated with the user, wherein the restricted data request is at least one of a restricted dashboard request and a restricted report execution request; and transmit, from the IPM unit [100a] to a trained model in the network, a hash code request associated with the restricted data request; a processing unit [304] of the system to: receive, at the IPM unit [100a] from the trained model, a unique hash code based on the hash code request; and fetch, at the IPM unit [100a] from a caching layer [506], the restricted data associated with the restricted data request, upon reception of the unique hash code; and the transceiver unit [302] to automatically assign, from the IPM unit [100a] to the user via the load balancer [100k] in the network, the restricted data associated with the restricted data request.
PCT/IN2024/051516 2023-08-22 2024-08-20 Method and system to automatically assign restricted data to a user Pending WO2025041165A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321056267 2023-08-22
IN202321056267 2023-08-22

Publications (1)

Publication Number Publication Date
WO2025041165A1 true WO2025041165A1 (en) 2025-02-27

Family

ID=94731503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/051516 Pending WO2025041165A1 (en) 2023-08-22 2024-08-20 Method and system to automatically assign restricted data to a user

Country Status (1)

Country Link
WO (1) WO2025041165A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014096113A (en) * 2012-11-12 2014-05-22 Nippon Telegr & Teleph Corp <Ntt> Load balancer
US9083710B1 (en) * 2012-01-03 2015-07-14 Google Inc. Server load balancing using minimally disruptive hash tables
US10715589B2 (en) * 2014-10-17 2020-07-14 Huawei Technologies Co., Ltd. Data stream distribution method and apparatus
CN113810358A (en) * 2021-02-05 2021-12-17 京东科技控股股份有限公司 Access limiting method, device, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
US10747592B2 (en) Router management by an event stream processing cluster manager
US11416456B2 (en) Method, apparatus, and computer program product for data quality analysis
US20230039566A1 (en) Automated system and method for detection and remediation of anomalies in robotic process automation environment
EP2510653B1 (en) Cloud computing monitoring and management system
US10152361B2 (en) Event stream processing cluster manager
CN114363042B (en) Log analysis method, device, equipment and readable storage medium
CN114756301B (en) Log processing method, device and system
CN113312242B (en) Interface information management method, device, equipment and storage medium
CN115514618A (en) Alarm event processing method and device, electronic equipment and medium
WO2025017649A1 (en) Method and system for monitoring performance of network elements
WO2025046609A1 (en) METHOD AND SYSTEM FOR ANALYSIS OF KEY PERFORMANCE INDICATORS (KPIs)
CN119961231A (en) A dynamic log collection method and system
WO2025041165A1 (en) Method and system to automatically assign restricted data to a user
US12477336B2 (en) Collecting and managing access to management data in a telecommunications network
WO2025017640A1 (en) Method and system for real-time analysis of key performance indicators (kpis) deviations
WO2025017726A1 (en) Method and system for creating a network area
WO2025017645A1 (en) Method and system for performing real-time analysis of kpis to monitor performance of network
WO2025017729A1 (en) Method and system for an automatic root cause analysis of an anomaly in a network
WO2025022439A1 (en) Method and system for generation of interconneted dashboards
WO2025027653A1 (en) Method and system for automatically detecting a new network node associated with a network
WO2025017578A1 (en) Method and system of providing a unified data normalizer within a network performance management system
WO2025017579A1 (en) Method and system for unified data ingestion in a network performance management system
WO2025074407A1 (en) Method and system for counters and key performance indicators (kpis) policy management in a network
WO2025041159A1 (en) Method and system for generating and provisioning a key performance indicator (kpi)
WO2025041158A1 (en) Method and system for provisioning and configuring counters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24856034

Country of ref document: EP

Kind code of ref document: A1