
WO2025074411A1 - Method and system for forecasting events in a network - Google Patents


Info

Publication number
WO2025074411A1
Authority
WO
WIPO (PCT)
Prior art keywords
events
trained models
request
data
forecasting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051971
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Ankit Murarka
Jugal Kishore
Chandra GANVEER
Sanjana Chaudhary
Gourav Gurbani
Yogesh Kumar
Avinash Kushwaha
Dharmendra Kumar Vishwakarma
Sajal Soni
Niharika PATNAM
Shubham Ingle
Harsh Poddar
Sanket KUMTHEKAR
Mohit Bhanwria
Shashank Bhushan
Vinay Gayki
Aniket KHADE
Durgesh KUMAR
Zenith KUMAR
Gaurav Kumar
Manasvi Rajani
Kishan Sahu
Sunil Meena
Supriya Kaushik DE
Kumar Debashish
Mehul Tilala
Satish Narayan
Rahul Kumar
Harshita GARG
Kunal Telgote
Ralph LOBO
Girish DANGE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025074411A1 publication Critical patent/WO2025074411A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/147Network analysis or design for predicting network behaviour
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/50Business processes related to the communications industry
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence

Definitions

  • the present invention relates to the field of network data analytics for predictive network management, and more particularly to a system and a method for forecasting events in a network.
  • An advanced prediction system integrated with an AI/ML system excels in executing a wide array of algorithms and predictive tasks where the AI/ML models integrated into the network are trained using data sets refined based on standard parameters.
  • An advanced AI/ML integrated system is capable of data analysis and making predictions. There are trained models which are trained by users to make dedicated predictions or analyses. Some AI/ML integrated predictive systems implement existing trained models to predict and forecast required results by utilizing existing or new data sources.
  • the machine learning integrated predictive models are incorporated in a network for making predictions about future events based on the preprocessed data source. Inclusion of a new data source for inferencing by means of a trained model is complex and resource-intensive. Similarly, for new data sources, or for existing data sources with a new date range or a new time period range, ML training for the same trained model is time-consuming. The resource utilization in both cases is uneven and does not provide an optimal solution.
  • One or more embodiments of the present disclosure provide a method and system for forecasting events in a network.
  • the method for forecasting the events in the network includes the step of receiving a request from a user to forecast one or more events.
  • the method further includes the step of extracting, from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources.
  • the method includes the step of selecting, one or more trained models from the plurality of trained models pre-stored in a storage unit based on details of the one or more trained models from the request.
  • the method includes the step of forecasting utilizing the selected one or more trained models, the one or more events, based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.
  • the one or more existing data sources are data sources including data which is at least one of, partly or completely used for forecasting the one or more events as per the request.
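Taken together, the steps above (receive a request, extract the data-source information, select a pre-stored trained model, forecast) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `ForecastRequest` fields, the registry layout, and the toy averaging "model" are all assumptions introduced for illustration.

```python
from dataclasses import dataclass

# Hypothetical request structure; the field names are illustrative, not from the patent.
@dataclass
class ForecastRequest:
    model_name: str        # details of the trained model to use
    data_source: str       # name of the new or existing data source
    source_is_new: bool    # True -> new data source, False -> existing one

# Plurality of trained models pre-stored in a storage unit, keyed by request details.
# The toy "model" here is just an average over the supplied history.
TRAINED_MODELS = {
    ("traffic_avg", "cell_kpi_feed"): lambda history: sum(history) / len(history),
}

def forecast_events(request: ForecastRequest, history: list) -> float:
    """Select a pre-stored trained model from the request details and forecast."""
    model = TRAINED_MODELS.get((request.model_name, request.data_source))
    if model is None:
        raise LookupError("no trained model matches the request details")
    return model(history)  # forecast from the chosen (new or existing) source data

request = ForecastRequest("traffic_avg", "cell_kpi_feed", source_is_new=False)
print(forecast_events(request, history=[10, 20, 30]))  # 20.0
```

The lookup key and averaging model stand in for whatever model identification and inference the real system performs.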
  • the step of selecting, based on the extracted information, one or more trained models from a plurality of trained models pre-stored in a storage unit includes the step of selecting the corresponding one or more trained models from the storage unit based on at least one of, model name, data source name, test values, forecasted values, and performance indicators which include at least one of, accuracy and Root Mean Square Error (RMSE).
  • the step of forecasting, utilizing the selected one or more trained models, one or more events based on the data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request includes the steps of checking, based on the received request, one or more parameters, and forecasting, utilizing the selected one or more trained models, the one or more events based on learnt trends/patterns of historic data pertaining to the one or more parameters.
  • the one or more events are forecasted by the one or more processors, utilizing the selected one or more trained models for at least one of, a time range provided as per the request.
  • the system for forecasting the events in the network includes a transceiver configured to receive a request from a user to forecast one or more events.
  • the system further includes a processing unit configured to extract from the request information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources.
  • the system further includes a selecting unit configured to select one or more trained models from a plurality of trained models pre- stored in a storage unit based on details of the one or more trained models from the request.
  • the system further includes a forecasting engine configured to forecast utilizing the selected one or more trained models, the one or more events, based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.
  • a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed.
  • the computer-readable instructions are executed by a processor.
  • the processor is configured to receive, a request from a user to forecast one or more events.
  • the processor is further configured to extract from the request information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources.
  • the processor is further configured to select based on the extracted information, one or more trained models from a plurality of trained models pre-stored in a storage unit.
  • the processor is further configured to forecast utilizing the selected one or more trained models, the one or more events.
  • FIG. 2 is an exemplary block diagram of a system for forecasting the events in the network, according to one or more embodiments of the present invention
  • FIG. 4 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention
  • FIG. 6 is a schematic representation of a method for forecasting the events in the network, according to one or more embodiments of the present invention.
  • the network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
  • the network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit- switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
  • FIG. 2 is an exemplary block diagram of the system 120 for forecasting the events in the network 105, according to one or more embodiments of the present invention.
  • the UI 215 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like.
  • the UI 215 facilitates communication of the system 120.
  • the UI 215 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 220.
  • the database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth.
  • the processor 205 includes one or more modules.
  • the one or more modules/units includes, but not limited to, a transceiver unit 225, a processing unit 230, a storage unit 235, a selecting unit 240, and a forecasting engine 245 communicably coupled to each other for forecasting the events in the network 105.
  • the one or more new data sources or the one or more existing data sources include at least one of, a file, a source path, a data stream, a Hypertext Transfer Protocol (HTTP), a Distributed File System (DFS) and a Network Access Server (NAS).
  • the HTTP is the protocol for efficient data transfer over the internet.
  • the HTTP allows multiple streams of data to be sent concurrently over a single connection.
  • the HTTP improves the speed and performance of data retrieval, which can be leveraged to gather data from monitoring tools or external systems that track the applications and the services, such as, but not limited to, mobile edge computing (MEC) or virtualized network functions (VNFs), aiding in predicting service degradation or system bottlenecks.
  • the DFS is designed for handling vast amounts of the data across multiple nodes in the distributed network.
  • the infrastructure needs to handle the growing data demands from user devices, IoT systems, and ultra-reliable low-latency communication (URLLC).
  • the DFS with its scalable and fault-tolerant architecture 400 is well-suited for storing large datasets generated by the network functions like Core Network (CN), Radio Access Network (RAN), and applications like edge computing.
  • the examples of the DFS include, but are not limited to, data processing in DFS, data generation, visualization and reporting, data ingestion, and data storage.
  • the one or more existing data sources includes data which is at least one of, partly or completely used for forecasting the one or more events as per the request.
  • the user may intend to only use part of the data of the one or more existing data sources to forecast the one or more events, wherein the part of the data is based on a data range or conditions selected by user.
  • the user may intend to use the complete data of the one or more existing data sources to forecast the one or more events as per the request. Therefore, based on the request received from the user, the data from the one or more existing data sources may be selected partly or fully to forecast the one or more events, thereby advantageously saving processing time.
  • the one or more existing data sources may encompass the variety of datasets that provide valuable historical insights pertinent to the forecasting process.
  • the actual value represents real, observed data points that reflect the current state or performance of the entity being measured.
  • the test values are used to evaluate the trained models against known outcomes to assess the predictive accuracy.
  • the forecasted values represent predictions made by the trained models regarding future events based on the data and learned patterns.
  • the accuracy is the measure of how well the model's predictions align with the actual outcomes. The accuracy is often expressed as a percentage, indicating the ratio of correct predictions to total predictions.
  • the examples of accuracy include, but are not limited to, user traffic forecasting, signal quality prediction, network latency predictions, and user experience metrics.
  • the RMSE is a common statistical measure used to assess the differences between predicted values and actual values. The RMSE quantifies the average deviation of predictions from actual outcomes, with lower values indicating better model performance.
  • the examples of the RMSE include, but are not limited to, signal strength prediction, traffic forecasting, latency estimation, and user experience predictions.
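The two performance indicators defined above can be computed directly. A minimal plain-Python sketch; the latency figures are invented for illustration:

```python
import math

def accuracy(predicted, actual):
    """Percentage of predictions that exactly match the actual outcomes."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)

def rmse(predicted, actual):
    """Root Mean Square Error between predicted and actual values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# Toy latency forecasts (ms) against observed values -- invented for illustration.
pred = [10.0, 12.0, 11.0, 9.0]
obs = [10.0, 14.0, 11.0, 10.0]
print(rmse(pred, obs))                       # sqrt(5/4), roughly 1.118
print(accuracy([1, 0, 1, 1], [1, 1, 1, 1]))  # 75.0
```

Lower RMSE indicates better model performance, consistent with its use as a selection criterion above.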
  • the selecting unit 240 selects the one or more trained models, based on details of the one or more trained models provided by the user in the request, from the plurality of pre-stored models in the storage unit 235. Thereafter, the selecting unit 240 applies the selected one or more trained models to the data from the new data source for forecasting the one or more events.
  • the selecting unit 240 selects the one or more trained models from the storage unit 235 based on at least one of, model name, data source name.
  • the model’s name refers to the unique identifier or the label associated with the specific machine learning model that has been trained to forecast certain events.
  • Each trained model in the system 120 is saved with a distinct name that signifies the trained model's functionality, configuration, or the task it was trained for.
  • the data source name refers to the identifier or label for the source of data that will be used in the forecasting process.
  • the data sources may include different datasets such as, but not limited to, sales records, customer interaction logs, sensor data, or network traffic data, each labelled with the unique name.
  • the data sources are critical inputs for trained models and different models might be designed to work with several types of data sources, and the selecting unit 240 uses the data source name to ensure that the model selected is compatible with the relevant data source.
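A compatibility check of this kind might look like the following sketch. The registry contents, field names, and RMSE figures are hypothetical; selection here simply filters by data source name (and optionally model name), then prefers the lowest RMSE, one plausible reading of the performance indicators listed above:

```python
# Hypothetical registry of pre-stored trained models; all entries are illustrative.
MODEL_REGISTRY = [
    {"model_name": "latency_arima", "data_source_name": "network_traffic", "rmse": 2.1},
    {"model_name": "latency_lstm",  "data_source_name": "network_traffic", "rmse": 1.7},
    {"model_name": "churn_xgb",     "data_source_name": "customer_logs",   "rmse": 0.9},
]

def select_models(data_source_name, model_name=None):
    """Return models compatible with the data source, best (lowest RMSE) first."""
    candidates = [m for m in MODEL_REGISTRY
                  if m["data_source_name"] == data_source_name
                  and (model_name is None or m["model_name"] == model_name)]
    return sorted(candidates, key=lambda m: m["rmse"])

best = select_models("network_traffic")[0]
print(best["model_name"])  # latency_lstm, the lowest-RMSE model for that source
```

Filtering on the data source name ensures the selected model is compatible with the relevant data, as the bullet above describes.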
  • the forecasting engine 245 forecasts the one or more events utilizing the selected one or more trained models related to the data from the one or more data sources by checking the received request against the one or more parameters such as one or more hyperparameters.
  • the one or more hyperparameters may include, but are not limited to, hyperparameters configured for the model, which consist of statistical calculations such as, but not limited to, mean, mode, variance, trend, Autocorrelation Function (ACF), and Partial Autocorrelation Function (PACF).
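As a rough illustration, several of the statistical calculations named above (mean, variance, linear trend, and ACF) can be computed from a raw series as follows; mode and PACF are omitted for brevity (the PACF is usually derived from the ACF via the Yule-Walker equations). The function and series are illustrative, not the patented computation:

```python
def series_stats(x, max_lag=2):
    """Mean, variance, linear-trend slope, and sample ACF up to max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    # Linear trend: least-squares slope of x against the time index t = 0..n-1.
    t_mean = (n - 1) / 2
    slope = (sum((t - t_mean) * (v - mean) for t, v in enumerate(x))
             / sum((t - t_mean) ** 2 for t in range(n)))
    # Biased sample autocorrelation function (ACF) at lags 1..max_lag.
    c = [v - mean for v in x]
    acf = [sum(c[i] * c[i + k] for i in range(n - k)) / (n * var)
           for k in range(1, max_lag + 1)]
    return mean, var, slope, acf

mean, var, slope, acf = series_stats([1, 2, 3, 4, 5, 6])
print(slope)  # 1.0 for this linearly increasing series
```

Quantities like these summarize the learnt trends/patterns of the historic data that the forecasting engine checks against the request.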
  • the one or more events are forecasted utilizing the selected one or more trained models for at least one of, a time range provided as per the request.
  • the time range may be provided or customized, for example, as 7 am to 10 am on day 1 (a future date).
  • the user may intend to forecast the one or more events on days 3, 4, 5 (future dates), etc., without intending to forecast the one or more events on day 2.
  • the forecasting engine 245 may advantageously forecast the one or more events based on customization of the time or date range by the user via the request. As a result, the present invention may forecast the one or more events over time ranges that are not required to be continuous in nature.
  • the time range may include at least one of, a temporal range or temporal boundaries.
  • the temporal range refers to the specific period of time over which the existing data is selected or analysed.
  • the temporal range may be the range such as for example last 30 days, a particular year, or a specific time frame (e.g., between 1:00 PM and 3:00 PM).
  • the temporal boundaries refer to the start and end points of the temporal range.
  • the temporal range defines the time frame of the existing data to be used in forecasting.
  • the system 120 ensures that only the relevant data from that time period is considered in the forecasting process.
  • the temporal boundaries precisely define the limits of the time window for the data being used.
  • the temporal boundaries may be set from January 1st, 2022, to December 31st, 2022. The temporal boundaries help in restricting the existing data to the specific window, ensuring that only the existing data within the timeframe is used.
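Restricting the existing data to such a window is straightforward to sketch. The toy dataset below is invented for illustration, using the January 1st, 2022 to December 31st, 2022 boundaries from the example above:

```python
from datetime import datetime

# Toy existing data source: (timestamp, observed value) pairs -- illustrative only.
EXISTING_DATA = [
    (datetime(2022, 1, 15, 13, 30), 42.0),
    (datetime(2022, 6, 1, 9, 0), 55.0),
    (datetime(2023, 2, 10, 14, 0), 61.0),
]

def select_window(data, start, end):
    """Keep only points whose timestamps fall within the temporal boundaries."""
    return [(ts, v) for ts, v in data if start <= ts <= end]

window = select_window(EXISTING_DATA,
                       datetime(2022, 1, 1),
                       datetime(2022, 12, 31, 23, 59, 59))
print(len(window))  # 2 -- the 2023 point lies outside the boundaries
```

Only data inside the boundaries reaches the forecasting process, matching the restriction described above.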
  • FIG. 3 describes a preferred embodiment of the system 120 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration, and should in no way be construed as limiting the scope of the present disclosure.
  • each of the first UE 110a, the second UE 110b, and the third UE 110c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor.
  • the exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 110a without deviating from or limiting the scope of the present disclosure.
  • the first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120.
  • the one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 causes the first UE 110a to transmit the request to one or more processors, based on the user selecting or customizing the one or more parameters via the UI 215 of the UE 110, to forecast one or more events to the network 105.
  • the UI 215 provides the user with the flexibility to select and/or customize the one or more parameters that will guide the forecasting process.
  • the one or more parameters may be tailored to meet individual user needs and preferences, enabling more relevant and accurate predictions.
  • the one or more parameters includes at least one of, new data source, one or more existing data sources, time range, date range, at least part of or complete data of the one or more existing data sources which are at least one of, selected and/or customized by the user via the UI of the UE (110).
  • the user analysing network performance during peak hours may select and/or customize the one or more parameters such as, the time range, such as 6 PM to 9 PM, from the UI 215 to forecast the one or more events such as data usage and latency for that period.
  • the one or more processors 205 of the system 120 is configured to forecast the one or more events.
  • the system 120 includes the one or more processors 205, the memory 210, the UI 215, and the database 220.
  • the operations and functions of the one or more processors 205, the memory 210, the UI 215, and the database 220 are already explained in FIG. 2.
  • a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
  • the processor 205 includes the transceiver unit 225, the processing unit 230, the storage unit 235, the selecting unit 240, and the forecasting engine 245.
  • the operations and functions of the transceiver unit 225, the processing unit 230, the storage unit 235, the selecting unit 240, and the forecasting engine 245 are already explained in FIG. 2.
  • a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
  • the limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
  • FIG. 4 is an exemplary block diagram of an architecture 400 of the system 120 for forecasting events in the network 105, according to one or more embodiments of the present invention.
  • the architecture 400 includes a data source 405, a data integration 410, a data preprocessing 415, a model training 420, a prediction module 425, a graphic representation module 430, an interface module 435, a new data source 440, and the database 220.
  • the data source 405 serves as the foundational element of the forecasting method which includes both existing and the new data sources.
  • Existing data sources are datasets previously utilized to train machine learning models, while new data sources may include fresh inputs like files, data streams, and external databases.
  • the diverse nature of the data sources 405 enhances the system's ability to generate accurate and relevant forecasts for network events. By leveraging both historical and real-time data, the forecasting method may adapt to changing conditions and improve decision-making.
  • the data integration 410 consolidates data from various sources to create a unified dataset for analysis.
  • the data integration 410 process involves aggregating information from both new and existing data sources, ensuring compatibility and readiness for further processing. Effective data integration 410 is crucial for maximizing the quality of input data, which directly impacts the reliability of predictions. By establishing a comprehensive dataset, the data integration 410 lays the groundwork for a successful forecasting process.
  • the data pre-processing 415 is a critical step that involves cleaning, transforming, and organizing the integrated data into a format suitable for model training and prediction.
  • the model training and prediction tasks within the data pre-processing 415 may include, but are not limited to, removing duplicates, managing missing values, and normalizing data.
  • Proper data pre-processing 415 is vital for improving data quality, which enhances machine learning model performance.
  • the data pre-processing step ensures that the input data is free from noise and biases, contributing to more accurate forecasting results.
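A minimal sketch of the three pre-processing operations named above (removing duplicates, managing missing values, and normalizing) on a toy numeric column; the function name and data are illustrative assumptions:

```python
def preprocess(records):
    """Deduplicate, fill missing values with the mean, and min-max normalize."""
    # 1. Remove exact duplicates while preserving order.
    seen, deduped = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            deduped.append(r)
    # 2. Manage missing values (None) by substituting the mean of the rest.
    present = [v for v in deduped if v is not None]
    mean = sum(present) / len(present)
    filled = [mean if v is None else v for v in deduped]
    # 3. Normalize to the [0, 1] range (min-max scaling).
    lo, hi = min(filled), max(filled)
    return [(v - lo) / (hi - lo) for v in filled]

print(preprocess([10.0, 10.0, None, 20.0]))  # [0.0, 0.5, 1.0]
```

Real pipelines would apply these steps per feature column, but the sequence (clean, fill, normalize) mirrors the steps the bullets describe.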
  • the graphic representation module 430 visualizes the results of the forecasting process, generating graphs and charts that depict predicted outcomes alongside the data.
  • the graphic representation module 430 may include visual elements like trend lines, confidence intervals, and comparative metrics. Visualization is crucial for interpreting complex data and forecasting results, making it easier for users to understand findings and take informed actions based on the predictions.
  • the new data source 440 or the existing data source along with the one or more parameters are identified where the data input occurs.
  • the data may originate from the new source 440 or from an existing source with updated parameters, such as, but not limited to, the date, time, or any other variable required for the task at hand.
  • the one or more parameters allow flexibility in choosing the specific data scope for forecasting, enabling the ingestion of data for further processing.
  • the next phase is the data preprocessing 415.
  • the data preprocessing 415 involves organizing and refining the raw data, which might include cleaning any inconsistencies, normalizing the data to a consistent format, or selecting important features necessary for analysis.
  • the data preprocessing 415 ensures that the data is accurate and of high quality, making it suitable for forecasting tasks.
  • the pre-trained models are engaged.
  • the pre-trained models have already been trained using historical data and may now use the prepared data for making predictions or forecasting.
  • the models analyse the data, applying the patterns they learned during training. Depending on whether the user is using the new or existing data source, the appropriate model is selected to forecast the outcomes.
  • the method 600 includes the step of receiving a request from a user to forecast one or more events.
  • the method 600 includes the step of extracting, from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Game Theory and Decision Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a system (120) and a method (600) for forecasting events in a network (105). The method (600) includes the step of receiving a request from a user to forecast one or more events. The method (600) further includes the step of extracting from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources. The method (600) includes the step of selecting one or more trained models from a plurality of trained models pre-stored in a storage unit based on details of the one or more trained models from the request. The method (600) includes the step of forecasting, utilizing the selected one or more trained models, the one or more events based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.

Description

METHOD AND SYSTEM FOR FORECASTING EVENTS IN A NETWORK
FIELD OF THE INVENTION
[0001] The present invention relates to the field of network data analytics for predictive network management and more particularly relates to, a system and a method for forecasting events in a network.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service providers have been implementing upgrades to enhance service quality so as to keep pace with such high demand. To enhance user experience and implement advanced monitoring mechanisms, prediction methodologies are being incorporated into network management. An advanced prediction system integrated with an AI/ML system excels in executing a wide array of algorithms and predictive tasks, where the AI/ML models integrated into the network are trained using data sets refined based on standard parameters.
[0003] An advanced AI/ML-integrated system is capable of analysing data and making predictions. There are trained models which are trained by users to make dedicated predictions or analyses. Some AI/ML-integrated predictive systems implement existing trained models to predict and forecast required results by utilizing existing or new data sources.
[0004] Machine learning integrated predictive models are incorporated in a network for making predictions about future events based on a preprocessed data source. Inclusion of a new data source for inferencing by means of a trained model is complex and resource-intensive. Similarly, for new data sources, or for existing data sources with a new date range or a new time period range, ML training of the same training model is time consuming. The resource utilization in both cases is uneven and does not provide an optimal solution.
[0005] In view of the above, there is a need for a system and a method thereof to facilitate the analysis of data from various data sources and perform inference on them, thus providing flexibility to users.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for forecasting events in a network.
[0007] In one aspect of the present invention, the method for forecasting the events in the network is disclosed. The method includes the step of receiving a request from a user to forecast one or more events. The method further includes the step of extracting, from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources. The method includes the step of selecting one or more trained models from a plurality of trained models pre-stored in a storage unit based on details of the one or more trained models from the request. The method includes the step of forecasting, utilizing the selected one or more trained models, the one or more events based on data from at least one of, the one or more new data sources or the one or more existing data sources as per the extracted information from the request.
[0008] In an embodiment, the one or more new data sources or the one or more existing data sources include at least one of, a file, a source path, a data stream, data via Hypertext Transfer Protocol (HTTP), a Distributed File System (DFS) and a Network Access Server (NAS).
[0009] In an embodiment, the one or more existing data sources are data sources including data which is at least one of, partly or completely used for forecasting the one or more events as per the request.
[0010] In an embodiment, when the one or more events are required to be forecasted using the one or more existing data sources, the step of selecting, based on the extracted information, the one or more trained models from the plurality of trained models pre-stored in the storage unit includes the step of selecting the corresponding one or more trained models from the storage unit based on at least one of, model name, data source name, test values, forecasted values, and performance indicators which include at least one of, accuracy and Root Mean Square Error (RMSE).
[0011] In an embodiment, the step of forecasting, utilizing the selected one or more trained models, the one or more events based on the data from at least one of, the one or more new data sources or the one or more existing data sources as per the extracted information from the request includes the steps of checking, based on the received request, one or more parameters, and forecasting, utilizing the selected one or more trained models, the one or more events based on learnt trends/patterns of historic data pertaining to the one or more parameters.
[0012] In an embodiment, the one or more events are forecasted by the one or more processors, utilizing the selected one or more trained models, for at least a time range provided as per the request.
[0013] In another aspect of the present invention, the system for forecasting the events in the network is disclosed. The system includes a transceiver configured to receive a request from a user to forecast one or more events. The system further includes a processing unit configured to extract, from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources. The system further includes a selecting unit configured to select one or more trained models from a plurality of trained models pre-stored in a storage unit based on details of the one or more trained models from the request. The system further includes a forecasting engine configured to forecast, utilizing the selected one or more trained models, the one or more events based on data from at least one of, the one or more new data sources or the one or more existing data sources as per the extracted information from the request.
[0014] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive, a request from a user to forecast one or more events. The processor is further configured to extract from the request information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources. The processor is further configured to select based on the extracted information, one or more trained models from a plurality of trained models pre-stored in a storage unit. The processor is further configured to forecast utilizing the selected one or more trained models, the one or more events.
[0015] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors. The one or more primary processors are coupled with a memory. The one or more primary processors cause the UE to provide one or more parameters for the user on a User Interface (UI) of the UE. The one or more parameters are one of, selected or customized by the user, wherein the one or more parameters include selection or customization of at least one of, a new data source, one or more existing data sources, a time range, a date range, and at least part of or complete data of the one or more existing data sources. Further, the one or more primary processors cause the UE to transmit a request, based on the user selecting or customizing the one or more parameters, to the one or more processors to forecast one or more events.
[0016] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0018] FIG. 1 is an exemplary block diagram of an environment for forecasting events in a network, according to one or more embodiments of the present invention;
[0019] FIG. 2 is an exemplary block diagram of a system for forecasting the events in the network, according to one or more embodiments of the present invention;
[0020] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to the one or more embodiments of the present invention;
[0021] FIG. 4 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0022] FIG. 5 is a signal flow diagram for forecasting the events in the network, according to one or more embodiments of the present invention; and
[0023] FIG. 6 is a schematic representation of a method for forecasting the events in the network, according to one or more embodiments of the present invention.
[0024] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0025] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0026] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed here below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0027] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0028] FIG. 1 illustrates an exemplary block diagram of an environment 100 for forecasting events in a network, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 110, a server 115, a network 105, and a system 120 communicably coupled to each other for forecasting events in the network.
[0029] As per the illustrated embodiment, and for the purpose of description and illustration, the UE 110 includes, but is not limited to, a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c will hereinafter be collectively and individually referred to as the "User Equipment (UE) 110".
[0030] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0031] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
In an embodiment, an entity associated with the server 115 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides services.
[0032] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0033] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a Voice over Internet Protocol (VoIP) network, or some combination thereof.
[0034] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is configured for forecasting events in the network 105. As per one or more embodiments, the system 120 is adapted to be embedded within the server 115 or embedded as an individual entity.
[0035] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0036] FIG. 2 is an exemplary block diagram of the system 120 for forecasting the events in the network 105, according to one or more embodiments of the present invention.
[0037] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a User Interface (UI) 215, and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0038] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0039] In an embodiment, the UI 215 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 215 facilitates communication of the system 120. In one embodiment, the UI 215 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 220.
[0040] The database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 220 types are non-limiting and may not be mutually exclusive, e.g., a database can be both commercial and cloud-based, or both relational and open-source, etc.
[0041] In order for the system 120 to forecast the events in the network 105, the processor 205 includes one or more modules. In one embodiment, the one or more modules/units include, but are not limited to, a transceiver unit 225, a processing unit 230, a storage unit 235, a selecting unit 240, and a forecasting engine 245 communicably coupled to each other for forecasting the events in the network 105.
[0042] In one embodiment, the one or more modules can be used in combination or interchangeably for forecasting events in the network 105.
[0043] The transceiver unit 225, the processing unit 230, the storage unit 235, the selecting unit 240, and the forecasting engine 245, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0044] In an embodiment, the transceiver unit 225 is configured to receive a request from a user to forecast the one or more events. The request includes, but is not limited to, one or more parameters which are at least one of, selected or customized by the user via a User Interface (UI). The one or more parameters include at least one of, but not limited to, a new data source, one or more existing data sources, a time range, a date range, and at least part of or complete data of the one or more existing data sources to be used to forecast the one or more events.
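As an illustration of how such a request might be represented, the following is a minimal Python sketch; the field names (`data_source`, `use_new_source`, etc.) are assumptions for illustration only and do not reflect any specific claimed format.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ForecastRequest:
    """Hypothetical forecast-request payload; field names are illustrative only."""
    data_source: str                              # name or path of the data source
    use_new_source: bool                          # True if the source was never used in training
    model_name: Optional[str] = None              # direct selection of a pre-trained model
    time_range: Optional[Tuple[str, str]] = None  # e.g. ("07:00", "10:00")
    date_range: Optional[Tuple[str, str]] = None  # e.g. ("2022-01-01", "2022-12-31")
    use_partial_data: bool = False                # forecast from only part of an existing source

# A user asking to forecast peak-hour traffic from an existing source:
req = ForecastRequest(data_source="cell_kpi_feed", use_new_source=False,
                      model_name="traffic_model_v2", time_range=("07:00", "10:00"))
```

Such a structure carries everything the downstream units need: which source to read, whether it is new or existing, and which trained model (if any) the user named directly.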
[0045] The one or more new data sources or the one or more existing data sources include at least one of, a file, a source path, a data stream, a Hypertext Transfer Protocol (HTTP), a Distributed File System (DFS) and a Network Access Server (NAS). The system retrieves the data from the input files to analyse past network behaviour and identify trends or patterns that may help predict future events, such as, but not limited to, a potential service outage or network congestion during peak hours.
[0046] In an embodiment, the source path refers to accessing data stored at a specific file path or directory within the network 105. The system may automate the retrieval of new files from the source path as they are generated, ensuring that the forecasting models are continuously trained on the most recent data. The source path helps in predicting issues like handover failures between cells in the network 105.
[0047] In an embodiment, the data stream is a continuous flow of data that provides real-time or near-real-time network information. The streaming data is processed to detect immediate or short-term anomalies, such as a sudden spike in traffic that might indicate a denial-of-service attack.
[0048] Further, the HTTP is the protocol for efficient data transfer over the internet. The HTTP allows multiple streams of data to be sent concurrently over a single connection, improving the speed and performance of data retrieval. This can be leveraged to gather data from monitoring tools or external systems that track applications and services, such as, but not limited to, mobile edge computing (MEC) or virtualized network functions (VNFs), aiding in predicting service degradation or system bottlenecks.
[0049] In an embodiment, the DFS is designed for handling vast amounts of data across multiple nodes in a distributed network. The infrastructure needs to handle the growing data demands from user devices, IoT systems, and ultra-reliable low-latency communication (URLLC). The DFS, with its scalable and fault-tolerant architecture 400, is well-suited for storing large datasets generated by network functions like the Core Network (CN) and the Radio Access Network (RAN), and applications like edge computing. Examples of DFS usage include, but are not limited to, data processing in the DFS, data generation, visualization and reporting, data ingestion, and data storage.
[0050] In an embodiment, the NAS typically refers to the protocol layer responsible for managing the control and signalling between the UE 110 (such as smartphones and IoT devices) and the CN. The NAS layer handles key functionalities related to network access, mobility management, session management, and other important aspects of communication. Examples of NAS functionality include, but are not limited to, session management, paging in idle mode, the security mode command, and NAS handling of slice selection.
[0051] Upon receiving the request from the user to forecast the one or more events, the processing unit 230 is configured to extract, from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources.
[0052] The new data sources are data sources which have not been used to train the plurality of trained models pre-stored in the storage unit 235. The new data sources are data inputs that the system 120 has not previously encountered during the training phase of the machine learning models. The new data sources contain data which was not part of the dataset originally used to build or train the pre-existing trained models. Further, the system 120 already has a set of trained models stored in the storage unit 235. The models are trained on specific data sets and are available for use when the user requests a forecast or prediction.
[0053] In an embodiment, the one or more existing data sources include data which is at least one of, partly or completely used for forecasting the one or more events as per the request. In other words, as per the request, the user may intend to use only part of the data of the one or more existing data sources to forecast the one or more events, wherein the part of the data is based on a data range or conditions selected by the user. Further, the user may intend to use the complete data of the one or more existing data sources to forecast the one or more events as per the request. Therefore, based on the request received from the user, the data from the one or more existing data sources may be selected partly or fully to forecast the one or more events, thereby, advantageously, saving on processing time. The one or more existing data sources may encompass a variety of datasets that provide valuable historical insights pertinent to the forecasting process.
[0054] Thereafter the selecting unit 240 is configured to select the one or more trained models from the plurality of trained models pre-stored in the storage unit 235 based on details of the one or more trained models provided in the request by the user. The details of the trained models may be provided directly in the request such as, by a model name. Alternatively, the details of the trained models may be provided indirectly as a data source name, test values, actual values, forecasted values, and performance indicators which include at least one of, accuracy and Root Mean Square Error (RMSE). When the details of the trained models are provided indirectly as per the request, the selecting unit 240 is configured to select the one or more trained models which are trained based on the details provided indirectly in the request. By selecting the pre-trained models stored in the storage unit 235, advantageously there is no requirement for training a model afresh, hence again saving on processing time.
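The selection logic described above can be sketched as follows. The registry contents, model names, and metric fields are hypothetical, and ranking indirect matches by RMSE is one plausible way to resolve a request that names a data source rather than a model.

```python
# Hypothetical registry of pre-stored trained models (standing in for storage unit 235).
TRAINED_MODELS = [
    {"model_name": "traffic_arima",    "data_source": "cell_kpi_feed", "accuracy": 0.91, "rmse": 4.2},
    {"model_name": "traffic_model_v2", "data_source": "cell_kpi_feed", "accuracy": 0.95, "rmse": 2.8},
    {"model_name": "latency_gbm",      "data_source": "latency_log",   "accuracy": 0.88, "rmse": 6.1},
]

def select_models(model_name=None, data_source=None):
    """Select trained models directly by name, or indirectly by data source,
    ranking indirect matches by RMSE (lower is better)."""
    if model_name is not None:  # details provided directly in the request
        return [m for m in TRAINED_MODELS if m["model_name"] == model_name]
    # details provided indirectly: match on data source, rank by performance
    candidates = [m for m in TRAINED_MODELS if m["data_source"] == data_source]
    return sorted(candidates, key=lambda m: m["rmse"])
```

Because the models are already trained, resolving a request reduces to a lookup over stored metadata, which is what avoids retraining a model afresh.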
[0055] In an embodiment, the actual values represent real, observed data points that reflect the current state or performance of the entity being measured. The test values are used to evaluate the trained models against known outcomes to assess their predictive accuracy. The forecasted values represent predictions made by the trained models regarding future events based on the data and learned patterns.
[0056] In an embodiment, the accuracy is a measure of how well the model's predictions align with the actual outcomes. The accuracy is often expressed as a percentage, indicating the ratio of correct predictions to total predictions. Examples of accuracy use cases include, but are not limited to, user traffic forecasting, signal quality prediction, network latency prediction, and user experience metrics. The RMSE is a common statistical measure used to assess the differences between predicted values and actual values. The RMSE quantifies the average deviation of predictions from actual outcomes, with lower values indicating better model performance. Examples of RMSE use cases include, but are not limited to, signal strength prediction, traffic forecasting, latency estimation, and user experience predictions.
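The two performance indicators above can be computed as in the following sketch. The tolerance-based definition of accuracy is an assumption made for illustration, since "accuracy" for regression-style forecasts can be defined in several ways.

```python
import math

def rmse(actual, forecast):
    """Root Mean Square Error: average deviation of forecasted from actual values."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def accuracy(actual, forecast, tolerance=0.1):
    """Share of forecasts falling within a relative tolerance of the actual value
    (an assumed definition, used here only for illustration)."""
    hits = sum(1 for a, f in zip(actual, forecast) if abs(f - a) <= tolerance * abs(a))
    return hits / len(actual)
```

A model whose forecasts match the actual values exactly yields an RMSE of zero; larger deviations raise the RMSE and lower the accuracy.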
[0057] Further, when the one or more events are required to be forecasted using the one or more new data sources, the selecting unit 240 selects the one or more trained models, based on the details of the one or more trained models provided by the user in the request, from the plurality of pre-stored models in the storage unit 235. Thereafter, the selecting unit 240 applies the selected one or more trained models to the data from the new data source for forecasting the one or more events.
[0058] When the one or more events are required to be forecasted using the one or more existing data sources, the selecting unit 240 selects the one or more trained models from the storage unit 235 based on at least one of, model name, data source name.
[0059] In an embodiment, the model name refers to the unique identifier or label associated with the specific machine learning model that has been trained to forecast certain events. Each trained model in the system 120 is saved with a distinct name that signifies the trained model's functionality, configuration, or the task it was trained for. The data source name refers to the identifier or label for the source of data that will be used in the forecasting process. The data sources may include different datasets such as, but not limited to, sales records, customer interaction logs, sensor data, or network traffic data, each labelled with a unique name. The data sources are critical inputs for the trained models, and since different models might be designed to work with several types of data sources, the selecting unit 240 uses the data source name to ensure that the model selected is compatible with the relevant data source.
[0060] Thereafter, the forecasting engine 245 forecasts the one or more events utilizing the selected one or more trained models related to the data from the one or more data sources by checking the received request against the one or more parameters such as one or more hyperparameters. The one or more hyperparameters may include, but are not limited to, hyperparameters configured for the model, which consist of statistical calculations such as, but not limited to, mean, mode, variance, trend, Autocorrelation Function (ACF), and Partial Autocorrelation Function (PACF).
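A plain-Python sketch of some of these statistical checks is given below. A real deployment would likely rely on a statistics library, and the sample-ACF formula used here is the standard one rather than anything specific to the claimed system; the example series is hypothetical.

```python
from statistics import mean, variance

def acf(series, lag):
    """Sample autocorrelation of a time series at the given lag."""
    n = len(series)
    mu = mean(series)
    denom = sum((x - mu) ** 2 for x in series)
    num = sum((series[t] - mu) * (series[t + lag] - mu) for t in range(n - lag))
    return num / denom

traffic = [10.0, 12.0, 14.0, 16.0, 18.0]  # a steadily trending (hypothetical) KPI series
stats = {"mean": mean(traffic), "variance": variance(traffic), "acf_lag1": acf(traffic, 1)}
```

Statistics such as these summarize the trend and persistence of the historic data, which is what allows a trained model's configuration to be checked against the incoming request.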
[0061] In an embodiment, the one or more events are forecasted utilizing the selected one or more trained models for at least a time range provided as per the request. For example, the time range may be provided or customized as, say, 7 am to 10 am on day 1 (a future date). Further, the user may intend to forecast the one or more events on days 3, 4, 5 (future dates), etc., without intending to forecast the one or more events on day 2. Even in this situation, where the future dates are not continuous in nature, the forecasting engine 245 may advantageously forecast the one or more events based on customization of the time or date range by the user via the request. Due to this, the present invention may forecast the one or more events independently of the dates being continuous in nature.
[0062] Further, the time range may include at least one of, a temporal range or temporal boundaries. The temporal range refers to the specific period of time over which the existing data is selected or analysed. For example, the temporal range may be a range such as the last 30 days, a particular year, or a specific time frame (e.g., between 1:00 PM and 3:00 PM). The temporal boundaries refer to the start and end points of the temporal range. The temporal range defines the time frame of the existing data to be used in forecasting. By selecting a specific range, the system 120 ensures that only the relevant data from that time period is considered in the forecasting process. Similarly, the temporal boundaries precisely define the limits of the time window for the data being used. For example, the temporal boundaries may be set from January 1st, 2022, to December 31st, 2022. The temporal boundaries help in restricting the existing data to the specific window, ensuring that only the existing data within that timeframe is used.
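Restricting existing data to such temporal boundaries might be sketched as follows; the record layout (`ts` timestamps and `kpi` values) is hypothetical.

```python
from datetime import datetime

def within_boundaries(records, start, end):
    """Keep only records whose timestamp falls inside the temporal boundaries."""
    s, e = datetime.fromisoformat(start), datetime.fromisoformat(end)
    return [r for r in records if s <= datetime.fromisoformat(r["ts"]) <= e]

records = [
    {"ts": "2021-12-31T23:00:00", "kpi": 7},  # before the window
    {"ts": "2022-06-15T13:30:00", "kpi": 9},  # inside the window
    {"ts": "2023-01-01T00:00:00", "kpi": 4},  # after the window
]
selected = within_boundaries(records, "2022-01-01T00:00:00", "2022-12-31T23:59:59")
```

Only the record inside the 2022 window survives the filter, so the forecasting step operates on exactly the slice of existing data the user's boundaries describe.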
[0063] FIG. 3 describes a preferred embodiment of the system 120 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0064] As mentioned earlier in FIG. 1, each of the first UE 110a, the second UE 110b, and the third UE 110c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 110a without limiting the scope of the present disclosure. The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120.
[0065] The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 causes the first UE 110a to transmit, to the network 105, the request to forecast one or more events, based on the user selecting or customizing the one or more parameters via the UI 215 of the UE 110.
[0066] In an embodiment, the UI 215 provides the user with the flexibility to select and/or customize the one or more parameters that will guide the forecasting process. The one or more parameters may be tailored to meet individual user needs and preferences, enabling more relevant and accurate predictions. The one or more parameters include at least one of, a new data source, one or more existing data sources, a time range, a date range, and at least part of or complete data of the one or more existing data sources, which are at least one of, selected and/or customized by the user via the UI 215 of the UE 110.
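One way to picture the request assembled from these parameters is as a structured payload. All field names below are hypothetical — the disclosure does not define a wire format — but they show how a request can carry the data-source choice, the events, and a non-contiguous date range at once.

```python
# Hypothetical request payload; field names are illustrative, not mandated by the disclosure.
request = {
    "data_source": {"type": "existing", "name": "cell_kpi_stream"},
    "events": ["data_usage", "latency"],
    "time_range": {"start": "18:00", "end": "21:00"},
    "date_range": ["2024-01-03", "2024-01-05"],  # non-contiguous future dates are allowed
}

def extract_source_kind(req):
    """Return whether forecasting should use a new or an existing data source."""
    return req["data_source"]["type"]

print(extract_source_kind(request))  # → existing
```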
[0067] In one exemplary embodiment, the user analysing network performance during peak hours may select and/or customize the one or more parameters, such as the time range, for example 6 PM to 9 PM, from the UI 215 to forecast the one or more events such as data usage and latency for that period.
[0068] As mentioned earlier in FIG. 2, the one or more processors 205 of the system 120 are configured to forecast the one or more events. As per the illustrated embodiment, the system 120 includes the one or more processors 205, the memory 210, the UI 215, and the database 220. The operations and functions of the one or more processors 205, the memory 210, the UI 215, and the database 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0069] Further, the processor 205 includes the transceiver unit 225, the processing unit 230, the storage unit 235, the selecting unit 240, and the forecasting engine 245. The operations and functions of the transceiver unit 225, the processing unit 230, the storage unit 235, the selecting unit 240, and the forecasting engine 245 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3 should be read with the description provided for the system 120 in FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0070] FIG. 4 is an exemplary block diagram of an architecture 400 of the system 120 for forecasting events in the network 105, according to one or more embodiments of the present invention.
[0071] The architecture 400 includes a data source 405, a data integration 410, a data preprocessing 415, a model training 420, a prediction module 425, a graphic representation module 430, an interface module 435, a new data source 440, and the database 220.
[0072] The data source 405 serves as the foundational element of the forecasting method which includes both existing and the new data sources. Existing data sources are datasets previously utilized to train machine learning models, while new data sources may include fresh inputs like files, data streams, and external databases. The diverse nature of the data sources 405 enhances the system's ability to generate accurate and relevant forecasts for network events. By leveraging both historical and real-time data, the forecasting method may adapt to changing conditions and improve decision-making.
[0073] The data integration 410 consolidates data from various sources to create the unified dataset for analysis. The data integration 410 process involves aggregating information from both new and existing data sources, ensuring compatibility and readiness for further processing. Effective data integration 410 is crucial for maximizing the quality of input data, which directly impacts the reliability of predictions. By establishing the comprehensive dataset, the data integration 410 lays the groundwork for the successful forecasting process.
[0074] The data pre-processing 415 is the critical step that involves cleaning, transforming, and organizing the integrated data into the format suitable for model training and prediction. The tasks within the data pre-processing 415 may include, but are not limited to, removing duplicates, managing missing values, and normalizing data. Proper data pre-processing 415 is vital for improving data quality, which enhances machine learning model performance. The data pre-processing step ensures that the input data is free from noise and biases, contributing to more accurate forecasting results.
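The three named tasks — removing duplicates, managing missing values, and normalizing — can be sketched in a few lines. This is one possible treatment (mean imputation, min-max scaling); the disclosure does not commit to any particular technique.

```python
def preprocess(rows):
    """Deduplicate, impute missing values with the column mean, and min-max normalize."""
    # Remove exact duplicates while preserving order.
    seen, unique = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            unique.append(r)
    # Impute None with the mean of the observed values.
    observed = [v for v in unique if v is not None]
    col_mean = sum(observed) / len(observed)
    filled = [col_mean if v is None else v for v in unique]
    # Min-max normalize to [0, 1].
    lo, hi = min(filled), max(filled)
    return [(v - lo) / (hi - lo) for v in filled]

print(preprocess((10, 20, 20, None, 30)))  # → [0.0, 0.5, 0.5, 1.0]
```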
[0075] The model training 420 is responsible for training machine learning models using the data obtained during the data pre-processing 415 phase. However, in the present invention, the model training 420 may not be used as the models are pre-trained.
[0076] The prediction module 425 utilizes the pre-trained models to forecast future events based on current data inputs. The prediction module 425 applies the selected models to new data sources or the one or more existing data sources to generate predictions about the specified events. As the core of the forecasting method, the quality of predictions produced by prediction module 425 significantly influences decision-making processes within the network 105, providing actionable insights derived from the models' learned capabilities.
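A pre-trained model, as used by the prediction module, can be pictured as an object whose parameters were learned earlier and are only applied at forecast time. The level-plus-trend form below is a stand-in for whichever model family is actually stored; the class and field names are illustrative.

```python
class PretrainedTrendModel:
    """Stand-in for a pre-trained model: a level and a per-step trend learned offline."""

    def __init__(self, level, trend):
        self.level = level  # learned from historical data
        self.trend = trend  # learned per-step increment

    def forecast(self, steps):
        """Apply the learned parameters to produce the next `steps` values."""
        return [self.level + self.trend * (i + 1) for i in range(steps)]

model = PretrainedTrendModel(level=100.0, trend=2.5)
print(model.forecast(3))  # → [102.5, 105.0, 107.5]
```

No training happens here — consistent with paragraph [0075], the model arrives with its parameters already fixed.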
[0077] The graphic representation module 430 visualizes the results of the forecasting process, generating graphs and charts that depict predicted outcomes alongside the data. The graphic representation module 430 may include visual elements like trend lines, confidence intervals, and comparative metrics. Visualization is crucial for interpreting complex data and forecasting results, making it easier for users to understand findings and take informed actions based on the predictions.
[0078] The interface module 435 facilitates user interactions with the forecasting system 120. The interface module 435 allows users to submit requests for predictions, select models, and define parameters, such as date or time ranges, for the forecasts. The user-friendly interface module 435 is vital for ensuring effective engagement with the system, enabling users to customize their forecasting needs and access valuable insights generated by the predictive models.
[0079] The new data source 440 represents additional data inputs that may be integrated into the forecasting process. Incorporating the new data source 440 continuously ensures that forecasting models remain relevant and adaptable to evolving network conditions. The new data source 440 flexibility is crucial for maintaining the accuracy and effectiveness of predictions in dynamic environments.
[0080] The database 220 acts as the central repository for storing trained models data, user requests, and performance metrics. The database 220 enables efficient access and retrieval of information necessary for the forecasting process. The well-structured database 220 is essential for managing large volumes of data, supporting the system's longevity and scalability. The database 220 also ensures data integrity, facilitating smooth operations across the various components of the forecasting method.
[0081] FIG. 5 is a signal flow diagram for forecasting events in the network 105, according to one or more embodiments of the present invention.
[0082] At step 505, the new data source 440 or the existing data source, along with the one or more parameters, is identified where the data input occurs. The data may originate from the new data source 440 or an existing source with updated one or more parameters, such as, but not limited to, the date, time, or any other variable required for the task at hand. The one or more parameters allow flexibility in choosing the specific data scope for forecasting, enabling the ingestion of data for further processing.
[0083] At step 510, once the data is received, the next phase is the data preprocessing 415. The data preprocessing 415 involves organizing and refining the raw data, which might include cleaning any inconsistencies, normalizing the data to a consistent format, or selecting important features necessary for analysis. The data preprocessing 415 ensures that the data is accurate and of high quality, making it suitable for forecasting tasks.
[0084] At step 515, upon preprocessing the data, the pre-trained models are engaged. The pre-trained models have already been trained using historical data and may now use the prepared data for making predictions or forecasting. The models analyse the data, applying the patterns they learned during training. Depending on whether the user is using the new or existing data source, the appropriate model is selected to forecast the outcomes.
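The model-selection decision at step 515 — by name for an existing source, a default otherwise — can be sketched as a small dispatch over a model registry. The registry contents and key names are assumptions for illustration only.

```python
def select_model(registry, request):
    """Pick a pre-trained model: by name for existing sources, a default for new ones."""
    if request["source_kind"] == "existing":
        return registry[request["model_name"]]
    return registry["default"]

# Hypothetical registry of pre-trained models keyed by name.
registry = {"arima_cell_kpi": "ARIMA(cell_kpi)", "default": "baseline"}

print(select_model(registry, {"source_kind": "existing", "model_name": "arima_cell_kpi"}))
print(select_model(registry, {"source_kind": "new"}))
```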
[0085] At step 520, once the pre-trained models are set, inference is performed on the selected range of the dataset. The inference on the selected range involves making predictions or forecasts within the defined range, such as, but not limited to, the specific time window or another parameter range chosen by the user. This stage leverages the learned knowledge of the model to make accurate predictions for future events or trends.
[0086] At step 525, after inference is performed, the model output is evaluated using key performance metrics, including the RMSE and accuracy. The RMSE measures the difference between predicted and actual values, while accuracy indicates how well the model has performed in relation to the expected outcomes. The model metrics help validate the reliability and precision of the predictions.
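RMSE has a standard definition; "accuracy" for a regression-style forecast does not, so the tolerance-based variant below is just one plausible reading of the metric named in the text.

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error between actual and predicted series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def accuracy(actual, predicted, tolerance=0.1):
    """Share of predictions within a relative tolerance of the actual value
    (one possible definition; the disclosure does not fix one)."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) <= tolerance * abs(a))
    return hits / len(actual)

actual = [100, 110, 120, 130]
predicted = [102, 108, 125, 128]
print(round(rmse(actual, predicted), 3))  # → 3.041
print(accuracy(actual, predicted))        # → 1.0 (all within 10%)
```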
[0087] At step 530, the final stage involves visualizing the results. Both predicted values and actual data are represented graphically, allowing the user to compare them easily. The visualization is crucial for interpreting the effectiveness of the forecasting, often presented through charts, graphs, or other visual aids, offering the clear and actionable view of the forecasted versus real-world data.
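The side-by-side comparison of predicted and actual values at step 530 can be approximated, short of an actual chart, by a plain-text table; a production system would hand the same rows to a charting library instead.

```python
def render_comparison(timestamps, actual, predicted):
    """Plain-text stand-in for the graphical view: one row per timestamp."""
    lines = [f"{'time':<8}{'actual':>8}{'predicted':>11}"]
    for t, a, p in zip(timestamps, actual, predicted):
        lines.append(f"{t:<8}{a:>8}{p:>11}")
    return "\n".join(lines)

out = render_comparison(["18:00", "19:00"], [100, 110], [102, 108])
print(out)
```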
[0088] FIG. 6 is a flow diagram of a method 600 for forecasting events in the network 105, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0089] At step 605, the method 600 includes the step of receiving a request from a user to forecast one or more events.
[0090] At step 610, the method 600 includes the step of extracting, from the request, information related to at least one of, whether the one or more events are required to be forecasted using a new data source or whether the one or more events are required to be forecasted using one or more existing data sources.
[0091] At step 615, the method 600 includes the step of selecting the one or more trained models from the plurality of trained models pre-stored in a storage unit based on details of the one or more trained models from the request.
[0092] At step 620, the method 600 includes the step of forecasting utilizing the selected one or more trained models, the one or more events based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.
[0093] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive, a request from a user to forecast one or more events. The processor 205 is further configured to extract from the request information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources. The processor 205 is further configured to select based on the extracted information, one or more trained models from a plurality of trained models pre-stored in a storage unit. The processor 205 is further configured to forecast utilizing the selected one or more trained models, the one or more events based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.
[0094] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0095] The present disclosure includes technical advancements in forecasting in networks by enabling dynamic selection of data sources and pre-trained models based on user input, facilitating more accurate and context- specific predictions. By allowing forecasting on both new and existing data sources, it offers flexibility in addressing real-time and historical trends. Additionally, the seamless integration with various data sources and formats improves adaptability across different network environments.
[0096] The present invention offers multiple advantages by supporting both new and existing data sources, enabling users to perform forecasts using real-time or historical data, which enhances its flexibility. The efficient selection of pre-trained models based on the data type ensures relevant and accurate predictions. Additionally, the use of performance metrics like RMSE and accuracy ensures that forecasted results are reliable. The user-friendly graphical visualization of predicted and actual values simplifies interpretation for users. Lastly, the invention’s compatibility with various data formats such as HTTP, DFS, and NAS allows for seamless integration into diverse network environments.
[0097] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0098] Environment- 100
[0099] User Equipment (UE)- 110
[00100] Server- 115
[00101] Network- 105
[00102] System -120
[00103] Processor- 205
[00104] Memory- 210
[00105] User interface- 215
[00106] Database - 220
[00107] Transceiver unit - 225
[00108] Processing unit - 230
[00109] Storage unit - 235
[00110] Selecting unit - 240
[00111] Forecasting engine - 245
[00112] Data source - 405
[00113] Data integration - 410
[00114] Data pre-processing - 415
[00115] Model training - 420
[00116] Prediction module - 425
[00117] Graphic representation module - 430
[00118] Interface module - 435
[00119] New data source - 440

Claims

We Claim:
1. A method (600) for forecasting events in a network, the method (600) comprising the steps of: receiving, by one or more processors (205), a request from a user to forecast one or more events; extracting, by the one or more processors (205), from the request, information related to at least one of, whether the one or more events are required to be forecasted using new data source or whether the one or more events are required to be forecasted using one or more existing data sources; selecting, by the one or more processors, one or more trained models from a plurality of trained models pre-stored in a storage unit based on details of the one or more trained models from the request; and forecasting, by the one or more processors (205), utilizing the selected one or more trained models, the one or more events, based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.
2. The method (600) as claimed in claim 1, wherein the new data source or the one or more existing data sources include at least one of, a file, a source path, a data stream, data via Hypertext Transfer Protocol (HTTP), Distributed File System (DFS) and Network Access Server (NAS).
3. The method (600) as claimed in claim 1, wherein the one or more existing data sources includes data which is at least one of, partly or completely used for forecasting the one or more events as per the request.
4. The method (600) as claimed in claim 1, wherein when the one or more events are required to be forecasted using one or more existing data sources, the step of, selecting, based on the extracted information, one or more trained models from a plurality of trained models pre-stored in a storage unit, includes the step of: selecting, by the one or more processors (205), the corresponding one or more trained models from the storage unit based on at least one of, model name, data source name, test values, forecasted values, and performance indicators which include at least one of, accuracy and Root Mean Square Error (RMSE).
5. The method (600) as claimed in claim 1, wherein the step of, forecasting, utilizing the selected one or more trained models, one or more events based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request, includes the steps of: checking, by the one or more processors (205), based on the received request, one or more parameters; forecasting, by the one or more processors (205), utilizing the selected one or more trained models, the one or more events based on learnt trends/patterns of historic data pertaining to the one or more parameters.
6. The method (600) as claimed in claim 1, wherein the one or more events are forecasted by the one or more processors, utilizing the selected one or more trained models for at least one of, a time range provided as per the request.
7. A system (120) for forecasting events in a network, the system (120) comprising: a transceiver unit (225), configured to, receive, a request from a user to forecast one or more events; a processing unit (230), configured to, extract, from the request, information related to at least one of, whether the one or more events are required to be forecasted using new data source or whether the one or more events are required to be forecasted using one or more existing data sources; a selecting unit (240), configured to, select, one or more trained models from a plurality of trained models pre-stored in a storage unit (235) based on details of the one or more trained models from the request; and a forecasting engine (245), configured to, forecast, utilizing the selected one or more trained models, the one or more events, based on data from at least one of, the new data source or the one or more existing data sources as per the extracted information from the request.
8. The system (120) as claimed in claim 7, wherein the one or more new data sources or the one or more existing data sources include at least one of, a file, a source path, a data stream, data via Hypertext Transfer Protocol (HTTP), Distributed File System (DFS) and Network Access Server (NAS).
9. The system (120) as claimed in claim 7, wherein the one or more existing data sources includes data which is at least one of, partly or completely used for forecasting the one or more events as per the request.
10. The system (120) as claimed in claim 7, wherein when the one or more events are required to be forecasted using one or more existing data sources, the selecting unit (240) selects, the one or more trained models by: selecting, the corresponding one or more trained models from the storage unit (235) based on at least one of, model name, data source name, test values, forecasted values, and performance indicators which include at least one of, accuracy and Root Mean Square Error (RMSE).
11. The system (120) as claimed in claim 7, wherein the forecasting engine forecasts, utilizing the selected one or more trained models, one or more events, by: checking, based on the received request, one or more parameters; and forecasting, utilizing the selected one or more trained models, the one or more events based on learnt trends/patterns of historic data pertaining to the one or more parameters.
12. The system (120) as claimed in claim 7, wherein the one or more events are forecasted by utilizing the selected one or more trained models for at least one of, a time range provided as per the request.
13. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor (205), causes the processor (205) to: receive, a request from a user to forecast one or more events; extract, from the request, information related to at least one of, whether the one or more events are required to be forecasted using one or more new data sources or whether the one or more events are required to be forecasted using one or more existing data sources; select, based on the extracted information, one or more trained models from a plurality of trained models pre-stored in a storage unit (235); and forecast, utilizing the selected one or more trained models, the one or more events.
14. A User Equipment (UE) (110), comprising: one or more primary processors (305), communicatively coupled to one or more processors (205) in a network (105), wherein the one or more primary processors (305) are coupled with a memory (310) storing instructions which, when executed by the one or more primary processors (305), cause the UE (110) to: provide, one or more parameters for the user on a User Interface (UI) of the UE, the one or more parameters are one of, selected or customized by the user, wherein the one or more parameters includes selection or customization of at least one of, a new data source, one or more existing data sources, a time range, a date range, at least part of or complete data of the one or more existing data sources; and transmit, a request to the one or more processors (205) based on the user selecting or customizing the one or more parameters via the UI of the UE, to forecast one or more events, wherein the one or more processors (205) is configured to perform the steps of claim 1.
PCT/IN2024/051971 2023-10-06 2024-10-06 Method and system for forecasting events in a network Pending WO2025074411A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321067271 2023-10-06
IN202321067271 2023-10-06

Publications (1)

Publication Number Publication Date
WO2025074411A1

Family

ID=95284275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/051971 Pending WO2025074411A1 (en) 2023-10-06 2024-10-06 Method and system for forecasting events in a network

Country Status (1)

Country Link
WO (1) WO2025074411A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080255760A1 (en) * 2007-04-16 2008-10-16 Honeywell International, Inc. Forecasting system
US20110099136A1 (en) * 2009-10-23 2011-04-28 Gm Global Technology Operations, Inc. Method and system for concurrent event forecasting
US20180255470A1 (en) * 2015-11-05 2018-09-06 Huawei Technologies Co., Ltd. Network Event Prediction Method and Apparatus and Method and Apparatus for Establishing Network-Event Prediction Model
Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24874235

Country of ref document: EP

Kind code of ref document: A1