
WO2025079092A1 - Method and system for predicting performance trends of one or more network functions - Google Patents


Info

Publication number
WO2025079092A1
Authority
WO
WIPO (PCT)
Prior art keywords
network functions
performance
model
performance data
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/052031
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Ankit Murarka
Jugal Kishore
Chandra GANVEER
Sanjana Chaudhary
Gourav Gurbani
Yogesh Kumar
Avinash Kushwaha
Dharmendra Kumar Vishwakarma
Sajal Soni
Niharika PATNAM
Shubham Ingle
Harsh Poddar
Sanket KUMTHEKAR
Mohit Bhanwria
Shashank Bhushan
Vinay Gayki
Aniket KHADE
Durgesh KUMAR
Zenith KUMAR
Gaurav Kumar
Manasvi Rajani
Kishan Sahu
Sunil Meena
Supriya KAUSHIK DE
Kumar Debashish
Mehul Tilala
Satish Narayan
Rahul Kumar
Harshita GARG
Kunal Telgote
Ralph LOBO
Girish DANGE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd
Publication of WO2025079092A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/147Network analysis or design for predicting network behaviour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0806Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0823Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]

Definitions

  • the present invention relates to the field of network management and maintenance and, more specifically, to a system and a method for predicting performance metrics for network functions in a network.
  • Network functions play a vital role in improving the quality of a network by way of managing traffic, delegating node allocation, managing the performance of routing devices (e.g., routers), etc.
  • a network function is associated with micro-services executing several tasks (e.g., resource allocation task, session handling task, handover task or the like) in parallel.
  • the data generated by the network services is vast, and analysis of such data is essential for enhancing user experience and improving service quality.
  • however, working with network functions (NFs) and their performance data raises several problems, such as irregular performance metrics with drastically varying measurement values and parameters, which in turn give rise to inefficient resource allocation, poor user experience due to sudden variations, service disruptions, etc.
  • Performance metrics can vary significantly from one day to the next, making it challenging to predict future trends accurately. For example, changes in certain parameters within a range of +/- 1% of the standard value are acceptable, as these may result from sudden increases in users or from weather conditions. However, if the value deviates from the normal prediction, such as twice the nominal value one day and half the next, it should be assumed that there is some underlying problem, such as the failure of a network device.
  • the contemporary network management approach relies heavily on historical or past data to allocate resources optimally. However, with highly variable performance metrics, resources may be allocated unevenly, either over-allocated or under-allocated, resulting in inefficient resource allocation. Inaccurate resource allocation and forecasting lead to service degradation and increased operational costs.
  • the performance data includes data related to at least one of, Key Performance Indicators (KPIs) and counters.
  • the one or more features are selected based on a type of model required to be trained, and the one or more features include at least one of, counters and attributes.
  • while training the AI/ML model, the one or more processors enable the AI/ML model to learn patterns/trends/behaviour of the one or more network functions in the network.
  • the future performance of the one or more network functions is predicted based on an input received from a user via a user interface pertaining to at least one of, a time period and one or more parameters to predict the performance trends of the one or more network functions, wherein the one or more parameters are set by the user.
  • a system for predicting performance trends of one or more network functions includes a collecting unit, a selecting unit, a training unit, a feeding unit and a predicting unit.
  • the collecting unit is configured to collect, historic performance data associated with the one or more network functions.
  • the selecting unit is configured to select, one or more features from the collected historic performance data.
  • the training unit is configured to train an artificial intelligence/machine learning (AI/ML) model with the selected one or more features.
  • the feeding unit is configured to feed the trained AI/ML model with real time performance data of the one or more network functions.
  • the predicting unit is configured to predict the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
  • a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed, cause the processor to collect historic performance data associated with one or more network functions.
  • the processor selects one or more features from the collected historic performance data.
  • the processor trains an AI/ML model with the selected one or more features.
  • the processor feeds the trained AI/ML model with real time performance data of the one or more network functions.
  • the processor predicts the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
  • first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections; however, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer, or section from another. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
  • terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • Various embodiments of the present invention provide a system and method for forecasting performance trend for the network functions in a network for a certain time period configurable by an interface.
  • In response to the complex problem of forecasting future performance trends for NFs, the present system is configured to interact with Integrated Performance Management (IPM) by means of an AI/ML-based interlinking.
  • the present system is configured to predict performance trends accurately for a configurable time frame or period, such as the next 24 hours, 48 hours, or 7 days, as required and as configured by the user by means of a parameter configuration module (not shown), so as to allow for a proactive approach to network management.
  • the system combines present data, historical data analysis, machine learning, and real-time monitoring to address the challenges associated with performance variability and resource allocation.
  • the communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
  • the system (108) is communicatively coupled to a server (104) via the communication network (106).
  • the server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like.
  • the server (104) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defense facility side, or any other facility) that provides service.
  • the environment (100) further includes the system (108) communicably coupled to the server (e.g., remote server or the like) (104) and each UE of the plurality of UEs (102) via the communication network (106).
  • the remote server (104) is configured to execute the requests in the communication network (106).
  • the system (108) is adapted to be embedded within the remote server (104) or is embedded as an individual entity.
  • the system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations.
  • the system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for predicting performance trends of one or more network functions, which gets reflected in real time independent of the complexity of the network.
  • the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104).
  • the enterprise provisioning server provides flexibility for enterprise, e-commerce, and finance entities to update/create/delete information related to the requests for predicting performance trends of one or more network functions in real time as per their business needs.
  • a user with administrator rights can access and retrieve the requests for the predicting performance trends of one or more network functions and perform real-time analysis in the system (108).
  • the system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
  • the system (108) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defense facility side, or any other facility) that provides service.
  • system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
  • FIG. 2 illustrates a block diagram of the system (108) provided for predicting performance trends of the one or more network functions, according to one or more embodiments of the present invention.
  • the system (108) includes the one or more processors (202), the memory (204), a user interface (206), a display (208), an input device (210), and the database (214).
  • the one or more processors (202), hereinafter referred to as the processor (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
  • the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
  • Information related to predicting the performance trends of the one or more network functions may be provided or stored in the memory (204) of the system (108).
  • the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204).
  • the memory (204) may be configured to store one or more computer-readable instructions or routines in a non- transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
  • the memory (204) may include any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as disk memory, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, unalterable memory, and the like.
  • the system (108) may include an interface(s).
  • the interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like.
  • the interface(s) may facilitate communication for the system.
  • the interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the database (214).
  • the processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
  • the information related to predicting the performance trends of the one or more network functions may further be rendered on the user interface (206).
  • the user interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art.
  • the user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology.
  • the display (208) may be integrated within the system (108) or connected externally.
  • the input device(s) (210) may include, but not limited to, keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, magnetic strip reader, optical scanner, etc.
  • the one or more processors (202) are configured to transmit a response content related to predicting the performance trends of the one or more network functions to at least one UE, such as the UE (102-1).
  • a kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to enable the plurality of response contents hosted on the system (108) to access resources available in the communication network (106).
  • the resources include one of a Central Processing Unit (CPU), memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
  • FIG. 4 illustrates a system architecture (400) for predicting performance trends of the one or more network functions, in accordance with some embodiments.
  • the system architecture (400) includes a system-IPM interface (410) that is linked with an Integrated Performance Management (IPM) (402) and that allows the user to define the time period (for example, 24 hours, 48 hours, or the like) for which the prediction of performance trends is sought.
  • the data integrator (404) and the data processing unit (406) continuously collect the network performance data, including key performance indicators (KPIs) and counters, from the various NFs in real-time.
  • the data processing unit (406) analyzes the historical performance data to identify patterns, trends, and seasonality. This analysis provides insights into past performance behavior. Further, the data processing unit (406) continuously monitors real-time performance metrics from the NFs to capture any deviations from the predicted trends.
  • the AI/ML model is employed to learn from the historical data and make predictions for future performance trends.
  • the predicting unit (224) predicts the performance analytics of the one or more network functions by using the trained AI/ML model.
  • the predicting unit (224) generates predictions for the configured time period, considering historical patterns and real-time deviations.
  • the system architecture (400) also includes a resource allocation module (not shown) that allocates the network resources, such as bandwidth and processing capacity, based on these predictions to ensure that they align with expected demand.
  • the system architecture (400) also includes a learning module (not shown) that allows the system to continuously learn from new data and to adapt its predictions based on emerging trends, ensuring accuracy over time.
  • the most distinctive aspect of the present invention is its ability to combine advanced AI/ML methodology and mechanisms with both historical and real-time performance data to make highly accurate predictions while taking into account fluctuations in NF performance metrics.
  • the system architecture (400) may implement Application programming interface (API) as a medium of communication to communicate with the server(s) (104) in the network (106).
  • the system architecture (400) may operate and exchange the information in JSON (JavaScript Object Notation) format.
  • FIG. 5 is an exemplary flow diagram illustrating the method for predicting performance trends of one or more network functions, according to various embodiments of the present disclosure.
  • the method includes collecting the historic performance data associated with the one or more network functions.
  • the method allows the collecting unit (216) to collect the historic performance data associated with the one or more network functions.
  • the method includes selecting the one or more features from the collected historic performance data.
  • the method allows the selecting unit (218) to select the one or more features from the collected historic performance data.
  • the method includes training the AI/ML model with the selected one or more features.
  • the method allows the training unit (220) to train the AI/ML model with the selected one or more features.
  • the method includes feeding the real time performance data of the one or more network functions to the trained AI/ML model.
  • the method allows the feeding unit (222) to feed the trained AI/ML model with real time performance data of the one or more network functions.
  • the method includes predicting the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
  • the method allows the predicting unit (224) to predict the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
  • FIG. 6 is a flow diagram (600) illustrating an internal call flow for predicting performance trends of the one or more network functions, in accordance with some embodiments.
  • the present system performs historical data collection & analysis.
  • the system-IPM Interface (410) initiates the process by continuously gathering performance data, including KPIs and counters, from various network functions (NFs) in real-time.
  • the system analyses historical performance data to identify patterns, trends, and seasonality.
  • the data integrator (404) and the data processing unit (406) set data definition & normalization criteria.
  • the integrated data undergoes definition (of the data and its purpose), normalization, and cleaning procedures, ensuring its consistency, and is then sent to the database for storage and further analysis.
  • the user configures the feature and selects the hyper-parameter by means of the interface (410).
  • the relevant features as well as hyper parameters are selected.
  • the features can include counters, attributes, or other fields over which the model is trained.
  • the hyper-parameters can include parameter values which are specific to the model and over which the model is configured. In general, hyper-parameters are critical configurations that can significantly affect the performance of the AI/ML model.
  • the hyper-parameters may include the maximum depth of the tree, which determines how many levels of decision nodes the tree can have, and the minimum samples required to split a node, which affects how the tree is constructed. These hyper-parameters are not learned during training; instead, they must be set up before the training process begins.
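As a concrete illustration of the hyper-parameter discussion above, the tree settings named in the text (maximum tree depth, minimum samples required to split a node) can be captured in a configuration that is fixed before training rather than learned. The specific values and the validation helper below are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative hyper-parameter configuration for a tree-based forecasting
# model; these values are set before training and are not learned from data.
HYPER_PARAMS = {
    "max_depth": 8,            # maximum number of decision levels in the tree
    "min_samples_split": 20,   # minimum samples required to split a node
}

def validate_hyper_params(params):
    """Reject settings that would make training degenerate."""
    if params.get("max_depth", 0) < 1:
        raise ValueError("max_depth must be at least 1")
    if params.get("min_samples_split", 0) < 2:
        raise ValueError("min_samples_split must be at least 2")
    return params
```

Validating such a configuration up front, before the training process begins, is one way a user interface could reject unusable settings early.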

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Environmental & Geological Engineering (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure relates to a method for predicting performance trends of one or more network functions by one or more processors (202). The method includes collecting historic performance data associated with the one or more network functions. Further, the method includes selecting one or more features from the collected historic performance data. Further, the method includes training an artificial intelligence/machine learning (AI/ML) model with the selected one or more features. Further, the method includes feeding real time performance data of the one or more network functions to the trained AI/ML model. Further, the method includes predicting the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.

Description

METHOD AND SYSTEM FOR PREDICTING PERFORMANCE TRENDS OF ONE OR MORE NETWORK FUNCTIONS
FIELD OF THE INVENTION
[0001] The present invention relates to the field of network management and maintenance and, more specifically, to a system and a method for predicting performance metrics for network functions in a network.
BACKGROUND OF THE INVENTION
[0002] With an increase in the number of users, network service provisions have to be upgraded to accommodate the increased users and to enhance service quality so as to keep pace with such high demand. Network functions play a vital role in improving the quality of a network by way of managing traffic, delegating node allocation, managing the performance of routing devices (e.g., routers), etc. A network function is associated with micro-services executing several tasks (e.g., a resource allocation task, a session handling task, a handover task, or the like) in parallel. The data generated by the network services is vast, and analysis of such data is essential for enhancing user experience and improving service quality. However, there are several problems while working with network functions (NFs) and their performance data, such as irregular performance metrics with drastically varying measurement values and parameters, which in turn give rise to problems like inefficient resource allocation, poor user experience due to sudden variations, and service disruptions.
[0003] In a contemporary network architectural system, the NFs generate an enormous amount of performance data, including key performance indicators (KPIs) and counters. Performance metrics can vary significantly from one day to the next, making it challenging to predict future trends accurately. For example, changes in certain parameters within a range of +/- 1% of the standard value are acceptable, as these may result from sudden increases in users or from weather conditions. However, if the value deviates from the normal prediction, such as twice the nominal value one day and half the next, it should be assumed that there is some underlying problem, such as the failure of a network device. The contemporary network management approach relies heavily on historical or past data to allocate resources optimally. However, with highly variable performance metrics, resources may be allocated unevenly, either over-allocated or under-allocated, resulting in inefficient resource allocation. Inaccurate resource allocation and forecasting lead to service degradation and increased operational costs.
[0004] Presently, there is no mechanism to predict the behavior and performance of the network functions, which would be helpful in determining whether there is a fault in the network when the live values go haywire. There is a need for a system which can estimate the performance trend of the network functions for a certain period of time based on past data. The system may also be able to calibrate further predictions based on present values and by means of progressive learning. Therefore, there is presently a requirement for a system and a method thereof to accurately predict network function performance trends for a configurable period.
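The tolerance logic described in paragraph [0003] can be expressed as a small classification function. The thresholds follow the examples in the text (a +/- 1% band for normal variation, twice or half the nominal value as a likely fault); the function name and the returned labels are illustrative assumptions.

```python
def classify_deviation(observed, nominal, tolerance=0.01):
    """Classify a KPI reading relative to its nominal value.

    Within +/- tolerance (1% by default), the change is treated as normal
    variation from load or weather; a gross deviation such as double or
    half the nominal value is flagged as a likely underlying fault.
    """
    ratio = observed / nominal
    if abs(ratio - 1.0) <= tolerance:
        return "normal"          # e.g. sudden user increase, weather
    if ratio >= 2.0 or ratio <= 0.5:
        return "fault_suspected"  # e.g. failure of a network device
    return "deviation"           # outside tolerance, below fault bounds
```

A monitoring loop could apply such a check to each real-time metric against its predicted trend and escalate only the "fault_suspected" cases.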
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a system and a method for predicting performance trends of one or more network functions.
[0006] In one aspect of the present invention, a method for predicting performance trends of one or more network functions is disclosed. The method includes collecting, by one or more processors, historic performance data associated with the one or more network functions. Further, the method includes selecting, by the one or more processors, one or more features from the collected historic performance data. Further, the method includes training, by the one or more processors, an artificial intelligence/machine learning (AI/ML) model with the selected one or more features. Further, the method includes feeding, by the one or more processors, real time performance data of the one or more network functions to the trained AI/ML model. Further, the method includes predicting, by the one or more processors, the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
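The five steps of the method in paragraph [0006] can be sketched end to end. The moving-average "model" below is a deliberately trivial stand-in for the AI/ML model, used only to make the collect → select features → train → feed → predict flow concrete; the function names and the "throughput" feature are illustrative assumptions.

```python
def collect_historic(records):
    # Step 1: gather per-interval KPI measurements for a network function.
    return list(records)

def select_features(records, feature="throughput"):
    # Step 2: pick the counter/attribute the model will be trained on.
    return [r[feature] for r in records]

def train_model(values, window=3):
    # Step 3: "train" a trivial moving-average model (AI/ML stand-in).
    def model(recent):
        seen = (list(values) + list(recent))[-window:]
        return sum(seen) / len(seen)
    return model

def predict_trend(model, real_time_values):
    # Steps 4-5: feed real-time data and predict the next trend value.
    return model(real_time_values)

history = [{"throughput": v} for v in (10.0, 12.0, 11.0)]
model = train_model(select_features(collect_historic(history)))
forecast = predict_trend(model, [13.0])
```

In the disclosed system, each step would be handled by the corresponding unit (collecting, selecting, training, feeding, predicting), with a learned model in place of the moving average.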
[0007] In an embodiment, the performance data includes data related to at least one of, Key Performance Indicators (KPIs) and counters.
[0008] In an embodiment, the one or more features are selected based on a type of model required to be trained, and the one or more features include at least one of, counters and attributes.
[0009] In an embodiment, while training the AI/ML model, the one or more processors enable the AI/ML model to learn patterns/trends/behaviour of the one or more network functions in the network.
[0010] In an embodiment, the performance trends of the one or more network functions are predicted by the one or more processors when a deviation is detected in the real time performance data of the one or more network functions as compared to historical trends/patterns of the historic performance data.
[0011] In an embodiment, the future performance of the one or more network functions is predicted based on an input received from a user via a user interface pertaining to at least one of, a time period and one or more parameters to predict the performance trends of the one or more network functions, wherein the one or more parameters are set by the user.
[0012] In one aspect of the present invention, a system for predicting performance trends of one or more network functions is disclosed. The system includes a collecting unit, a selecting unit, a training unit, a feeding unit and a predicting unit. The collecting unit is configured to collect, historic performance data associated with the one or more network functions. The selecting unit is configured to select, one or more features from the collected historic performance data. The training unit is configured to train an artificial intelligence/machine learning (AI/ML) model with the selected one or more features. The feeding unit is configured to feed the trained AI/ML model with real time performance data of the one or more network functions. The predicting unit is configured to predict the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
[0013] In one aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions cause the processor to collect historic performance data associated with one or more network functions. The processor selects one or more features from the collected historic performance data. Further, the processor trains an AI/ML model with the selected one or more features. Further, the processor feeds the trained AI/ML model with real time performance data of the one or more network functions. Further, the processor predicts the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for predicting performance trends of one or more network functions, according to various embodiments of the present disclosure;
[0017] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure;
[0018] FIG. 3 is an example schematic representation of the system of FIG. 1 in which various entities operations are explained, according to various embodiments of the present disclosure;
[0019] FIG. 4 illustrates a system architecture for predicting performance trends of the one or more network functions, in accordance with some embodiments;
[0020] FIG. 5 is an exemplary flow diagram illustrating the method for predicting performance trends of the one or more network functions, according to various embodiments of the present disclosure; and
[0021] FIG. 6 is a flow diagram illustrating an internal call flow for predicting performance trends of the one or more network functions, in accordance with some embodiments.
[0022] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as being schematic representations and elements that are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software or a combination thereof.
[0028] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0029] Further, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0030] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0031] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0032] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0033] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0034] Various embodiments of the present invention provide a system and method for forecasting performance trend for the network functions in a network for a certain time period configurable by an interface.
[0035] In response to the complex problem of forecasting future performance trends for NFs, the present system is configured to interact with Integrated Performance Management (IPM) by means of an AI/ML-based interlinking. The present system is configured to predict performance trends accurately for a configurable time frame or period, such as the next 24 hours, 48 hours, or 7 days, as required and as configured by the user by means of a parameter configuration module (not shown), so as to allow for a proactive approach to network management. The system combines present data, historical data analysis, machine learning, and real-time monitoring to address the challenges associated with performance variability and resource allocation.
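As a non-limiting sketch, the user-configurable forecast window mentioned above (next 24 hours, 48 hours, or 7 days) could be captured by a small parameter-configuration object. The class name, field names, and the set of allowed horizons below are assumptions for illustration; the patent does not specify such an API.

```python
from dataclasses import dataclass

# Horizons mentioned above: next 24 hours, 48 hours, or 7 days (168 hours).
ALLOWED_HORIZONS_HOURS = (24, 48, 168)

@dataclass(frozen=True)
class ForecastConfig:
    horizon_hours: int    # configurable prediction window
    kpis: tuple = ()      # KPIs/counters selected by the user

    def __post_init__(self):
        if self.horizon_hours not in ALLOWED_HORIZONS_HOURS:
            raise ValueError(f"unsupported horizon: {self.horizon_hours} h")

cfg = ForecastConfig(horizon_hours=48, kpis=("call_drop_rate", "latency_ms"))
print(cfg.horizon_hours)
```

Validating the horizon at construction time mirrors the role of the parameter configuration module: an unsupported window is rejected before any prediction is attempted.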
[0036] FIG. 1 illustrates an exemplary block diagram of an environment (100) for predicting performance trends of one or more network functions, according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) (102-1, 102-2, ..., 102-n). At least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via a communication network (106). Hereafter, the plurality of UEs or one or more UEs is labelled 102.
[0037] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch, a computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or Voice Over Internet Protocol (VoIP) capabilities. In an embodiment, the UEs (102) may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment or a combination of one or more of the above devices such as smartphones, virtual reality (VR) devices, augmented reality (AR) devices, laptops, general-purpose computers, desktops, personal digital assistants, tablet computers, mainframe computers, or any other computing device, where the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user such as a touch pad, a touch-enabled screen, an electronic pen and the like. It may be appreciated that the UEs (102) may not be restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, and a landline with an assigned extension within the communication network (106).
[0038] The communication network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0039] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0040] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0041] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE (102) or a mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3rd Generation Partnership Project (3GPP) or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
[0042] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defense facility side, or any other facility) that provides service.
[0043] The environment (100) further includes the system (108) communicably coupled to the server (e.g., remote server or the like) (104) and each UE of the plurality of UEs (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0044] The system (108) is adapted to be embedded within the remote server (104) or is embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to access to update/create/delete one or more parameters of their relationship between the requests for predicting performance trends of one or more network functions, which gets reflected in real-time independent of the complexity of network.
[0045] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprises, ecommerce, finance to update/create/delete information related to the requests for the predicting performance trends of one or more network functions in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the predicting performance trends of one or more network functions and perform real-time analysis in the system (108).
[0046] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an implementation, system (108) may operate at various entities or single entity (for example include, but is not limited to, a vendor side, service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, ecommerce side, finance side, a defense facility side, or any other facility) that provides service.
[0047] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0048] FIG. 2 illustrates a block diagram of the system (108) provided for predicting performance trends of the one or more network functions, according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes one or more processors (202), a memory (204), a user interface (206), a display (208), an input device (210), and a database (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0049] Information related to predicting the performance trends of the one or more network functions may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0050] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the database (214). The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0051] The information related to the predicted performance trends of the one or more network functions may further be rendered on the user interface (206). The user interface (206) may include functionality similar to at least a portion of the functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0052] The database (214) may be communicably connected to the processor (202) and the memory (204). The database (214) may be configured to store and retrieve the request pertaining to features, or services or workflow of the system (108), access rights, attributes, approved list, and authentication data provided by an administrator. In another embodiment, the database (214) may be outside the system (108) and communicated through a wired medium and a wireless medium.
[0053] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0054] In order for the system (108) to predict the performance trends of the one or more network functions, the processor (202) includes a collecting unit (216), a selecting unit (218), a training unit (220), a feeding unit (222) and a predicting unit (224). The collecting unit (216), the selecting unit (218), the training unit (220), the feeding unit (222) and the predicting unit (224) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by the electronic circuitry.
[0055] In order for the system (108) to predict the performance trends of the one or more network functions, the collecting unit (216), the selecting unit (218), the training unit (220), the feeding unit (222) and the predicting unit (224) are communicably coupled to each other. In an embodiment, the collecting unit (216) collects historic performance data (e.g., call success rate, handover success rate, handover failure rate or the like) associated with the one or more network functions (e.g., Access and Mobility Management Function (AMF), Session Management Function (SMF) or the like). The performance data includes data related to at least one of, Key Performance Indicators (KPIs) and counters including metrics such as call drop rates, session establishment times, and user equipment (UE) handover success rates.
[0056] Further, the selecting unit (218) selects one or more features from the collected historic performance data. The one or more features include at least one of counters and attributes. The one or more features are selected based on a type of model required to be trained. In an example, the selecting unit (218) chooses specific features from the collected historic performance data for analysis. These features may include various counters, such as the average response time for service requests, and attributes like user location or network load. The selection of the features is based on the type of machine learning model that needs to be trained, for instance, a regression model aimed at predicting future network congestion or a classification model designed to identify anomalies in network performance. By tailoring the selected features to the model requirements, the system (108) enhances the accuracy and effectiveness of the machine learning analysis.
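The model-type-driven selection described above can be sketched as a simple mapping from model type to feature names. The feature names and the mapping itself are illustrative assumptions drawn from the examples in this paragraph.

```python
# Map each model type to the features it needs: the regression model here
# forecasts congestion from load-related counters, while the classification
# model looks for anomalies. All names are illustrative assumptions.
FEATURES_BY_MODEL = {
    "regression": ("avg_response_time_ms", "network_load", "active_sessions"),
    "classification": ("call_drop_rate", "handover_failure_rate", "user_location"),
}

def select_features(records, model_type):
    """Keep only the fields the chosen model type was designed to consume."""
    wanted = FEATURES_BY_MODEL[model_type]
    return [{k: r[k] for k in wanted if k in r} for r in records]

rows = [{"avg_response_time_ms": 12.5, "network_load": 0.7,
         "active_sessions": 4200, "call_drop_rate": 0.01}]
print(select_features(rows, "regression"))
```

Fields not relevant to the chosen model type (here, `call_drop_rate` for the regression model) are simply dropped before training.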
[0057] Further, the training unit (220) trains the AI/ML model with the selected one or more features. While training the AI/ML model, the training unit (220) enables the AI/ML model to learn patterns/trends/behaviour of the one or more network functions in the network (106). In an example, the training unit (220) trains the AI/ML model using the selected features, which include counters such as network latency and attributes like user device type. During the training process, the training unit (220) feeds the model with historical performance data, allowing it to learn patterns and trends associated with the network functions, such as the Access and Mobility Management Function (AMF) and Session Management Function (SMF). For instance, the AI/ML model may identify that increased latency correlates with higher user complaints during peak hours. By recognizing these patterns, the AI/ML model becomes adept at predicting future performance issues and can suggest proactive measures to optimize network operations.
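To make the training step concrete, a deliberately simple, standard-library-only sketch is shown below: a least-squares line fitted to a historic latency series stands in for the AI/ML model learning a trend. A real deployment would use a far richer model; the function and the synthetic data are illustrative assumptions only.

```python
def fit_linear_trend(samples):
    """Return (slope, intercept) of the least-squares line through
    (timestamp, value) samples."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(v for _, v in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * v for t, v in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Synthetic hourly latency that grows by 2 ms per hour
history = [(h, 10.0 + 2.0 * h) for h in range(24)]
slope, intercept = fit_linear_trend(history)
print(round(slope, 3), round(intercept, 3))
```

Once fitted, the (slope, intercept) pair encodes the "learned pattern" that later stages can use to extrapolate the KPI over the configured horizon.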
[0058] The feeding unit (222) feeds the trained AI/ML model with real time performance data of the one or more network functions. In an example, the feeding unit (222) supplies the trained AI/ML model with real-time performance data from various network functions, such as the AMF and the SMF. For example, as user sessions are initiated and managed, the feeding unit (222) continuously streams data on current metrics like active session counts, latency, and throughput. This real-time data allows the AI/ML model to apply its learned patterns and trends to make immediate predictions, such as identifying potential network congestion or service degradation. By integrating real-time data, the AI/ML model can enhance its responsiveness and provide actionable insights for optimizing network performance.
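Feeding real-time samples through a trained model can be sketched as below; the toy model, the 0.8 alert threshold, and the session counts are all assumptions made for this sketch:

```python
def feed_model(model, stream):
    """Apply a trained model to each real-time sample as it arrives."""
    alerts = []
    for sample in stream:
        score = model(sample)
        if score > 0.8:  # illustrative congestion-alert threshold
            alerts.append((sample["nf"], score))
    return alerts

# Toy stand-in for a trained model: congestion risk grows with active sessions
trained_model = lambda s: min(1.0, s["active_sessions"] / 10000)

realtime_stream = [
    {"nf": "AMF", "active_sessions": 3000},
    {"nf": "SMF", "active_sessions": 9500},
]
print(feed_model(trained_model, realtime_stream))  # [('SMF', 0.95)]
```

In the disclosed system the stream would be continuous and the model output would drive the predicting unit (224) rather than a simple alert list.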
[0059] Further, the predicting unit (224) predicts the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data. In an embodiment, the performance trends of the one or more network functions are predicted by the predicting unit (224) when deviation is detected in the real time performance data of the one or more network functions as compared to historical trends/patterns of the historic performance data.
[0060] In an example, if the real-time data shows a sudden spike in latency for the AMF, the predicting unit (224) compares this deviation against historical trends that indicate normal latency patterns during similar conditions. If the model identifies that such spikes have previously led to increased user complaints or session drops, the predicting unit (224) can predict potential performance issues. This proactive approach enables network operators to take corrective actions, such as reallocating resources or optimizing load balancing, before users are adversely affected.

[0061] The future performance of the one or more network functions is predicted based on the input received from a user via a user interface (206) pertaining to at least one of, a time period and one or more parameters (e.g., resource allocation, CPU usage or the like) to predict the performance trends of the one or more network functions, where the one or more parameters are set by the user. The example for predicting performance trends in a communication network is explained in FIG. 4 to FIG. 6.
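Detecting a deviation from historical patterns can be sketched with a simple z-score check against a baseline; the z-threshold of 3.0 and the latency values are illustrative assumptions, not the disclosed detection logic:

```python
import statistics

def is_deviation(history, value, z_threshold=3.0):
    """Flag a real-time value that departs from the historical baseline."""
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        return value != mean
    return abs(value - mean) / sd > z_threshold

baseline_latency = [20, 21, 19, 22, 20, 21, 20]  # ms, normal historical pattern
print(is_deviation(baseline_latency, 21))  # False: within normal range
print(is_deviation(baseline_latency, 60))  # True: sudden latency spike
```

A flagged deviation would then trigger the trend prediction described above, rather than merely raising an alarm.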
[0062] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present system. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration and should in no way be construed as limiting the scope of the present disclosure.
[0063] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to operate and, further, to execute the requests in the communication network (106).
[0064] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to predicting the performance trends of the one or more network functions to the UE (102-1). A kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106). The resources include at least one of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0065] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210), whose operations and functions are already explained in FIG. 2. Further, the processor (202) includes the collecting unit (216), the selecting unit (218), the training unit (220), the feeding unit (222) and the predicting unit (224), whose operations and functions are also already explained in FIG. 2. For the sake of brevity, the same operations are not repeated here.
[0066] FIG. 4 illustrates a system architecture (400) for predicting performance trends of the one or more network functions, in accordance with some embodiments. The system architecture (400) includes a system-IPM interface (410) that is linked with an Integrated Performance Management (IPM) (402) that allows the user to define the time period (for example, 24 hours, 48 hours or the like) for which the prediction of performance trends is sought.
[0067] The data integrator (404) and the data processing unit (406) continuously collect the network performance data, including key performance indicators (KPIs) and counters, from the various NFs in real-time. The data processing unit (406) analyzes the historical performance data to identify patterns, trends, and seasonality. This analysis provides insights into past performance behavior. Further, the data processing unit (406) continuously monitors real-time performance metrics from the NFs to capture any deviations from the predicted trends. By using the training unit (220), the AI/ML model is employed to learn from the historical data and make predictions for future performance trends. The predicting unit (224) predicts the performance analytics of the one or more network functions by using the trained AI/ML model. The predicting unit (224) generates predictions for the configured time period, considering historical patterns and real-time deviations. Further, the system architecture (400) also includes a resource allocation module (not shown) that allocates the network resources, such as bandwidth and processing capacity, based on these predictions to ensure that they align with expected demand. Further, the system architecture (400) also includes a learning module (not shown) that allows the system to continuously learn from new data and to adapt its predictions based on emerging trends, ensuring accuracy over time. A unique aspect of the present invention is its ability to combine an advanced AI/ML methodology and mechanism with both historical and real-time performance data to make highly accurate predictions while taking into account fluctuations in NF performance metrics.
[0068] For any operation, the system architecture (400) may implement an Application Programming Interface (API) as a medium of communication to communicate with the server(s) (104) in the network (106). The system architecture (400) may operate and exchange the information in JSON (JavaScript Object Notation) format.
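A JSON-format exchange between the system and a server might be sketched as below; the payload field names (`nf`, `horizon_hours`, `parameters`) are hypothetical, since the disclosure does not define a concrete message schema:

```python
import json

def build_prediction_request(nf_type, horizon_hours, parameters):
    """Serialize a prediction request for the configured time period as JSON."""
    payload = {
        "nf": nf_type,                 # target network function, e.g. "AMF"
        "horizon_hours": horizon_hours,  # user-defined prediction window
        "parameters": parameters,        # user-set parameters, e.g. CPU usage
    }
    return json.dumps(payload)

req = build_prediction_request("AMF", 24, {"cpu_usage": True, "resource_allocation": True})
decoded = json.loads(req)
print(decoded["nf"], decoded["horizon_hours"])  # AMF 24
```

The round-trip through `json.dumps`/`json.loads` mirrors the serialize-on-send, parse-on-receive exchange described above.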
[0069] FIG. 5 is an exemplary flow diagram illustrating the method for predicting performance trends of one or more network functions, according to various embodiments of the present disclosure.
[0070] At 502, the method includes collecting the historic performance data associated with the one or more network functions. In an embodiment, the method allows the collecting unit (216) to collect the historic performance data associated with the one or more network functions.
[0071] At 504, the method includes selecting the one or more features from the collected historic performance data. In an embodiment, the method allows the selecting unit (218) to select the one or more features from the collected historic performance data.
[0072] At 506, the method includes training the AI/ML model with the selected one or more features. In an embodiment, the method allows the training unit (220) to train the AI/ML model with the selected one or more features.
[0073] At 508, the method includes feeding the real time performance data of the one or more network functions to the trained AI/ML model. In an embodiment, the method allows the feeding unit (222) to feed the trained AI/ML model with real time performance data of the one or more network functions.
[0074] At 510, the method includes predicting the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data. In an embodiment, the method allows the predicting unit (224) to predict the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
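The five steps above (502 to 510) can be sketched end-to-end as one function; the mean baseline standing in for the trained model and the 1.5x degradation threshold are deliberate simplifications for illustration only:

```python
def pipeline(historic, realtime):
    """Toy end-to-end run of steps 502-510 with a mean-latency baseline."""
    features = [r["latency"] for r in historic]      # 502/504: collect + select feature
    baseline = sum(features) / len(features)         # 506: trivial "trained model"
    trends = []
    for sample in realtime:                          # 508: feed real-time data
        # 510: predict trend from deviation versus the learned baseline
        trend = "degrading" if sample["latency"] > 1.5 * baseline else "stable"
        trends.append(trend)
    return trends

historic = [{"latency": 20}, {"latency": 22}, {"latency": 18}]
realtime = [{"latency": 21}, {"latency": 45}]
print(pipeline(historic, realtime))  # ['stable', 'degrading']
```

Each comment maps back to the corresponding method step, while a real deployment would substitute an actual AI/ML model for the baseline.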
[0075] FIG. 6 is a flow diagram (600) illustrating an internal call flow for predicting performance trends of the one or more network functions, in accordance with some embodiments.
[0076] At 602, the present system performs historical data collection & analysis. The system-IPM interface (410) initiates the process by continuously gathering performance data, including KPIs and counters, from various network functions (NFs) in real-time. The system analyzes historical performance data to identify patterns, trends, and seasonality.
[0077] At 604, the data integrator (404) and the data processing unit (406) set data definition & normalization criteria. The integrated data undergoes data definition, purpose identification, normalization, and cleaning procedures, ensuring its consistency, and is then sent to the database for storage and further analysis.

[0078] At 606, the user configures the features and selects the hyper-parameters by means of the interface (410). The relevant features as well as the hyper-parameters are selected. The features can include counters, attributes, or other fields over which the model is trained. The hyper-parameters can include parameter values which are specific to the model and over which the model is configured. In general, hyper-parameters are critical configurations that can significantly affect the performance of the AI/ML model. For instance, in a decision tree model, the hyper-parameters may include the maximum depth of the tree, which determines how many levels of decision nodes the tree can have, and the minimum number of samples required to split a node, which affects how the tree is constructed. These hyper-parameters are not learned during training; instead, they must be set before the training process begins.
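Configuring the decision-tree hyper-parameters described above, before training begins, could look like the sketch below; the parameter names follow common machine-learning library conventions (e.g. scikit-learn style) and are assumptions, not the disclosed configuration keys:

```python
# Hypothetical defaults for the decision-tree hyper-parameters named in the text
DEFAULTS = {"max_depth": 5, "min_samples_split": 2}

def configure_hyperparameters(user_choices):
    """Merge user-selected hyper-parameters over the defaults, rejecting unknowns."""
    cfg = dict(DEFAULTS)
    for key, value in user_choices.items():
        if key not in DEFAULTS:
            raise KeyError(f"unknown hyper-parameter: {key}")
        cfg[key] = value
    return cfg

print(configure_hyperparameters({"max_depth": 8}))
```

Because hyper-parameters are not learned during training, this configuration step necessarily runs before the training process at step 608.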
[0079] At 608, the AI/ML model is fed with the data and analysis outputs. The collected data, the extracted features, and the configured hyper-parameters are employed to learn from the historical data and make predictions for future performance trends using advanced machine learning models, which are either custom-designed or pre-defined.
[0080] At 610, each training configuration is assigned a unique training name, and the progress is tracked in the AI/ML model. Once training is done, the future trend prediction is performed and validation metrics such as accuracy are shown. The model is also stored in the data lake (408) under its unique training name for instant reuse when training other NFs. If the model performance is not found optimal, new input data is provided by selecting a new data source, and manual or automatic tuning of the hyper-parameters is performed to further improve model performance. The comparative data from real-time monitoring are also provided to the machine learning models by means of the training unit (220).
[0081] At 612, the method includes retraining the machine learning model. At 614, the predicted future trends are sent to the network teams for them to take proactive measures in order to prevent any service disruptions.
[0082] In preferred embodiments, the system and method may be executed in a manner where, for any operation, the API or any compatible alternative may be considered a medium of communication, and every operation may be performed via an HTTP request for communication between the user and other network elements such as servers; information exchange may be performed in JSON, YAML, Avro, MongoDB, OData, Python or any other compatible format.
[0083] In preferred embodiments, the method may also include various steps to collect information from network elements such as servers and other network functions, trigger consecutive operational procedures, improve the learning methodology of the machine learning models, and the like, and may not be considered strictly limited to the above steps.
[0084] The technical advancements of the present invention are described below:
[0085] The present disclosure relates to a system and a method to forecast performance of network functions (NFs) in the network (106) for a configurable time frame, which is essential for resource allocation, operation and management of the network, fault observation, etc. The present invention is configured to combine advanced AI/ML model training for highly accurate prediction when provided with both historical and real-time performance data, while taking into account fluctuations in NF performance metrics.
[0086] The invention provides accurate predictions of performance trends for all NFs, enabling proactive resource allocation and management. With precise forecasting, the network resources can be allocated optimally, reducing operational costs and resource waste. Improved resource allocation also results in a consistently high-quality user experience, leading to higher customer satisfaction. The present system, interfaced with the IPM (402), allows network performance data to be directly loaded for AI/ML model training.
[0087] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0089] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0090] Environment - 100
[0091] UEs- 102, 102-1-102-n
[0092] Server - 104
[0093] Communication network - 106
[0094] System - 108
[0095] Processor - 202
[0096] Memory - 204
[0097] User Interface - 206
[0098] Display - 208
[0099] Input device - 210
[00100] Database - 214
[00101] Collecting unit - 216
[00102] Selecting unit - 218
[00103] Training unit - 220
[00104] Feeding unit - 222
[00105] Predicting unit - 224
[00106] System - 300
[00107] Primary processors -305
[00108] Memory- 310
[00109] Kernel- 315
[00110] System architecture - 400
[00111] IPM - 402
[00112] Data integrator- 404
[00113] Data processing unit - 406
[00114] Data lake - 408
[00115] System-IPM interface - 410

Claims

CLAIMS: We Claim
1. A method for predicting performance trends of one or more network functions, the method comprising the steps of: collecting, by one or more processors (202), historic performance data associated with the one or more network functions; selecting, by the one or more processors (202), one or more features from the collected historic performance data; training, by the one or more processors (202), an artificial intelligence/machine learning (AI/ML) model with the selected one or more features; feeding, by the one or more processors (202), real time performance data of the one or more network functions to the trained AI/ML model; and predicting, by the one or more processors (202), performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
2. The method as claimed in claim 1, wherein the performance data includes data related to at least one of, Key Performance Indicators (KPIs) and counters.
3. The method as claimed in claim 1, wherein the one or more features are selected based on a type of model required to be trained, the one or more features include at least one of, counters and attributes.
4. The method as claimed in claim 1, wherein while training the model, the one or more processors (202) enable the AI/ML model to learn patterns/trends/behaviour of the one or more network functions in the network.
5. The method as claimed in claim 1, wherein the performance trends of the one or more network functions are predicted by the one or more processors (202) when deviation is detected in the real time performance data of the one or more network functions as compared to historical trends/patterns of the historic performance data.
6. The method as claimed in claim 1, wherein the future performance of the one or more network functions is predicted based on an input received from a user via a user interface (206) pertaining to at least one of, a time period and one or more parameters to predict the future performance of the one or more network functions, wherein the one or more parameters are set by the user.
7. A system (108) for predicting performance trends of one or more network functions, the system (108) comprising: a collecting unit (216), configured to, collect, historic performance data associated with the one or more network functions; a selecting unit (218), configured to, select, one or more features from the collected historic performance data; a training unit (220), configured to, train, an artificial intelligence/machine learning (AI/ML) model with the selected one or more features; a feeding unit (222), configured to, feed, the trained AI/ML model with real time performance data of the one or more network functions; and a predicting unit (224), configured to, predict, the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
8. The system (108) as claimed in claim 7, wherein the performance data includes data related to at least one of, Key Performance Indicators (KPIs) and counters.
9. The system (108) as claimed in claim 7, wherein the one or more features are selected based on a type of model required to be trained, the one or more features include at least one of, counters and attributes.
10. The system (108) as claimed in claim 7, wherein while training the AI/ML model, the training unit (220) enables the AI/ML model to learn patterns/trends/behaviour of the one or more network functions in the network.
11. The system (108) as claimed in claim 7, wherein the performance trends of the one or more network functions are predicted by the predicting unit (224) when deviation is detected in the real time performance data of the one or more network functions as compared to historical trends/patterns of the historic performance data.
12. The system (108) as claimed in claim 7, wherein the future performance of the one or more network functions is predicted based on an input received from a user via a user interface (206) pertaining to at least one of, a time period and one or more parameters to predict the performance trends of the one or more network functions, wherein the one or more parameters are set by the user.
13. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor (202), causes the processor (202) to: collect, historic performance data associated with one or more network functions; select, one or more features from the collected historic performance data; train, an artificial intelligence/machine learning (AI/ML) model with the selected one or more features; feed, the trained AI/ML model with real time performance data of the one or more network functions; and predict, the performance trends of the one or more network functions utilizing the trained AI/ML model based on the real time performance data.
PCT/IN2024/052031 2023-10-10 2024-10-10 Method and system for predicting performance trends of one or more network functions Pending WO2025079092A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321068028 2023-10-10
IN202321068028 2023-10-10

Publications (1)

Publication Number Publication Date
WO2025079092A1 true WO2025079092A1 (en) 2025-04-17

Family

ID=95395256

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/052031 Pending WO2025079092A1 (en) 2023-10-10 2024-10-10 Method and system for predicting performance trends of one or more network functions

Country Status (1)

Country Link
WO (1) WO2025079092A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324371A1 (en) * 2013-04-26 2014-10-30 Telefonaktiebolaget L M Ericsson (Publ) Predicting a network performance measurement from historic and recent data
US20170034720A1 (en) * 2015-07-28 2017-02-02 Futurewei Technologies, Inc. Predicting Network Performance
WO2023012359A1 (en) * 2021-08-05 2023-02-09 Telefonaktiebolaget Lm Ericsson (Publ) Method for monitoring performance of an artificial intelligence (ai)/machine learning (ml) model or algorithm


Similar Documents

Publication Publication Date Title
RU2753962C2 (en) Network assistant based on artificial intelligence
US11252261B2 (en) System and method for analyzing web application network performance
US9432865B1 (en) Wireless cell tower performance analysis system and method
US12132797B2 (en) Intelligent monitoring systems to predict likelihood of subscriber churn
US20150244645A1 (en) Intelligent infrastructure capacity management
CN104937585A (en) Dynamic recommendation of routing rules for contact center use
US12020272B2 (en) Market segment analysis of product or service offerings
US11722371B2 (en) Utilizing unstructured data in self-organized networks
US20110106579A1 (en) System and Method of Management and Reduction of Subscriber Churn in Telecommunications Networks
US11310125B2 (en) AI-enabled adaptive TCA thresholding for SLA assurance
US11327747B2 (en) Sentiment based offline version modification
US20230085756A1 (en) Systems and methods relating to routing incoming interactions in a contact center
WO2025079092A1 (en) Method and system for predicting performance trends of one or more network functions
Leontiadis et al. The good, the bad, and the KPIs: how to combine performance metrics to better capture underperforming sectors in mobile networks
Frias et al. Measuring Mobile Broadband Challenges and Implications for Policymaking
Yusuf-Asaju et al. Mobile network quality of experience using big data analytics approach
US20230269652A1 (en) Control of communication handovers based on criticality of running software programs
WO2025017691A1 (en) Method and system for executing requests in network
WO2025079119A1 (en) System and method for managing installation of optical fiber devices
WO2025057244A1 (en) System and method of managing one or more application programming interface (api) requests in network
WO2025017746A1 (en) Method and system for generating reports
WO2025017598A1 (en) System and method for answer seizure ratio prediction
EP3829110B1 (en) Self-managing a network for maximizing quality of experience
WO2025017702A1 (en) Method and system for providing service experience analytics
WO2025079091A1 (en) Method and system for managing migration of customers between networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24876850

Country of ref document: EP

Kind code of ref document: A1