WO2025196790A1 - System and method for scheduling an automated call test on a user device - Google Patents
- Publication number
- WO2025196790A1 (PCT/IN2025/050128)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- call test
- automated call
- test
- network
- work order
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/22—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/06—Generation of reports
- H04L43/067—Generation of reports using time frame reporting
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0888—Throughput
Definitions
- a portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred to as the owner).
- owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
- the embodiments of the present disclosure generally relate to telecommunications network testing.
- the present disclosure relates to a system for scheduling an automated call test on a user device.
- Work order refers to a set of instructions for performing an automated call test, including details such as the type of test, scheduling information, and specific parameters to be measured.
- KPIs: Key Performance Indicators
- CSSR: Call Setup Success Rate
- E-RAB: Evolved Radio Access Bearer
- Automated call test refers to a process where a user device automatically initiates and completes a call to measure various network performance indicators without requiring manual intervention from the user.
- Short call test refers to a brief automated call designed to measure the rapid connection and disconnection performance of the network.
- Long call test refers to an extended automated call designed to evaluate sustained connection quality and stability of the network.
- Background process refers to a software routine that runs on the user device to execute the automated call test without interfering with or being visible to the active user applications.
- Firebase Cloud Messaging (FCM) refers to a cross-platform messaging solution that allows the system to send messages to user devices reliably.
- Coverage platform refers to a system component responsible for creating and managing work orders for automated call tests across a network. Coverage platform plays a central role in monitoring and optimizing network performance by creating, scheduling, and managing work orders for various types of automated tests, such as voice calls, data sessions, and other network quality assessments. The platform provides visibility into the network’s coverage and performance across different geographic areas, ensuring that tests are conducted in regions of interest or where performance issues are identified. In an example, the coverage platform may be a network coverage monitoring platform.
- Evolved Radio Access Bearer (E-RAB) refers to the data connection between the user device and the core network in LTE (Long-Term Evolution) systems.
- Codec refers to the software or hardware used for encoding and decoding digital data streams or signals, particularly in the context of voice calls.
- Speed test server refers to a dedicated server used to conduct network performance tests, including measuring data transfer rates and latency.
- RANs (radio access networks) typically consist of radio base stations with large antennas that wirelessly connect user devices to the broader network infrastructure.
- with the evolution toward 6G and increasing user demands, RANs are becoming increasingly complex, featuring higher speeds, more interconnected units, and the integration of various sub-networks into larger ones.
- a system for scheduling an automated call test on a user equipment comprises a memory and one or more processors configured to execute instructions stored in the memory.
- the instructions include creating a work order for the automated call test in a coverage platform using a work order management module.
- the instructions are for sending, by a communication module, a push notification to one of a plurality of user devices selected based on a device identifier (ID) stored in a database.
- the instructions are for executing, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
- the push notification comprises a script defining procedures for the automated call test, and a scheduled date and time for executing the automated call test.
- the user equipment is configured to execute the automated call test based on the script, scheduled date and time received in the push notification.
- the one or more processors are further configured to receive, by a data processing module, collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment in the background as defined in the push notification.
- the plurality of KPIs collected by the data processing module comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity.
- the system is further configured to record, by a database management module, the received collected data of the automated call test.
- the data processing module is further configured to initiate the automated call test by instructing the user equipment to dial a toll-free number automatically.
- the defined automated call test is one of a short call test or a long call test.
- the short call test is designed to measure rapid connection and disconnection performance
- the long call test is designed to evaluate sustained connection quality and stability.
- the system further comprises a user interface module configured to provide a user interface for scheduling the automated call tests with customizable parameters, viewing results of the executed test, accessing historical performance data, and configuring alerts for defined KPIs.
- the work order management module is further configured to create multiple work orders for different types of automated call tests, including voice calls and data sessions.
- the work order management module is further configured to assign priorities to work orders based on network performance urgency.
- the work order management module is further configured to manage a distributed network of speed test servers for conducting the automated call test.
- a method for scheduling an automated call test on a user equipment comprises creating, by a work order management module, a work order for the automated call test in a coverage platform.
- the method comprises sending, by a communication module, a push notification to one of a plurality of user devices selected based on a device identifier (ID) stored in a database.
- the method comprises executing, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
- the automated call test comprises at least one of a short call test or a long call test.
- the method further comprises receiving, by a data processing module, collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment in the background as defined in the push notification.
- the plurality of KPIs collected by the data processing module comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity.
- the method further comprises recording, by a database management module, the received collected data of the automated call test.
- the method further comprises initiating, by the data processing module, the automated call test by instructing the user equipment to dial a toll-free number automatically.
- the method further comprises creating, by the work order management module, multiple work orders for different types of automated call tests, including voice calls and data sessions.
- the method further comprises assigning, by the work order management module, priorities to work orders based on network performance urgency.
- the method further comprises managing, by the work order management module, a distributed network of speed test servers for conducting the automated call test.
- a User Equipment (UE) for facilitating an automated call test is also described.
- the UE is configured to receive, from a communication module of a system, a push notification comprising instructions for an automated call test.
- the UE is selected based on a device identifier (ID) stored in a database.
- the UE is configured to execute, by a data processing module, the scheduled automated call test by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
- a non-transitory computer-readable storage medium storing computer-executable instructions is described.
- when executed by one or more processors, the instructions cause the one or more processors to perform a method for scheduling an automated call test on a user equipment.
- the method comprises creating, by a work order management module, a work order for the automated call test in a coverage platform.
- the method comprises sending, by a communication module, a push notification to one of a plurality of user devices selected based on a device identifier (ID) stored in a database.
- the method comprises executing, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
- the automated call test comprises at least one of a short call test or a long call test.
- An objective of the present disclosure is to provide a system and a method for scheduling automated call tests on user devices, thereby enabling efficient network performance monitoring without manual intervention.
- An objective of the present disclosure is to provide a system and a method that assigns work orders to specific devices based on device identifiers, thereby ensuring targeted and systematic network testing across various locations.
- An objective of the present disclosure is to provide a system and a method that executes call tests in the background of user devices, thereby minimizing disruption to users while collecting valuable network data.
- An objective of the present disclosure is to provide a system and a method that collects and records key performance indicators from automated call tests, thereby facilitating real-time network optimization and improvement.
- An objective of the present disclosure is to provide a system and a method that sends scripts and configuration parameters via push notifications, thereby enabling flexible and customizable call test execution.
- An objective of the present disclosure is to provide a system and a method that aggregates data from multiple devices and test types, thereby generating comprehensive network coverage maps and performance reports.
- An objective of the present disclosure is to provide a system and a method that implements both short and long call tests, thereby evaluating various aspects of network performance and stability.
- An objective of the present disclosure is to provide a system and a method that analyzes collected data to identify trends and anomalies, thereby enabling proactive network issue resolution and optimization.
- An objective of the present disclosure is to provide a system and a method that manages a distributed network of speed test servers, thereby ensuring reliable and geographically diverse network testing capabilities.
- An objective of the present disclosure is to provide a system and a method that stores and retrieves historical test data, thereby supporting long-term network performance analysis and strategic planning.
- An objective of the present disclosure is to provide a system and a method that generates recommendations for network optimization, thereby assisting network operators in improving service quality and user experience.
- Other objectives and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
- FIG. 1 illustrates an exemplary network architecture of a system for scheduling an automated call test on a user equipment, in accordance with embodiments of the present disclosure.
- FIG. 2 illustrates an exemplary micro service-based architecture of the system for scheduling the automated call test on the user equipment based on a work order, in accordance with embodiments of the present disclosure.
- FIG. 3 illustrates an exemplary system architecture for scheduling the automated call test on the user equipment based on the work order, in accordance with an embodiment of the present disclosure.
- FIG. 4 illustrates an exemplary flow diagram for scheduling the automated call test on the user equipment based on the work order, in accordance with an embodiment of the present disclosure.
- FIG. 5 illustrates a method for scheduling the automated call test on the user equipment based on the work order, in accordance with an embodiment of the present disclosure.
- FIG. 6 illustrates an exemplary computer system in which or with which embodiments of the present disclosure may be implemented.
- individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged.
- a process is terminated when its operations are completed but could have additional steps not included in a figure.
- a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
- “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration.
- the subject matter disclosed herein is not limited by such examples.
- any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
- where the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
- the aspects of the present disclosure are directed to a system and method for scheduling an automated call test on a user equipment based on a work order.
- the system is configured to detect the current location of the UE and monitor and obtain data corresponding to a plurality of key performance indicators (KPIs) of a site and multiple operators.
- KPIs key performance indicators
- the system is further configured for processing and visualizing the collected data, analyzing the site based on user input and processed data, and generating recommendations tailored to the user type. This comprehensive approach enables real-time visualization, analysis, and operator selection for network sites, enhancing both field engineers' efficiency and end customers' network selection process.
- FIG. 1 illustrates a network architecture (100) of a system (102) for scheduling an automated call test on a user equipment (108) based on a work order, in accordance with embodiments of the present disclosure.
- the system (102) may be configured to implement an Operation Support Systems /Business Support Systems (OSS/BSS) service.
- the system (102) is connected to a network (104), which is further connected to at least one computing device 108-1, 108-2, ... 108-N (collectively referred to as computing device 108, herein) associated with one or more users 110-1, 110-2, ... 110-N (collectively referred to as user (110), herein).
- the computing device (108) may be personal computers, laptops, tablets, wristwatches, or any custom-built computing device integrated within a modern diagnostic machine that can connect to a network as an IoT (Internet of Things) device.
- the computing device (108) may also be referred to as User Equipment (UE) or user device. Accordingly, the terms “computing device” and “User Equipment” may be used interchangeably throughout the disclosure.
- the user (110) is a network operator or a field engineer. Further, the network (104) can be configured with a centralized server that stores compiled data.
- the system (102) may receive at least one input data from the user (110) via the at least one computing device (108).
- the user (110) may be configured to initiate the process of scheduling the automated call test, through an application interface of a mobile application installed in the computing devices (108).
- the mobile application may be configured to communicate with a network analysis server.
- the mobile application may be a software application from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., the Play Store for Android OS provided by Google Inc., and other similar application distribution platforms.
- the computing device (108) may transmit the at least one captured data packet over a point-to-point or point-to-multipoint communication channel or network (104) to the system (102).
- the computing device (108) may involve collection, analysis, and sharing of data received from the system (102) via the network (104).
- the system (102) may be connected to one or more web servers (112) and a Firebase Cloud Messaging (FCM) server (114) via the network (104).
- the FCM server may be a cross-platform messaging solution that allows the system (102) to deliver messages reliably.
- the FCM server enables sending notification messages to drive user re-engagement and retention.
- the FCM server can send two types of messages: notification and data.
- the FCM server supports message targeting to single devices, groups of devices, or topics, and can be used with Android, iOS, and web applications.
- the network (104) may include, but not be limited to, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, or process one or more messages, packets, signals, waves, or voltage or current levels, or some combination thereof.
- the network (104) may include, but not be limited to, a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
- FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
- FIG. 2 illustrates an exemplary micro service-based architecture (200) of the system (102) for scheduling the automated call test on the UE (108), in accordance with an embodiment of the present disclosure.
- the system (102) includes one or more processor(s) (202), a memory (204), a database (208), and an interface(s) (206).
- the one or more processor(s) (202) may include one or more modules/engines selected from any of a work order management module (212), a communication module (214), a data processing module (216), a database management module (218), a user interface module (220) and other module(s) (222) having functions that may include but are not limited to receiving data, processing data, testing, storage, and peripheral functions, such as wireless communication unit for remote operation, audio unit for alerts and the like.
- the one or more processor(s) (202) is configured to initiate the process of scheduling the automated call test through the mobile application interface of the UE (108).
- the application interface is configured to transmit one or more instructions to the one or more processor(s) (202).
- the interface(s) (206) is included within the system (102) to serve as a medium for data exchange, configured to facilitate user interaction with the mobile application.
- the interface(s) (206) may be composed of interfaces for data input and output devices, storage devices, and the like, providing a communication pathway for the various components of the system (102).
- the interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like.
- the interface(s) (206) may facilitate communication to/from the system (102).
- the one or more processor(s) (202) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the one or more processor(s) (202).
- programming for the one or more processor(s) (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium
- the hardware for the one or more processor(s) (202) may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the one or more processor(s) (202).
- the system (102) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (102) and the processing resource.
- the one or more processor(s) (202) may be implemented by electronic circuitry.
- the database (208) is configured to serve as a centralized repository for storing and retrieving various operational data.
- the database (208) is designed to interact seamlessly with other components of the system (102) to support the functionality of the system effectively.
- the database (208) may store data that may be either stored or generated as a result of functionalities implemented by any of the components of the system (102).
- the database (208) may be separate from the system (102).
- the database (208) may reside at a remote location or be integrated with the server, depending on the configuration of the system.
- the database may be hosted at a remote location, such as a cloud-based environment or a dedicated data center, enabling centralized data storage and facilitating access from multiple devices or systems across a network. This configuration allows for enhanced scalability, redundancy, and accessibility, supporting distributed systems where data access is required across various regions.
- the database (208) may be integrated with the server, wherein the data is stored locally on-site.
- the database (208) may encompass various types, depending on the specific requirements of the application.
- a relational database may be employed, wherein data is stored in tables with predefined relationships, ensuring data consistency and supporting complex queries.
- a NoSQL database may be used, designed to handle unstructured or semi-structured data, offering scalability and flexibility for real-time applications.
- a distributed database may be implemented, wherein data is spread across multiple locations to ensure high availability, fault tolerance, and efficient access across regions.
- a cloud database may be utilized, providing scalable and on-demand data storage with internet-based accessibility.
- an in-memory database may be used, storing data in the system's main memory to enable faster data access.
- a graph database may be employed for managing complex relationships in data, such as those found in social networks or recommendation systems.
- an object-oriented database may be utilized, storing data in the form of objects to model complex data relationships.
- the work order management module (212) may create and manage work orders for automated call tests. For instance, a network operator may need to assess the call quality in a newly developed residential area.
- the work order management module (212) may create a work order specifying parameters such as the test duration (e.g., 2 minutes), call type (e.g., voice call), and specific network band to be tested (e.g., 4G LTE).
- the work orders may be created at a coverage platform.
- the work order management module (212) may then assign the created work order to a specific user equipment (108) based on its device identifier (ID) stored in the database (208). For example, if the database (208) indicates that UE with ID "A1B2C3" is frequently located in the target residential area, the work order management module (212) may assign the work order to this device. This targeted assignment ensures that the test is conducted in the relevant geographical area without requiring manual dispatching of testing personnel.
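- as an illustrative sketch only (the disclosure provides no code), the creation-and-assignment flow above might look like the following Python; names such as WorkOrder and device_registry are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical work-order record; the field names are illustrative assumptions.
@dataclass
class WorkOrder:
    order_id: str
    test_type: str                 # e.g., "voice_call"
    duration_s: int                # e.g., 120 (a 2-minute test)
    network_band: str              # e.g., "4G LTE"
    target_area: str
    assigned_device: Optional[str] = None
    created_at: datetime = field(default_factory=datetime.now)

# Hypothetical device registry keyed by device identifier (ID),
# standing in for the database (208).
device_registry = {
    "A1B2C3": {"frequent_area": "new residential area"},
    "D4E5F6": {"frequent_area": "downtown"},
}

def assign_work_order(order: WorkOrder) -> WorkOrder:
    """Assign the order to a device frequently located in the target area."""
    for device_id, info in device_registry.items():
        if info["frequent_area"] == order.target_area:
            order.assigned_device = device_id
            break
    return order

order = assign_work_order(
    WorkOrder("WO-001", "voice_call", 120, "4G LTE", "new residential area"))
print(order.assigned_device)  # -> "A1B2C3"
```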
- the work order management module (212) may be further configured to schedule multiple work orders over an extended period. For example, to monitor the impact of a newly installed 5G tower, the work order management module (212) may schedule daily automated call tests for a period of three months. These tests could be distributed among various UEs in the vicinity of the tower.
- the work order management module (212) may be configured for assessing the following parameters:
- the work order management module (212) may divide the area around the network into sectors and ensure that tests are conducted in each sector. For instance, the work order management module (212) might assign tests to UEs located north, south, east, and west of the tower.
- the work order management module (212) may schedule tests at different times of the day to capture variations in network performance. For example, the work order management module (212) might schedule tests during peak hours (e.g., 9 AM and 6 PM) and off-peak hours (e.g., 3 AM and 11 AM).
- the work order management module (212) may distribute tests across different device models to account for potential device-specific performance variations. For instance, it might assign tests to both high-end smartphones and budget devices to ensure comprehensive coverage.
- historical data consideration: if historical data shows that a particular area consistently experiences issues during rainy weather, the work order management module (212) might increase the frequency of tests in that area during the rainy season.
- the work order management module (212) might dynamically increase the number of tests in that sector to gather more data and identify the root cause.
- the work order management module (212) may be capable of creating multiple work orders for different types of automated call tests, catering to various network services and technologies. For example, the work order management module (212) may simultaneously create work orders for voice call quality tests on a 4G network and data throughput tests on a 5G network. In a specific scenario, the work order management module (212) might create a work order for testing Voice over LTE (VoLTE) call quality in urban areas, while also generating a separate work order for evaluating 5G data speeds in newly deployed mmWave sectors.
- the work order management module (212) may assign priorities to work orders based on network performance urgency. For instance, if customer complaints about dropped calls in a particular business district have spiked, the work order management module (212) may assign a high priority to call stability tests in that area. Conversely, routine data speed tests in a stable residential area might receive a lower priority. This prioritization ensures that critical issues are addressed promptly, minimizing customer dissatisfaction and potential revenue loss. For instance, if the system detects a high volume of call drops in a particular geographical region, the work order management module (212) assigns a high priority to the work order related to automated call tests in that region to quickly assess the underlying cause and mitigate the issue.
- the work order management module (212) may assign a lower priority to the corresponding work orders, scheduling them for execution at a later time.
- the priority assignment is based on predefined thresholds set by the network operator, which define what constitutes an urgent network performance issue. These priorities are communicated to the relevant speed test servers, which execute the automated call tests on user devices according to the assigned priority level, ensuring that critical network areas are addressed promptly while less urgent tasks are handled as resources allow.
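- a minimal sketch of such threshold-based priority assignment follows; the 5% call-drop threshold and the two-level priority scheme are assumptions, since the disclosure leaves the thresholds to the network operator.

```python
# Illustrative two-level priority rule; the 5% threshold is an assumed
# operator-defined value, not one given in the disclosure.
URGENT_CALL_DROP_RATE = 0.05

def work_order_priority(region_call_drop_rate: float) -> str:
    """Map an observed call-drop rate to a work-order priority level."""
    if region_call_drop_rate >= URGENT_CALL_DROP_RATE:
        return "high"   # execute the automated call test promptly
    return "low"        # schedule for a later, off-peak execution

print(work_order_priority(0.08))  # -> "high"
print(work_order_priority(0.01))  # -> "low"
```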
- the work order management module (212) may manage a distributed network of speed test servers for conducting the automated call tests. This management may involve selecting the most appropriate server based on geographical proximity and current load. For example, if a work order requires testing in New York City, the work order management module (212) may choose a speed test server located in Newark, New Jersey, to minimize latency. If that server is experiencing high load, the work order management module (212) might instead route the test to a less busy server in Philadelphia, balancing the need for geographical proximity with optimal server performance.
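- the proximity-and-load rule in the example above might reduce to the following sketch; the server records, the 0.8 load cutoff, and the fallback behavior are assumptions for illustration.

```python
# Hypothetical speed test server records; names, distances, and load
# figures are invented for illustration.
servers = [
    {"name": "Newark",       "distance_km": 15,  "load": 0.92},
    {"name": "Philadelphia", "distance_km": 130, "load": 0.35},
]

def select_server(servers, max_load=0.8):
    """Prefer the nearest server whose current load is acceptable; fall
    back to the nearest server overall if every server is busy."""
    eligible = [s for s in servers if s["load"] <= max_load]
    pool = eligible or servers
    return min(pool, key=lambda s: s["distance_km"])

print(select_server(servers)["name"])  # -> "Philadelphia" (Newark overloaded)
```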
- the distributed network of speed test servers is employed for executing automated call tests across various locations in the network.
- the work order management module (212) facilitates the coordination of these servers by distributing test assignments based on location, network conditions, and specific test requirements. It ensures that the appropriate speed test servers are selected for conducting each test based on factors such as proximity to the user equipment (UE), availability, and capacity.
- the distributed nature of the network allows for parallel testing across multiple regions, enhancing the scalability and efficiency of the testing process.
- the work order management module (212) may interface with a central control system that schedules and assigns tests to these distributed servers, allowing for load balancing and the efficient use of resources. It monitors the performance of the servers and ensures that the tests, such as call setup success rate (CSSR), handover success rate, and interference levels, are executed according to the specified parameters and timeframes. Once the test is executed, the speed test servers collect and transmit the results (e.g., KPIs such as call setup time, codec details, and network drop rates) back to the central database for analysis.
- the work order management module (212) may also optimize test execution and data collection across diverse geographical locations. For instance, in a country with varying levels of network infrastructure, the work order management module (212) might create work orders that test 4G networks in urban areas, 3G networks in suburban regions, and 2G networks in rural locations. This approach ensures comprehensive coverage and allows comparative network performance analysis across different technologies and geographical contexts.
- the communication module (214) may be configured to send push notifications to the selected user equipment (108), via the FCM server (114). For example, when a new work order is created for testing network performance, the communication module (214) may identify all eligible UEs/devices in that area and prepare personalized push notifications for each UE.
- These push notifications sent by the communication module (214) may contain crucial information (configuration parameters) for executing the automated call test.
- a typical push notification might include a JavaScript Object Notation (JSON) payload with multiple key elements.
- the script defining the test procedures could be a series of commands like "initiate_call", "measure_signal_strength", "record_call_quality", and "end_call".
- the scheduled date and time for execution might be specified as "2023-07-15 14:30:00 UTC", ensuring the test occurs at a predetermined time.
- the predetermined time may be tuned or updated by the network operator based on the requirements.
- the requirements may refer to various factors that may influence the scheduling or execution of the test.
- the factors may include network load, resource availability, performance optimization, and time zone coordination.
- the requirements may also include regulatory or compliance needs, incident resolution, or external factors like weather or business hours.
- the network operator may adjust the predetermined time (test time) to ensure it aligns with these conditions, ensuring that the test is performed under optimal circumstances and provides accurate, reliable results.
- the configuration parameters included in the push notification may be detailed. For example, the call duration might be set to 120 seconds, allowing for a comprehensive assessment of call stability.
- Specific network settings to be tested could include "force_LTE_only" to ensure the test focuses on 4G performance or "enable_VoLTE" to test advanced voice services.
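- combining the elements above (script commands, scheduled date and time, and configuration parameters), one plausible shape for the JSON payload is sketched below; the key names are assumptions, as the disclosure does not fix a schema.

```python
import json

# Illustrative payload only; the key names are assumptions based on the
# elements the disclosure lists (script, scheduled date and time, and
# configuration parameters).
payload = {
    "script": ["initiate_call", "measure_signal_strength",
               "record_call_quality", "end_call"],
    "scheduled_at": "2023-07-15 14:30:00 UTC",
    "config": {
        "call_duration_s": 120,
        "network_settings": ["force_LTE_only"],  # or "enable_VoLTE"
    },
}
print(json.dumps(payload, indent=2))
```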
- the communication module (214) may employ various security measures to ensure that the push notifications are delivered securely to the intended user equipment (108). For example, the communication module (214) might use end-to-end encryption for all push notifications. Additionally, the communication module (214) may implement a token-based authentication system, where each push notification includes a unique, time-limited token that the receiving device must validate before executing the test.
- the communication module (214) may use the device identifiers stored in the database (208) to tailor the delivery method for each user equipment. For instance, if the database indicates that a particular device frequently loses cellular connectivity, the communication module (214) might send the push notification via cellular data and Wi-Fi to ensure receipt. The communication module (214) may also employ a retry mechanism, attempting to resend notifications at increasing intervals if delivery confirmation is not received.
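- a minimal sketch of the retry mechanism follows; exponential backoff is one reading of “attempting to resend notifications at increasing intervals,” and the attempt count and base delay are assumed values.

```python
import time

def send_with_retry(send_fn, payload, max_attempts=5, base_delay_s=2.0):
    """Resend a push notification at increasing intervals until delivery
    is confirmed. send_fn is a hypothetical delivery callable returning
    True on confirmed delivery; the exponential schedule is an assumption."""
    for attempt in range(max_attempts):
        if send_fn(payload):
            return True
        time.sleep(base_delay_s * (2 ** attempt))  # 2 s, 4 s, 8 s, ...
    return False

# Example with a stub sender that fails twice before succeeding.
attempts = {"n": 0}
def stub_send(_payload):
    attempts["n"] += 1
    return attempts["n"] >= 3

print(send_with_retry(stub_send, {"id": 1}, base_delay_s=0.01))  # -> True
```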
- the communication module (214) may be configured to send high- priority push notifications. These push notifications might override device settings to ensure immediate delivery and prompt test execution, allowing for rapid assessment of network recovery.
- the data processing module (216) is configured to execute the scheduled automated call test on the selected user equipment (108) by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
- the automated call test comprises at least one of a short call test or a long call test.
- the short call test is designed to measure rapid connection and disconnection performance.
- the user equipment (108) might be instructed to establish a connection, maintain it for a brief period (e.g., 5-10 seconds), and then terminate the connection. This process may be repeated multiple times in quick succession. For instance, the test might involve making 50 short calls over a 5-minute period.
- the long call test is designed to evaluate sustained connection quality and stability.
- the user equipment (108) establishes a connection and maintains it for an extended period, such as 10 minutes or even longer. During this time, various network parameters are continuously monitored. For example, a long call test might involve establishing a voice call for 15 minutes while the system monitors call quality metrics, signal strength, and any instances of call dropping or quality degradation.
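- the two test shapes can be illustrated with the simulated sketch below; the connection outcomes and signal readings are random placeholders (a real implementation would drive the device's telephony stack), and the durations are shortened so the example runs quickly.

```python
import random
import time

def short_call_test(n_calls=50, hold_s=0.01):
    """Connect, hold briefly, and disconnect, repeated in quick succession.
    Call setup is simulated with a random outcome; a real test would hold
    each call for 5-10 seconds."""
    successes = 0
    for _ in range(n_calls):
        connected = random.random() > 0.01   # placeholder for real call setup
        time.sleep(hold_s)
        successes += connected
    return successes / n_calls

def long_call_test(duration_s=1.0, sample_interval_s=0.1):
    """Hold a single call and sample signal strength throughout (simulated;
    a real long call test would run for 10 minutes or more)."""
    samples, elapsed = [], 0.0
    while elapsed < duration_s:
        samples.append(random.gauss(-70, 2))  # placeholder dBm reading
        time.sleep(sample_interval_s)
        elapsed += sample_interval_s
    return samples

print(f"short-call success rate: {short_call_test():.1%}")
print(f"long-call samples collected: {len(long_call_test())}")
```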
- the data processing module (216) plays a crucial role in managing the execution of the automated call test. Upon receiving the work order, the data processing module (216) schedules the test based on the specified time in the work order. For example, if the work order indicates that the test should be performed at 2:00 AM local time to minimize impact on regular network traffic, the data processing module (216) will initiate the test at precisely that time.
- the execution of the automated call test follows the parameters defined in the push notification sent to the user equipment (108).
- This push notification contains all necessary information for the test, including the type of test to be performed (short call test, long call test, or both), the specific network parameters to be tested, and any other relevant configuration details.
- the automated call test may include short and long call tests to assess network performance comprehensively.
- the data processing module (216) might instruct the user equipment (108) to perform a series of 20 short calls, followed by a single long call (20 minutes), and conclude with another series of 20 short calls. This combination allows for evaluating rapid connection handling and sustained call stability within a single test session.
- the data processing module (216) may instruct the user equipment (108) to establish a connection, maintain it for 10 seconds, and then disconnect.
- the data processing module (216) might measure metrics like connection establishment time (e.g., 1.2 seconds) and successful disconnect rate (e.g., 99.9%).
- the data processing module (216) may direct the user equipment (108) to maintain a connection for an extended period, such as 10 minutes. During this time, the data processing module (216) might assess metrics like signal stability (e.g., a standard deviation of signal strength of ±2 dBm), packet loss rate (e.g., 0.1%), and jitter (e.g., 15 ms).
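- the three metrics quoted above can be computed from raw samples as in the following sketch; the sample values are invented for illustration, and the jitter estimate (mean absolute difference of consecutive latencies) is one common convention, not one mandated by the disclosure.

```python
import statistics

# Invented raw samples from a long call; only the computations matter here.
signal_dbm = [-66, -65, -67, -64, -66, -65]
latencies_ms = [40, 55, 38, 70, 42, 60]
packets_sent, packets_lost = 10_000, 10

signal_stability = statistics.stdev(signal_dbm)   # cf. the ±2 dBm example
packet_loss_rate = packets_lost / packets_sent    # cf. the 0.1% example
# One common jitter convention: mean absolute difference between
# consecutive latency samples.
jitter_ms = statistics.mean(
    abs(a - b) for a, b in zip(latencies_ms, latencies_ms[1:]))

print(f"stddev {signal_stability:.1f} dBm, loss {packet_loss_rate:.2%}, "
      f"jitter {jitter_ms:.1f} ms")
```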
- the data processing module (216) continuously collects and monitors the test data.
- This data includes a variety of key performance indicators (KPIs), such as call setup success rate, call drop rate, signal strength, voice quality metrics, and more.
- the specific KPIs collected may vary depending on the nature of the test and the network parameters being evaluated.
- the system (102) may gather valuable, real- world performance data without requiring manual intervention by executing these automated call tests as defined in the work order and push notification. This approach allows for consistent, scheduled testing across various network conditions and geographical locations, providing network operators with crucial insights for optimizing their service quality and user experience.
- the data processing module (216) may receive collected data comprising a plurality of key performance indicators (KPIs) from the automated call tests executed by the user equipment (108) in the background, as defined in the push notifications. For example, during a 5G network test in a busy financial district, the data processing module (216) might collect the following KPIs:
- Call Setup Success Rate (CSSR)
- Evolved Radio Access Bearer (E-RAB) drop rate
- the data processing module (216) may be further configured to initiate the automated call test by instructing the user equipment (108) to dial a toll-free number automatically. For example, the data processing module (216) might instruct the user equipment (108) to dial "1-800-TEST-NET" at 3:00 AM local time to conduct a network performance test during off-peak hours.
- the data processing module (216) may monitor the progress of the test in real-time, allowing for immediate detection of any issues or anomalies. For instance, if the data processing module (216) detects that the signal strength suddenly drops from -X dBm to -B dBm during a test call in a usually stable area, the data processing module (216) might flag this anomaly for immediate investigation.
- the data processing module (216) may automatically end the call and collect final test data from the user equipment (108).
- This final data might include metrics such as overall call quality score (e.g., 4.5 out of 5), average throughput (e.g., 150 Mbps for a 5G test), and total packets lost (e.g., 10 out of 10,000 packets).
- the data processing module (216) may release allocated network resources. For example, if the test utilized a dedicated network slice on a 5G network, the data processing module (216) would signal the network to release this slice, making it available for regular user traffic.
- the data processing module (216) may send a completion notification to the user equipment (108).
- This notification might include a summary of the test results, such as "Test Completed Successfully. Duration: 120 seconds. Average Signal Strength: -65 dBm.”
- the data processing module (216) may compare the collected KPIs against predefined thresholds to derive meaningful insights from the collected data.
- the predefined thresholds refer to specific values or ranges set which may be modified by the network operator.
- the predefined thresholds may be modified based on various factors, including but not limited to, changes in system performance, such as improvements or degradation in network speed, latency, or error rates, to better reflect the operational capabilities or limitations of the system. Modifications may also be made in response to evolving business requirements, regulatory updates, or compliance obligations. Additionally, threshold adjustments may be informed by the analysis of historical data, trends, or performance patterns. Further, scalability needs, incidents or fault analysis, and the introduction of new technologies or system upgrades may necessitate the modification of thresholds.
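- a minimal sketch of the threshold comparison follows; the threshold values and KPI field names are illustrative assumptions.

```python
# Operator-modifiable thresholds; these values and field names are
# illustrative assumptions.
thresholds = {
    "cssr_min": 0.98,             # minimum acceptable call setup success rate
    "erab_drop_max": 0.02,        # maximum acceptable E-RAB drop rate
    "handover_success_min": 0.95, # minimum acceptable handover success rate
}

def check_kpis(kpis, thresholds):
    """Return the names of KPIs that violate their thresholds."""
    violations = []
    if kpis["cssr"] < thresholds["cssr_min"]:
        violations.append("cssr")
    if kpis["erab_drop_rate"] > thresholds["erab_drop_max"]:
        violations.append("erab_drop_rate")
    if kpis["handover_success"] < thresholds["handover_success_min"]:
        violations.append("handover_success")
    return violations

print(check_kpis({"cssr": 0.97, "erab_drop_rate": 0.01,
                  "handover_success": 0.96}, thresholds))  # -> ['cssr']
```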
- the data processing module (216) may also identify trends and patterns in network performance based on the collected KPIs, offering valuable long-term insights for network optimization. For instance, the data processing module (216) might detect that data throughput in a business district consistently drops by 30% between 1 PM and 2 PM on weekdays, suggesting additional capacity is needed during lunch hours.
- the data processing module (216) may generate alerts, enabling prompt attention to potential problems. For example, if the handover failure rate between two specific cell towers exceeds 5% for three consecutive days, the data processing module (216) might generate a high-priority alert for the network operations team.
- the data processing module (216) may also generate performance reports based on the collected data, including trends of the KPIs over the specified time interval.
- a monthly report may be generated by the data processing module (216), including visualizations such as:
- a bar chart comparing handover success rates between different network technologies (e.g., 4G to 4G, 4G to 5G, 5G to 4G)
- the data processing module (216) may aggregate the collected data from multiple user equipments (108), providing a comprehensive view of network performance across various devices and locations. For example, in a metropolitan area, the data processing module (216) might collect and aggregate data from 10,000 different user equipments over a month, including smartphones, tablets, and IoT devices, spread across residential, commercial, and industrial zones.
- the data processing module (216) may normalize data from diverse device types, accounting for differences in hardware capabilities or operating systems. For instance, when comparing signal strength measurements, the data processing module (216) might apply a calibration factor to adjust for known variations in antenna sensitivity between different smartphone models. A high-end smartphone reporting -85 dBm might be normalized to -80 dBm to align with measurements from mid-range devices in the same location.
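- the normalization example above reduces to a per-model offset table, as in this sketch; the model names and offset values are assumptions.

```python
# Hypothetical per-model calibration offsets in dB; the model names and
# values are assumptions for illustration.
CALIBRATION_OFFSET_DB = {
    "high_end_model": +5,    # antenna under-reports, so shift readings up
    "mid_range_model": 0,    # reference model, no adjustment
}

def normalize_signal(raw_dbm: float, model: str) -> float:
    """Apply a per-model calibration factor so that signal strength
    measurements from different devices are directly comparable."""
    return raw_dbm + CALIBRATION_OFFSET_DB.get(model, 0)

print(normalize_signal(-85, "high_end_model"))  # -> -80, matching mid-range
```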
- the data processing module (216) may categorize performance metrics based on network technologies, facilitating technology -specific analysis and optimization efforts. For example, the data processing module (216) might separate data into categories such as:
- 4G LTE metrics: average download speed of 50 Mbps, latency of 30 ms
- 5G Sub-6 GHz metrics: average download speed of 300 Mbps, latency of 10 ms
- 5G mmWave metrics: average download speed of 1.5 Gbps, latency of 5 ms
- the data processing module (216) may generate coverage maps, offering visual representations of network performance across geographical areas. For instance, the data processing module (216) might create a heat map of a city where (a binning sketch follows this list):
- Red areas indicate 5G mm Wave coverage with speeds > 1 Gbps
- Orange areas show 5G Sub-6 coverage with speeds between 100-500 Mbps
- Yellow areas represent 4G LTE coverage with speeds between 10-50 Mbps
- Green areas denote 3G coverage with speeds < 10 Mbps
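- a minimal binning sketch for the heat-map colors above; the listed tiers leave gaps (e.g., 50-100 Mbps and 500-1000 Mbps), so the boundaries chosen here are assumptions that close those gaps.

```python
def coverage_color(speed_mbps: float) -> str:
    """Bin a measured speed into the heat-map colors listed above."""
    if speed_mbps > 1000:
        return "red"        # 5G mmWave, > 1 Gbps
    if speed_mbps >= 100:
        return "orange"     # 5G Sub-6, 100-500 Mbps
    if speed_mbps >= 10:
        return "yellow"     # 4G LTE, 10-50 Mbps
    return "green"          # 3G, < 10 Mbps

print(coverage_color(250))  # -> "orange"
```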
- the data processing module (216) may analyze the collected data to support ongoing network improvement efforts to identify specific network performance issues. For example, the data processing module (216) might detect that in a particular suburban area, 5G handover success rate drops below 90% during peak hours (6 PM - 8 PM), while maintaining over 99% success rate at other times.
- the data processing module (216) may generate recommendations for network optimization, providing actionable insights for network operators.
- the data processing module (216) might recommend adjusting the antenna tilt on Cell Tower A to improve coverage overlap with neighboring towers.
- the data processing module (216) may also track the impact of implemented optimizations over time, allowing for continuous refinement of network performance strategies. For instance, after implementing the above recommendations, the data processing module (216) might report:
- Week 1: 5G handover success rate improved to 93% during peak hours
- Week 2: further improvement to a 95% success rate
- Week 3: stability achieved at a 97% success rate
- the database management module (218) may be responsible for recording the received collected data from the automated call tests. For example, when the user equipment (108) completes a series of tests, the database management module (218) may immediately receive and store the data, tagging it with relevant metadata such as timestamp, location coordinates, and device type.
- the database management module (218) may ensure that all test data is properly stored, organized, and accessible for future analysis and reporting. For instance, the database management module (218) might organize data into hierarchical structures by region (e.g., Midwest), then by city (e.g., Chicago), then by network technology (e.g., 5G), and finally by specific KPIs (e.g., download speed, latency).
- the database management module (218) may be configured to store historical test data, enabling long-term trend analysis and performance tracking.
- the database management module (218) may support efficient retrieval of stored historical test data, facilitating comprehensive analysis and reporting capabilities. For instance, if an analyst needs to compare 4G LTE performance in XYZ place over the past three summers, the database management module (218) could quickly retrieve and compile this specific dataset.
- the user interface module (220) provides a graphical interface for interacting with the system.
- the user interface module (220) might offer a web-based dashboard accessible to network operators and administrators.
- the interface provided by the user interface module (220) may allow users to schedule automated call tests with customizable parameters, catering to specific testing requirements or network conditions.
- the network operator may use the interface to schedule a series of high-priority tests in an area where a music festival is planned, setting parameters like test frequency (e.g., every 30 minutes), duration (e.g., throughout the 3-day event), and specific KPIs to monitor (e.g., focusing on data throughput and latency).
- Users may view the results of completed tests through the user interface module (220), providing immediate access to performance data. For example, after a day of testing at the music festival, the user interface module (220) might display a summary showing average download speeds of X Mbps, with peak speeds reaching Z Mbps during off-peak hours.
- the interface provided by the user interface module (220) may also offer access to historical performance data, enabling trend analysis and long-term performance tracking. For instance, users might be able to generate graphs showing how average download speeds in the festival area have improved year-over-year, from A Mbps three years ago to B Mbps in the current year.
- users may be able to configure alerts for specific performance thresholds through the user interface module (220), ensuring prompt notification of critical issues. For example, a user might set an alert to be triggered if the call drop rate exceeds P% in any given hour, or if the average data throughput falls below Q Mbps in a 5G coverage area.
- the automated call tests executed by the user equipment (108) may operate through a background process that functions independently of active user applications. For instance, even while a user is actively browsing the web or using a navigation app, the user equipment (108) may conduct a short call test without any noticeable impact on the activities of the user.
- the background execution may allow for more frequent and consistent testing, providing a more accurate and comprehensive picture of network performance. For instance, the system might be able to conduct brief network performance checks every hour, 24 hours a day, across thousands of devices in a city, resulting in a highly granular and real-time view of network conditions.
- This approach may ensure that the tests can be conducted without disrupting normal device usage or requiring active user participation. For example, a long call test might be scheduled for 3 AM local time, when the user is likely asleep, and the UE is idle, ensuring minimal interference with the user's normal usage patterns.
- FIG. 3 illustrates an exemplary system architecture (300) for scheduling the automated call test on the user equipment (108), in accordance with an embodiment of the present disclosure.
- the system architecture (300) comprises a web portal (302), a load balancer (304), a plurality of web servers (WS) (112), an application server (308), the database (208), a reporting server (312), the FCM server (114), and the user equipment (108).
- the plurality of web servers (WS) comprises WS1 (112-1), WS2 (112-2), and so on.
- the work order is created and scheduled from the web portal (302) by the work order management module (212).
- the scheduled work order is assigned from the web portal (302) to a particular user equipment (108) based on the device identifier (ID) stored in the database (208).
- the load balancer (304) distributes incoming requests across the multiple web servers (112) to ensure optimal resource utilization and system performance.
- the application server (308) may host the core functionality of the system (102), including the work order management module (212), communication module (214), data processing module (216), and database management module (218).
- the application server (308) processes the work orders, manages the execution of automated call tests, and handles data processing and storage.
- the reporting server (312) generates reports based on the collected data and provides analytical insights.
- the reporting server (312) interfaces with the database (208) to retrieve historical data and generate performance trends.
- a call test work order is executed in the background without user intervention.
- the push notification includes scripts, execution data, and scheduled time for the call test.
- a script of the call test is sent to the user equipment (108) via the push notification.
- the script of the call test is run at a scheduled time.
- the automated call test is performed on the user equipment (108) for testing the plurality of key performance indicators (KPIs).
- KPIs comprise a call setup success rate (CSSR), an evolved radio access bearer (E- RAB) drop rate, an interference level, a handover success rate, and codec details for specified geographical areas.
- the automated call goes to a server toll-free number.
- the call is connected and started automatically.
- the call test is started and ended automatically without user intervention.
- the call runs for a defined time interval (for example, 2 minutes), and the KPIs are collected.
- the data corresponding to KPIs is collected by the data processing module (216) and recorded to the database (208) by the database management module (218). This collected data is used for network optimization and improvement of network coverage.
- the KPIs comprise success and failure rate of handover, area traffic capacity, and other relevant metrics.
- the system (102) supports scheduling multiple work orders for different user equipments (108) over extended periods (e.g., 1 month or more).
- the work order management module (212) can assign and distribute various call instructions (e.g., short call and long call) to multiple user equipments (108) across different geographical areas.
- a plurality of call instructions is distributed to a plurality of user equipments (108).
- the call instructions are assigned for a specific time interval (e.g., 1 month or more).
- the user equipments (108) are registered in a table format in the database (208).
- the work order is communicated to the user equipment (108) through a push notification via the FCM server (114).
- the work order is run at a specific time.
- the data corresponding to the work order is recorded to the database (208).
- multiple work orders are sent to the plurality of user equipments (108).
- the work orders are sent with scripts, execution data, and time.
- the call test is assigned for a specific time period (e.g., 1 month) to the plurality of user equipments (108). The call test is run without user intervention. After completion of the test, all data is collected.
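A minimal sketch of laying out such a multi-week assignment is shown below, assuming a simple alternation of short and long call tests across registered devices; the helper and its schedule mix are hypothetical.

```python
# Illustrative layout of one month of daily work orders for registered
# devices, alternating short and long call tests; the mix is hypothetical.
from datetime import date, timedelta

def monthly_plan(devices, start, days=30):
    plan = []
    for d in range(days):
        day = start + timedelta(days=d)
        for i, dev in enumerate(devices):
            # Alternate the test type per device per day.
            test = "short_call" if (d + i) % 2 == 0 else "long_call"
            plan.append({"device": dev, "date": day.isoformat(), "test": test})
    return plan

orders = monthly_plan(["UE-1", "UE-2", "UE-3"], date(2025, 4, 1))
print(len(orders), "work orders over 30 days")   # 90
```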
- FIG. 4 illustrates an exemplary flow diagram (400) for scheduling the automated call test on the user equipment (108), in accordance with an embodiment of the present disclosure.
- Step 402 includes creating, by the work order management module (212), the work order for an automated call test in a coverage platform (CP).
- the communication module (214) sends the push notification, via the FCM server (114), to one of the plurality of user equipments (108) selected based on the device identifier (ID) stored in the database (208).
- the push notification comprises a script defining procedures for the automated call test and a scheduled date and time for executing the automated call test.
- the script defining procedures refers to a set of predefined instructions or a program that specifies how the automated call test should be conducted. These procedures may include the sequence of actions, test parameters, and conditions that guide the execution of the test on the user’s device.
- the automated call test script may include the following structure:
- Test Name: Automated Call Test for Network Performance (Short Call)
- Step 1: Verify device readiness (ensure the device is powered on and connected to the network).
- Step 2: Check network conditions (ensure the device is connected to the appropriate network, 4G/5G).
- Step 3: Fetch the device identifier (ID) from the database (e.g., IMEI, MSISDN, or MAC address).
- Step 4: Initiate a call to the pre-designated toll-free number (e.g., 12345) using the device's mobile network.
- Step 5: Verify the connection attempt and monitor the call setup process. Track the time taken for the connection to be established.
- Step 6: Record the initial call parameters, such as the call setup time, the initial codec used, and the signal strength at the time of the call.
- Step 7: Monitor the call during the test: measure the call setup success rate (CSSR); record the evolved radio access bearer (E-RAB) drop rate during the call; monitor and log the interference level during the call, ensuring it falls within acceptable thresholds; and track the handover success rate (whether the call successfully transfers between network cells).
- Step 8: Continuously monitor device behavior, including CPU, battery, and signal strength during the test.
- Step 9: Automatically monitor the call duration for a predefined time (e.g., 30 seconds for a short call).
- Step 10: If the test is a long call, allow the call to continue for a longer time (e.g., 5 minutes) and ensure metrics are captured throughout.
- Step 11: Automatically end the call once the predefined duration or condition is met (e.g., completion of the call, or no further KPIs are needed).
- Step 12: Disconnect the call and ensure proper logging of the test results.
- Step 13: Collect and store the following KPIs during the test: the call setup success rate (CSSR), the evolved radio access bearer (E-RAB) drop rate, the interference level, the handover success rate, and the codec used during the call.
- Step 14: Store all test results in a local cache on the device until the test is completed.
- Step 15: Once the call is terminated, synchronize the collected data with the database: upload the KPIs to the central database and mark the test as complete in the system logs.
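The steps above can be read as the following condensed Python sketch of the on-device routine; every device-facing helper (device_ready, place_call, read_kpis, and so on) is a hypothetical stand-in for the platform's telephony and radio interfaces.

```python
# Condensed, illustrative form of the scripted steps above. All device
# APIs are hypothetical stubs so the sketch runs end to end.
import time

def device_ready():          return True                       # Step 1
def network_ok():            return True                       # Step 2
def fetch_device_id():       return "IMEI-356938035643809"     # Step 3
def place_call(number):      return {"number": number}         # Step 4
def wait_for_connection(c):  return 850    # Step 5: setup time in ms
def read_kpis(c):            return {"cssr": 1.0, "codec": "EVS"}
def end_call(c):             pass                              # Steps 11-12
def cache_locally(r):        pass                              # Step 14
def sync_to_database(r):     print("synced", len(r["samples"]), "samples")

def run_call_test(dial_number="12345", duration_s=30, long_call=False):
    if long_call:
        duration_s = 300                   # Step 10: e.g., 5 minutes
    if not device_ready() or not network_ok():
        return {"status": "aborted"}
    call = place_call(dial_number)
    results = {"device_id": fetch_device_id(),
               "setup_time_ms": wait_for_connection(call),     # Step 6
               "samples": []}
    t0 = time.monotonic()
    while time.monotonic() - t0 < duration_s:  # Steps 7-9: monitor the call
        results["samples"].append(read_kpis(call))
        time.sleep(1)
    end_call(call)
    cache_locally(results)                 # Steps 13-14: store KPIs locally
    sync_to_database(results)              # Step 15: upload and mark complete
    return {"status": "complete"}

run_call_test(duration_s=3)                # short demonstration run
```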
- the user equipment (108) receives the push notification from the FCM server (114).
- the user equipment (108) executes the automated call test in the background at the specified time without user intervention based on the script, scheduled date and time, and configuration parameters received in the push notification.
- a data processing module (216) receives collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment (108).
- the plurality of KPIs collected comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity.
- the database management module (218) records the received collected data of the automated call test to the database (208). This recorded data can be used for various purposes such as analyzing network performance, identifying trends, and generating performance reports.
- the automated call test may comprise both a short call test designed to measure rapid connection and disconnection performance, and a long call test designed to evaluate sustained connection quality and stability.
- the user equipment (108) executes both types of tests as specified in the work order.
- FIG. 5 illustrates an exemplary flow diagram of a method (500) for scheduling the automated call test on the user equipment (108) based on the work order, in accordance with embodiments of the present disclosure.
- the method (500) includes creating, by the work order management module (212), the work order for the automated call test in a coverage platform. This step initiates the process of scheduling and executing automated call tests across a network.
- the work order management module (212) is configured to handle the creation, assignment, and tracking of work orders related to network testing and optimization.
- the work order management module (212) interfaces with the coverage platform.
- the coverage platform provides a comprehensive view of network coverage and performance across different geographical areas.
- Creating the work order involves defining the parameters of the automated call test, including the type of test to be performed, the target areas, and the specific metrics to be measured.
- the work order management module (212) may create multiple work orders for different types of automated call tests, including voice calls and data sessions. These work orders can be prioritized based on network performance urgency, allowing the system to focus on critical areas or issues first.
- the work order management module (212) also manages a distributed network of speed test servers for conducting the automated call tests, ensuring that tests can be performed efficiently across various locations.
- the work order management module may perform the following steps:
- the work order management module (212) receives a request or automatically triggers the creation of the work order for the automated call test. This work order serves as the formal request to initiate network testing, including scheduling and executing the call tests across the network.
- the work order management module (212) defines key parameters for the automated call test within the work order. These parameters may include:
- Test Type: the type of test to be performed, such as a voice call test, a data session, or another network performance test.
- Target Areas: the geographical regions or specific network coverage zones where the test should be performed, based on areas of interest or known performance issues.
- Metrics to Be Measured: the performance metrics the automated call test will measure, such as call success rate, call drop rate, latency, throughput, and signal strength.
- Assignment and Tracking: once the parameters are set, the work order is assigned within the system. The work order management module (212) tracks the progress of the work order, ensuring that each test is executed as planned. This may involve setting deadlines or priorities, particularly in cases where there are critical network performance issues.
- the work order management module (212) interfaces with the coverage platform to ensure that the automated call tests align with the platform’s capabilities.
- Prioritization of Work Orders: if there are multiple work orders for different types of automated call tests, the work order management module (212) may prioritize them based on the urgency of the network performance issues. High-priority work orders are executed first, addressing critical issues that could impact user experience or service reliability.
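As an illustration of the creation and prioritization just described, the following sketch models a work order as a small data structure ordered by urgency; the fields and the priority scheme are assumptions for illustration.

```python
# Illustrative work-order structure ordered by urgency; the fields mirror
# the parameters above and the priority scheme is an assumption.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkOrder:
    priority: int                                  # lower value = more urgent
    order_id: str = field(compare=False)
    test_type: str = field(compare=False)          # "voice_call", "data_session"
    target_area: str = field(compare=False)
    metrics: list = field(compare=False, default_factory=list)

queue = []
heapq.heappush(queue, WorkOrder(2, "WO-1", "voice_call", "Area-North",
                                ["call_success_rate", "latency"]))
heapq.heappush(queue, WorkOrder(1, "WO-2", "voice_call", "Area-South",
                                ["call_drop_rate"]))   # critical area

print(heapq.heappop(queue).order_id)   # WO-2: highest urgency executes first
```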
- the method (500) includes sending, by the communication module (214), via the FCM server (114), a push notification to one of a plurality of user equipments selected based on a device identifier (ID) stored in the database (208).
- the device identifier (ID) may include several types of unique identifiers that are used to distinguish a specific user equipment (UE) within the network.
- these device identifiers allow the system to accurately target devices for tests or other network management tasks.
- the at least one device identifier may be an IMEI (International Mobile Equipment Identity), an MSISDN (Mobile Station International Subscriber Directory Number), a MAC Address (Media Access Control Address), a UUID (universally unique identifier), a device serial number, or a subscription permanent identifier (SUPI).
- the communication module (214) is responsible for facilitating the transmission of information between the system and the user equipment.
- the communication module (214) leverages the FCM server (114), a reliable and efficient messaging platform, to send push notifications to the selected user equipment.
- the push notification contains information for executing the automated call test.
- the push notification comprises a script defining procedures for the automated call test and a scheduled date and time for executing the test.
- the script defining procedures is a comprehensive set of instructions that allows the user equipment to execute the test autonomously without requiring user intervention.
- the selection of the user equipment (108) is based on the device identifier stored in a database, ensuring that the appropriate devices are chosen for specific tests. This selection process may involve considerations such as the device's location, capabilities, and previous test history.
- the work order management module (212) assigns the work order to the user equipment. Furthermore, the work order management module (212) may schedule multiple work orders for automated call tests to be executed by the plurality of user equipments over a specified time interval of at least one month. This long-term scheduling capability allows for comprehensive and ongoing network performance monitoring.
- the work order management module (212) also implements a distribution algorithm that ensures coverage across different geographical areas and time periods, providing a balanced and representative sample of network performance data. The distribution algorithm allocates and schedules automated call tests across different user devices, geographical areas, and time periods to ensure comprehensive network testing coverage.
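The distribution algorithm could, for example, rotate tests across devices within each area and spread them over time slots, as in the illustrative sketch below; the round-robin policy shown is one possible choice, not the algorithm claimed by the disclosure.

```python
# One possible allocation policy: rotate devices within each area and
# spread tests across time slots. Purely illustrative.
from itertools import cycle

def distribute_tests(devices_by_area, hours, tests_per_area):
    """devices_by_area maps area -> [device_id, ...]; returns a schedule."""
    schedule = []
    for area, devices in devices_by_area.items():
        dev_cycle, hour_cycle = cycle(devices), cycle(hours)
        for _ in range(tests_per_area):
            schedule.append({
                "area": area,
                "device": next(dev_cycle),   # round-robin within the area
                "hour": next(hour_cycle),    # spread across the day
            })
    return schedule

plan = distribute_tests(
    {"NY": ["UE-1", "UE-2"], "SF": ["UE-3"]},
    hours=[3, 9, 15, 21],                    # off-peak and peak samples
    tests_per_area=4,
)
print(len(plan), "tests scheduled")          # 8
```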
- the sending of the push notification by the communication module (214) may include the following steps:
- the communication module (214) first identifies a specific user equipment (108) based on a device identifier (ID).
- the device identifier could be a unique identifier like a phone number, MAC address, IMEI number, or another type of device-specific ID that links the user equipment to a particular record in the system.
- the device ID is retrieved from the database (208), which stores device identifiers along with associated user or device information.
- the communication module queries the database (208) to locate the device ID of the user equipment (108) that needs to receive the push notification. This could involve searching for a specific user or device based on various factors like location, network conditions, or user preferences.
- the communication module (214) prepares the push notification.
- the push notification may contain various types of information, such as network alerts, test results, or system messages, depending on the purpose of the communication.
- the communication module (214) then sends the push notification to the selected user equipment (108). This is often done through a push notification service (such as Apple Push Notification Service (APNS) for iOS or Firebase Cloud Messaging (FCM) for Android).
- the communication module interfaces with these services to deliver the notification to the targeted device.
- the push notification is delivered to the user equipment (108) via the push notification service, which ensures that the message is sent efficiently to the right device.
- the user equipment (108) receives the notification, which is typically displayed on the device screen, alerting the user to the event or information being communicated.
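For the FCM path specifically, a hedged sketch using the firebase-admin Python SDK is shown below; the token lookup (get_fcm_token), the payload fields, and the service-account file name are assumptions, while messaging.Message and messaging.send are the SDK's documented entry points.

```python
# Hedged sketch of the FCM send path using the firebase-admin Python SDK.
# get_fcm_token and the payload fields are assumptions; Message and send
# are the SDK's documented APIs. FCM data values must be strings.
import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"))  # assumed key file

def get_fcm_token(device_id):
    # Placeholder for the database (208) lookup keyed by IMEI/MSISDN/etc.
    return "registration-token-for-" + device_id

def send_test_notification(device_id):
    message = messaging.Message(
        data={
            "work_order_id": "WO-2025-00042",
            "script": "call_test_v1",
            "scheduled_time": "2025-03-21T03:00:00Z",
        },
        token=get_fcm_token(device_id),
    )
    return messaging.send(message)   # returns the FCM message ID

send_test_notification("IMEI-356938035643809")
```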
- the scheduled automated call tests are planned for two devices in different locations to assess network performance.
- the network operator retrieves the unique device identifiers (IDs) for each user equipment (UE) from the database. For instance, device 1, located in New York, is set for a voice call test at 10:00 AM UTC, with parameters to measure call success rate, call drop rate, and latency. Similarly, device 2, located in San Francisco, is scheduled for a test at 10:15 AM UTC.
- the communication module (214) sends push notifications to each device to inform the users about the upcoming test.
- the tests are executed on both devices, with the communication module coordinating the execution across a distributed network of speed test servers.
- the results are collected: device 1 in New York shows a high call success rate but detects a slight increase in call drop rate, while device 2 in San Francisco performs well with minimal issues.
- the UE is selected based on the work order that specifies the parameters for the automated call test.
- the work order management module (212) creates a work order that includes details such as the test type, location, and required metrics to be measured. Once the work order is created, the module identifies which UEs are to be tested by referencing the work order's target criteria, such as the geographic area or network conditions. For instance, if the work order specifies a test in New York, the system selects Device 1 located in that area. Similarly, if the work order for a test in San Francisco is created, Device 2 is chosen. After selecting the appropriate UEs, the communication module (214) sends push notifications to the devices to inform them of the scheduled tests.
- the selected devices then undergo the automated call tests as per the specifications in the work orders, with the results later used to evaluate network performance and make necessary optimizations.
- the work order is assigned through the web portal to users whose devices are registered in the database.
- the web portal serves as the interface through which the network operator or system administrator may create and assign the work order.
- the portal allows the operator to select specific devices from the database, where the UE is registered with its unique device identifier (ID).
- the method (500) includes executing, by the data processing module (216), the scheduled automated call test on the selected user equipment (108). This execution involves initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
- the automated call test comprises at least one of a short call test or a long call test.
- the execution of the automated call test is a critical step in evaluating network performance.
- the data processing module (216) manages this process, ensuring that the test is conducted according to the parameters set in the work order and the push notification.
- the timing of the test initiation is determined by the work order. For example, if the work order specifies that the test should be conducted at 3:00 AM to capture network performance during off-peak hours, the data processing module (216) will trigger the test at exactly that time. This precise scheduling allows for consistent and comparable results across multiple tests and locations.
- the push notification sent to the user equipment (108) contains detailed instructions for performing the automated call test. These instructions dictate whether the test will be a short call test, a long call test, or a combination of both.
- the user equipment (108) may be instructed to perform a series of brief connections. For instance, the test might involve making 30 calls, each lasting 10 seconds, over a 5-minute period. This rapid succession of short calls helps assess the network's ability to handle frequent connection requests and quick disconnections, simulating scenarios like busy call centers or areas with high call turnover.
- during the long call test, the user equipment (108) establishes and maintains a single, extended connection. As an example, this could involve initiating a call and keeping it active for 30 minutes.
- the system continuously monitors various performance metrics, providing insight into the network's ability to maintain stable, high-quality connections over longer durations.
- This type of test is particularly useful for evaluating network performance for users who engage in lengthy calls, such as conference calls or long-distance conversations.
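The two profiles can be contrasted in a short sketch, assuming a hypothetical call_once helper that dials, holds the call, and hangs up; the counts and durations mirror the examples above.

```python
# Contrast of the two test profiles; call_once is a hypothetical helper
# that dials, holds the call for hold_s seconds, and hangs up.
import time

def call_once(hold_s):
    # Placeholder: dial, hold, hang up, and report per-call KPIs.
    return {"held_s": hold_s, "dropped": False}

def short_call_test(calls=30, hold_s=10, window_s=300):
    """Many brief connections, e.g., 30 calls of 10 s within 5 minutes."""
    gap = max(0.0, window_s / calls - hold_s)  # idle time between call starts
    results = []
    for _ in range(calls):
        results.append(call_once(hold_s))
        time.sleep(gap)
    return results

def long_call_test(hold_s=1800):
    """One sustained connection, e.g., a single 30-minute call."""
    return [call_once(hold_s)]

print(len(short_call_test()), "short calls completed")
```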
- the specific KPIs gathered may vary depending on the nature of the test and the particular aspects of network performance being evaluated.
- the method enables the collection of valuable, real-world performance data without the need for manual intervention or disruption to regular network usage.
- This automated, scheduled approach to testing allows for consistent evaluation of network performance across various conditions, times, and locations, providing network operators with essential insights for service optimization and quality improvement.
- the method (500) further comprises several additional steps for receiving, by the data processing module (216), collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment in the background as defined in the push notification.
- the data processing module (216) is a critical component that handles the collection, processing, and analysis of the test results.
- the automated call test is executed by the user equipment through a background process that operates independently of active user applications, ensuring that the test does not interfere with the user's normal device usage.
- the plurality of KPIs collected by the data processing module (216) provides a comprehensive view of network performance.
- the data processing module (216) initiates the automated call test by instructing the user equipment to automatically dial a toll-free number, monitors the progress of the test in real-time, and automatically ends the test upon completion. This process involves collecting final test data from the user equipment, releasing network resources allocated for the test, and sending a completion notification to the user equipment (108).
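Read as pseudocode, that lifecycle might look like the following sketch; the UE interaction is simulated with canned states here, whereas a real implementation would drive the live signalling path.

```python
# Illustrative server-side lifecycle for one test, mirroring the sequence
# above. The UE interaction is simulated with canned states.
def run_test_lifecycle(ue_id, toll_free="12345"):
    log = [f"{ue_id}: instructed to dial {toll_free}"]   # initiate the test
    for status in ("ringing", "active", "ending"):
        log.append(f"{ue_id}: {status}")                 # real-time monitoring
    kpis = {"cssr": 0.99, "erab_drop_rate": 0.01}        # collect final data
    log.append(f"{ue_id}: network resources released")
    log.append(f"{ue_id}: completion notification sent")
    return kpis, log

kpis, log = run_test_lifecycle("UE-108")
print("\n".join(log))
```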
- the method (500) further comprises several additional steps for recording, by the database management module (218), the received collected data of the automated call test.
- the database management module (218) is responsible for storing, organizing, and managing the vast amount of data generated by the automated call tests. It ensures that the collected data is properly recorded with the central database, making it available for further analysis and reporting.
- the recording process involves storing the raw data and organizing it in a way that facilitates easy retrieval and analysis.
- the database management module (218) stores historical test data and supports the retrieval of this stored data for analysis and reporting purposes. This historical data is invaluable for identifying long-term trends and patterns in network performance.
- the method (500) further comprises several additional steps and features that enhance its functionality and value.
- the automated call test includes both a short call test and a long call test.
- the short call test is designed to measure rapid connection and disconnection performance, providing insights into the network's ability to establish and terminate calls quickly.
- the long call test is designed to evaluate sustained connection quality and stability, offering a view of the network's performance over extended periods.
- the user equipment (108) executes both these tests as specified in the work order, providing a comprehensive assessment of network performance under different scenarios.
- the data processing module (216) performs several critical functions beyond data collection.
- the data processing module (216) compares the collected KPIs against predefined thresholds, allowing for quick identification of performance issues.
- the data processing module (216) also identifies trends and patterns in network performance based on the collected KPIs, providing valuable insights into the network's behavior over time. Based on this comparison and identified trends, the data processing module (216) generates alerts for detected anomalies or persistent issues, enabling proactive network management.
- the data processing module (216) generates performance reports based on the collected data, including trends of the KPIs over the specified time interval. These reports are crucial for understanding network performance, identifying areas for improvement, and making informed decisions about network optimization.
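A minimal sketch of the threshold comparison and alert generation described above follows; the KPI names and threshold values are illustrative assumptions.

```python
# Sketch of KPI threshold comparison with alert generation; the KPI names
# and limits are illustrative, not values from the disclosure.
THRESHOLDS = {
    "cssr": ("min", 0.98),             # alert if setup success dips below 98%
    "erab_drop_rate": ("max", 0.02),   # alert if drops exceed 2%
}

def check_kpis(sample):
    alerts = []
    for kpi, (kind, limit) in THRESHOLDS.items():
        value = sample.get(kpi)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            alerts.append(f"{kpi}={value} breaches {kind} threshold {limit}")
    return alerts

print(check_kpis({"cssr": 0.95, "erab_drop_rate": 0.01}))
# ['cssr=0.95 breaches min threshold 0.98']
```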
- the system (102) also includes the user interface module (220) that provides a user-friendly interface for various functions.
- This interface allows scheduling automated call tests with customizable parameters, giving network administrators flexibility in designing tests to meet specific needs.
- the user interface module (220) also enables viewing results of completed tests, providing quick access to performance data.
- the interface supports accessing historical performance data, allowing for long-term trend analysis. Additionally, the user interface module (220) allows for configuring alerts for specific performance thresholds, enabling proactive monitoring of critical network parameters.
- the data processing module (216) further enhances the value of the collected data through several advanced processing techniques.
- the data processing module (216) aggregates the collected data from multiple user equipments, providing a comprehensive view of network performance across various devices and locations.
- the data processing module (216) normalizes data from diverse device types, ensuring that data from different sources can be meaningfully compared and analyzed.
- the data processing module (216) categorizes performance metrics based on network technologies, allowing for technology-specific performance analysis. Based on this aggregated and categorized data, the module generates coverage maps, providing a visual representation of network performance across geographical areas.
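One simple way to aggregate per-device reports into per-area figures for a coverage map is sketched below; the area labels, technology tags, and averaging choice are illustrative.

```python
# Aggregating per-device KPI reports into per-area, per-technology means,
# one simple input for a coverage map; the sample data is illustrative.
from collections import defaultdict
from statistics import mean

reports = [
    {"area": "NY-midtown", "tech": "5G", "cssr": 0.99},
    {"area": "NY-midtown", "tech": "4G", "cssr": 0.96},
    {"area": "SF-soma",    "tech": "5G", "cssr": 0.98},
]

buckets = defaultdict(list)
for r in reports:
    buckets[(r["area"], r["tech"])].append(r["cssr"])  # group by area and tech

for (area, tech), values in sorted(buckets.items()):
    print(f"{area} [{tech}]: mean CSSR {mean(values):.2%}")
```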
- the user equipment (108) for facilitating an automated call test is described.
- the UE is configured to receive from a communication module (214) of a system (102), a push notification comprising instructions for an automated call test.
- the UE (108) is selected based on a device identifier (ID) stored in the database (208), ensuring that the appropriate device is chosen for the specific test.
- the UE (108) executes the automated call test in the background as defined in the push notification. This background execution ensures that the test does not interfere with the user's normal device usage.
- the UE collects data comprising a plurality of key performance indicators (KPIs) from the executed automated call test. These KPIs provide a comprehensive view of network performance from the user's perspective.
- after completing the test, the UE (108) transmits the collected data to the data processing module (216) of the system (102). Finally, the UE receives a completion notification from the system (102) upon successful recording of the collected data by the database management module (218) of the system (102). This notification confirms that the test data has been successfully recorded and is ready for analysis.
- FIG. 6 illustrates an exemplary computer system (600) in which or with which the embodiments of the present disclosure may be implemented.
- the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), a communication port(s) (660), and a processor (670).
- the processor (670) may include various modules associated with embodiments of the present disclosure.
- the communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports.
- the communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (600) connects.
- the main memory (630) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art.
- the read-only memory (640) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (670).
- the mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions.
- Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
- the bus (620) may communicatively couple the processor(s) (670) with the other memory, storage, and communication blocks.
- the bus (620) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system (600).
- operator and administrative interfaces, e.g., a display, keyboard, and cursor control device, may also be coupled to the bus (620) to support direct operator interaction with the computer system (600).
- Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (660).
- Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.
- the computer system (600) includes a non-transitory computer-readable storage medium storing computer-executable instructions. When executed by one or more processors, the instructions cause the one or more processors to perform a method for scheduling an automated call test on a User Equipment (UE) (108) based on a work order.
- the method comprises creating a work order for an automated call test in a coverage platform using a work order management module (212).
- a communication module (214) sends a push notification via a Firebase Cloud Messaging (FCM) server (114) to a selected UE (108) based on a device identifier (ID) stored in a database (208).
- a data processing module (216) receives collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the UE (108) in the background as defined in the push notification. Finally, a database management module (218) records the received collected data of the automated call test. This method enables efficient scheduling and execution of automated call tests on user devices, facilitating the collection of network performance data without manual intervention.
- the present disclosure provides technical advancement related to automated network performance testing and optimization. This advancement addresses the limitations of existing solutions by introducing a comprehensive, user equipment-based system for real-time network performance monitoring and analysis.
- the disclosure involves a sophisticated work order management and execution system that leverages cloud messaging and background processes on user devices, offering significant improvements in the scale, frequency, and geographic coverage of network testing through automated call tests that run independently of user interaction.
- the disclosed invention enhances the ability to collect real-world performance data across diverse network conditions and locations, resulting in more accurate and actionable insights for network optimization.
- the system's capability to schedule, execute, and analyze both short and long-duration tests across multiple devices simultaneously represents a significant leap forward in network performance assessment.
- the present disclosure provides a system and method for scheduling and executing automated call tests on user equipment without manual intervention. This automation significantly reduces the need for human resources in network testing and allows for more frequent and comprehensive performance assessments.
- the present disclosure enables the collection of real-time network Key Performance Indicators (KPIs) directly from user devices. This approach provides more accurate and representative data of actual user experiences compared to traditional network testing methods.
- the present disclosure facilitates benchmarking against other operators, allowing network providers to assess their performance relative to competitors. This comparative data is crucial for identifying areas of improvement and maintaining a competitive edge in the telecommunications market.
- the present disclosure improves the consumer experience by providing a user-friendly dashboard on a web portal.
- This dashboard allows for easy visualization of network performance data, enabling technical and non-technical users to understand and act on network quality information.
- the present disclosure offers a flexible scheduling system to manage multiple work orders across various devices and geographical areas. This capability ensures comprehensive network coverage and allows for targeted testing in specific locations or time periods.
- the present disclosure incorporates short and long call tests, providing insights into rapid connection performance and sustained call quality. This dual approach offers a more nuanced understanding of network behavior under different usage scenarios.
- the present disclosure includes advanced data processing capabilities, such as KPI threshold comparisons, trend analysis, and automated alert generation. These features enable proactive network management and rapid response to emerging issues.
- the present disclosure allows for customization of test parameters, enabling network operators to focus on specific aspects of performance or adapt tests to particular network configurations or technologies.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Environmental & Geological Engineering (AREA)
- Human Computer Interaction (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The present disclosure provides a system (102) and a method for scheduling an automated call test on user equipment (108) based on a work order. Unlike conventional methods that require manual intervention, the disclosed system provides a fully automated solution for executing tests on a wide range of user devices. The work order management module (212) creates and assigns work orders for automated call tests, which are executed with minimal user involvement. A communication module (214) sends a push notification to a selected user equipment (108), and a data processing module (216) executes the automated call test according to the specifications defined in the push notification. The automated call test comprises at least one of a short call test or a long call test. The system enables efficient and automated network performance testing on user devices without requiring direct user intervention, facilitating real-time monitoring and optimization of network performance.
Description
SYSTEM AND METHOD FOR SCHEDULING AN AUTOMATED CALL TEST ON A USER DEVICE
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but are not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred as owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE DISCLOSURE
[0002] The embodiments of the present disclosure generally relate to telecommunications network testing. In particular, the present disclosure relates to a system for scheduling an automated call test on a user device.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] Work order refers to a set of instructions for performing an automated call test, including details such as the type of test, scheduling information, and specific parameters to be measured.
[0005] Push notification refers to a message sent to a user device containing instructions and parameters for executing an automated call test.
[0006] Key Performance Indicators (KPIs) refer to measurable values demonstrating how effectively a network performs. In the context of the present disclosure, KPIs include call setup success rate (CSSR), evolved radio access bearer (E-RAB) drop rate, interference level, handover success rate and failure rate, codec details, and traffic capacity.
[0007] Automated call test refers to a process where a user device automatically initiates and completes a call to measure various network performance indicators without requiring manual intervention from the user.
[0008] Short call test refers to a brief automated call designed to measure the rapid connection and disconnection performance of the network.
[0009] Long call test refers to an extended automated call designed to evaluate sustained connection quality and stability of the network.
[0010] Background process refers to a software routine that runs on the user device to execute the automated call test without interfering with or being visible to the active user applications.
[0011] Firebase Cloud Messaging (FCM) refers to a cross-platform messaging solution that allows the system to send messages to user devices reliably.
[0012] Coverage platform refers to a system component responsible for creating and managing work orders for automated call tests across a network. Coverage platform plays a central role in monitoring and optimizing network performance by creating, scheduling, and managing work orders for various types of automated tests, such as voice calls, data sessions, and other network quality assessments. The platform provides visibility into the network’s coverage and performance across different geographic areas, ensuring that tests are conducted in regions of interest or where performance issues are identified. In an example, the coverage platform may be a network coverage monitoring platform.
[0013] Evolved Radio Access Bearer (E-RAB) refers to the data connection between the user device and the core network in LTE (Long-Term Evolution) systems.
[0014] Codec refers to the software or hardware used for encoding and decoding digital data streams or signals, particularly in the context of voice calls.
[0015] Speed test server refers to a dedicated server used to conduct network performance tests, including measuring data transfer rates and latency.
BACKGROUND OF THE DISCLOSURE
[0016] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0017] As wireless technologies advance, radio access networks (RANs) play a crucial role in connecting user equipment to core networks. RANs typically consist of radio base stations with large antennas that wirelessly connect user devices to the broader network infrastructure. With the advent of 6G and increasing user demands, RANs are becoming increasingly complex, featuring higher speeds, more interconnected units, and the integration of various sub-networks into larger ones.
[0018] The evolution of wireless technologies has also led to changes in user behavior and expectations. Users now demand the ability to send different types of data simultaneously, including text, voice, video, and multimedia files. There is a growing demand for fast and reliable internet, especially for activities like gaming, audio, and video streaming on mobile devices. Users expect better network quality to minimize delays and ensure successful voice calls, leading to a heightened interest in real-time monitoring and optimization of RAN performance.
[0019] However, as networks become more complex, ensuring optimal performance becomes increasingly challenging. Network operators face significant difficulties in testing, troubleshooting, and identifying the causes of performance issues in both new and existing communication networks. Traditional techniques for monitoring and optimizing RAN performance often fail to accurately detect the root causes of performance degradation. These conventional methods typically require substantial manual effort from telecom operators, making them inefficient and time-consuming.
[0020] Currently, scheduling call tests running without user intervention is not possible through a web portal for users (e.g., field engineers) on their devices. This limitation prevents network operators from obtaining real-time network key performance indicators (KPIs) for optimization purposes. As a result, consumer experience is negatively affected, and network operators lack the necessary data to improve coverage and performance efficiently.
[0021] Moreover, the dynamic nature of modern networks, with frequent configuration changes and updates, adds another layer of complexity to performance management. Existing systems often struggle to keep track of these changes and their impact on network performance in real-time. This can lead to delays in identifying and resolving issues, potentially resulting in poor user experiences and increased customer complaints.
[0022] Conventional systems and methods face difficulty in efficiently monitoring, analyzing, and optimizing RAN performance, particularly in the context of rapidly evolving network configurations and the need for automated, background testing.
[0023] There is, therefore, a need in the art to provide a method and a system that can overcome the shortcomings of the existing prior art by offering automated, scheduled call testing capabilities that run in the background of user devices without manual intervention, enabling real-time collection of network KPIs for optimization and improved consumer experience.
SUMMARY OF THE DISCLOSURE
[0024] In an exemplary embodiment, a system for scheduling an automated call test on a user equipment is described. The system comprises a memory and one or more processors configured to execute instructions stored in the memory. The instructions include creating a work order for the automated call test in a coverage platform using a work order management module. The instructions are for sending, by a communication module, a push notification to one of a plurality of user devices selected based on a device identifier (ID) stored in a database. The instructions are for executing, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
[0025] In some embodiments, the push notification comprises a script defining procedures for the automated call test and a scheduled date and time for executing the automated call test. The user equipment is configured to execute the automated call test based on the script and the scheduled date and time received in the push notification.
[0026] In some embodiments, the one or more processors are further configured to receive, by a data processing module, collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment in the background as defined in the push notification. The plurality of KPIs collected by the data processing module comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity. The system is further configured to record, by a database management module, the received collected data of the automated call test.
[0027] In some embodiments, the data processing module is further configured to initiate the automated call test by instructing the user equipment to dial a toll-free number automatically.
[0028] In some embodiments, the defined automated call test is one of a short call test or a long call test. The short call test is designed to measure rapid connection and disconnection performance, and the long call test is designed to evaluate sustained connection quality and stability.
[0029] In some embodiments, the system further comprises a user interface module configured to provide a user interface for scheduling the automated call tests with customizable parameters, viewing results of the executed test, accessing historical performance data, and configuring alerts for defined KPIs.
[0030] In some embodiments, the work order management module is further configured to create multiple work orders for different types of automated call tests, including voice calls and data sessions. The work order management module is further configured to assign priorities to work orders based on network performance urgency. The work order management module is further configured to manage a distributed network of speed test servers for conducting the automated call test.
[0031] In another exemplary embodiment, a method for scheduling an automated call test on a user equipment is described. The method comprises creating, by a work order management module, a work order for the automated call test in a coverage platform. The method comprises sending, by a communication module, a push notification to one of a plurality of user devices selected based on a device identifier (ID) stored in a database. The method comprises executing, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification. The automated call test comprises at least one of a short call test or a long call test.
[0032] In some embodiments, the method further comprises receiving, by a data processing module, collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment in the background as defined in the push notification. The plurality of KPIs collected by the data processing module comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity. The method further comprises recording, by a database management module, the received collected data of the automated call test.
[0033] In some embodiments, the method further comprises initiating, by the data processing module, the automated call test by instructing the user equipment to dial a toll-free number automatically.
[0034] In some embodiments, the method further comprises creating, by the work order management module, multiple work orders for different types of automated call tests, including voice calls and data sessions. The method further comprises assigning, by the work order management module, priorities to work orders based on network performance urgency. The method further comprises managing, by the work order management module, a distributed network of speed test servers for conducting the automated call test.
[0035] In yet another exemplary embodiment, a User Equipment (UE) for facilitating an automated call test is described. The UE is configured to receive, from a communication module of a system, a push notification comprising instructions for an automated call test. The UE is selected based on a device identifier (ID) stored in a database. The UE is configured to execute, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification.
[0036] In yet another exemplary embodiment, a non-transitory computer-readable storage medium storing computer-executable instructions is described. When executed by one or more processors, the instructions cause the one or more processors to perform a method for scheduling an automated call test on a user equipment. The method comprises creating, by a work order management module, a work order for the automated call test in a coverage platform. The method comprises sending, by a communication module, a push notification to one of a plurality of user devices selected based on a device identifier (ID) stored in a database. The method comprises executing, by a data processing module, the scheduled automated call test on the selected user equipment by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification. The automated call test comprises at least one of a short call test or a long call test.
OBJECTIVES OF THE DISCLOSURE
[0037] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below:
[0038] An objective of the present disclosure is to provide a system and a method for scheduling automated call tests on user devices, thereby enabling efficient network performance monitoring without manual intervention.
[0039] An objective of the present disclosure is to provide a system and a method that assigns work orders to specific devices based on device identifiers, thereby ensuring targeted and systematic network testing across various locations.
[0040] An objective of the present disclosure is to provide a system and a method that executes call tests in the background of user devices, thereby minimizing disruption to users while collecting valuable network data.
[0041] An objective of the present disclosure is to provide a system and a method that collects and records key performance indicators from automated call tests, thereby facilitating real-time network optimization and improvement.
[0042] An objective of the present disclosure is to provide a system and a method that sends scripts and configuration parameters via push notifications, thereby enabling flexible and customizable call test execution.
[0043] An objective of the present disclosure is to provide a system and a method that aggregates data from multiple devices and test types, thereby generating comprehensive network coverage maps and performance reports.
[0044] An objective of the present disclosure is to provide a system and a method that implements both short and long call tests, thereby evaluating various aspects of network performance and stability.
[0045] An objective of the present disclosure is to provide a system and a method that analyzes collected data to identify trends and anomalies, thereby enabling proactive network issue resolution and optimization.
[0046] An objective of the present disclosure is to provide a system and a method that manages a distributed network of speed test servers, thereby ensuring reliable and geographically diverse network testing capabilities.
[0047] An objective of the present disclosure is to provide a system and a method that stores and retrieves historical test data, thereby supporting long-term network performance analysis and strategic planning.
[0048] An objective of the present disclosure is to provide a system and a method that generates recommendations for network optimization, thereby assisting network operators in improving service quality and user experience.
[0049] Other objectives and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
BRIEF DESCRIPTION OF DRAWINGS
[0050] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0051] FIG. 1 illustrates an exemplary network architecture of a system for scheduling an automated call test on a user equipment, in accordance with embodiments of the present disclosure.
[0052] FIG. 2 illustrates an exemplary microservice-based architecture of the system for scheduling the automated call test on the user equipment based on a work order, in accordance with embodiments of the present disclosure.
[0053] FIG. 3 illustrates an exemplary system architecture for scheduling the automated call test on the user equipment based on the work order, in accordance with an embodiment of the present disclosure.
[0054] FIG. 4 illustrates an exemplary flow diagram for scheduling the automated call test on the user equipment based on the work order, in accordance with an embodiment of the present disclosure.
[0055] FIG. 5 illustrates a method for scheduling the automated call test on the user equipment based on the work order, in accordance with an embodiment of the present disclosure.
[0056] FIG. 6 illustrates an exemplary computer system in which or with which embodiments of the present disclosure may be implemented.
[0057] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Network architecture
102 - System
104 - Network
108-1, 108-2...108-N - User equipment
110-1, 110-2...110-N - Users
112 - Web servers
114 - Firebase Cloud Messaging (FCM) server
202 - One or more processor(s)
204 - Memory
206 - Interfaces
208 - Database
212 - Work order management module
214 - Communication module
216 - Data processing module
218 - Database management module
220 - User interface module
222 - Other module(s)
300 - System architecture
302 - Web portal
304 - Load balancer
308 - Application server
312 - Reporting server
400 - Flow diagram
402, 404, 406, and 408 - Steps of flow diagram 400
500 - Method
502, 504, and 506 - Steps of method 500
600 - Computer system
610 - External storage device
620 - Bus
630 - Main memory
640 - Read-only memory
650 - Mass storage device
660 - Communication port(s)
670 - Processor
DETAILED DESCRIPTION OF THE DISCLOSURE
[0058] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0059] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0060] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0061] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0062] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar
to the term “comprising” as an open transition word without precluding any additional or other elements.
[0063] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0064] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0065] Efficient monitoring and optimization of network performance in modern telecommunications networks is crucial, yet increasingly challenging due to the complexity of network infrastructures and the need for real-time data collection. Currently, automated call tests that run in the background without user intervention cannot be scheduled through a web portal for users, such as field engineers, on their devices. This limitation results in inadequate collection of network key performance indicators (KPIs), hindering effective network optimization. Consequently, the overall consumer experience is negatively affected.
[0066] Accordingly, there is a need for systems and methods for scheduling automated call tests that run in the background of user devices without requiring manual intervention. Such a solution would enable more comprehensive and efficient collection of network performance data, leading to improved network optimization and enhanced user experience.
[0067] The aspects of the present disclosure are directed to a system and method for scheduling an automated call test on a user equipment based on a work order. The system is configured to detect the current location of the UE and monitor and obtain data corresponding to a plurality of key performance indicators (KPIs) of a site and multiple operators. The system is further configured for processing and visualizing the collected data, analyzing the site based on user input and processed data, and generating recommendations tailored to the user type. This comprehensive approach enables real-time visualization, analysis, and operator selection for network sites, enhancing both field engineers' efficiency and end customers' network selection process.
[0068] The various embodiments throughout the disclosure will be explained in more detail with reference to FIGS 1-6.
[0069] FIG. 1 illustrates a network architecture (100) of a system (102) for scheduling an automated call test on a user equipment (108) based on a work order, in accordance with embodiments of the present disclosure.
[0070] In an embodiment, the system (102) may be configured to implement an Operation Support Systems/Business Support Systems (OSS/BSS) service. The system (102) is connected to a network (104), which is further connected to at least one computing device 108-1, 108-2, ... 108-N (collectively referred to as computing device 108, herein) associated with one or more users 110-1, 110-2, ... 110-N (collectively referred to as user (110), herein). The computing device (108) may be personal computers, laptops, tablets, wristwatches, or any custom-built computing device integrated within a modern diagnostic machine that can connect to a network as an IoT (Internet of Things) device. In an embodiment, the
computing device (108) may also be referred to as User Equipment (UE) or user device. Accordingly, the terms “computing device” and “User Equipment” may be used interchangeably throughout the disclosure. In an aspect, the user (110) is a network operator or a field engineer. Further, the network (104) can be configured with a centralized server that stores compiled data.
[0071] In an embodiment, the system (102) may receive at least one input data from the user (110) via the at least one computing device (108). In an aspect, the user (110) may be configured to initiate the process of scheduling the automated call test, through an application interface of a mobile application installed in the computing devices (108). The mobile application may be configured to communicate with a network analysis server. In some examples, the mobile application may be a software application from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., Play Store for Android OS provided by Google Inc., and such application distribution platforms. In an embodiment, the computing device (108) may transmit the at least one captured data packet over a point-to-point or point-to-multipoint communication channel or network (104) to the system (102). In an embodiment, the computing device (108) may involve collection, analysis, and sharing of data received from the system (102) via the network (104). Furthermore, the system (102) may be connected to one or more web servers (112) and a Firebase Cloud Messaging (FCM) server (114) via the network (104). The FCM server may be a cross-platform messaging solution that allows the system (102) to deliver messages reliably. The FCM server enables sending notification messages to drive user re-engagement and retention. The FCM server can send two types of messages: notification and data. The FCM server supports message targeting to single devices, groups of devices, or topics, and can be used with Android, iOS, and web applications. Integrating the FCM server with the system (102) allows for efficient, secure, and scalable real-time messaging, crucial for timely work order notifications to the user.
[0072] In an exemplary embodiment, the network (104) may include, but not be limited to, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. In an exemplary embodiment, the network (104) may include, but not be limited to, a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0073] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0074] FIG. 2 with reference to FIG. 1, illustrates an exemplary micro service-based architecture (200) of the system (102) for scheduling the automated call test on the UE (108), in accordance with an embodiment of the present disclosure.
[0075] The system (102) includes one or more processor(s) (202), a memory (204), a database (208), and an interface(s) (206). In an exemplary embodiment, the one or more processor(s) (202) may include one or more modules/engines selected from any of a work order management module (212), a communication module (214), a data processing module (216), a database management module (218), a user interface module (220) and other module(s) (222) having functions that may include but are not limited to receiving data, processing data, testing, storage, and
peripheral functions, such as wireless communication unit for remote operation, audio unit for alerts and the like.
[0076] The one or more processor(s) (202) is configured to initiate the process of scheduling the automated call test through the mobile application interface of the UE (108). In an embodiment, the application interface is configured to transmit one or more instructions to the one or more processor(s) (202).
[0077] In an embodiment, the one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (102). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0078] The interface(s) (206) is included within the system (102) to serve as a medium for data exchange, configured to facilitate user interaction with the mobile application. The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like, providing a communication pathway for the various components of the system (102). The interface(s) (206) may facilitate communication to/from the system (102).
[0079] In an embodiment, the one or more processor(s) (202) may be implemented as a combination of hardware and programming (for example,
programmable instructions) to implement one or more functionalities of the one or more processor(s) (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the one or more processor(s) (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the one or more processor(s) (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the one or more processor(s) (202). In such examples, the system (102) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (102) and the processing resource. In other examples, the one or more processor(s) (202) may be implemented by electronic circuitry.
[0080] In an embodiment, the database (208) is configured to serve as a centralized repository for storing and retrieving various operational data. The database (208) is designed to interact seamlessly with other components of the system (102) to support the functionality of the system effectively. The database (208) may store data that may be either stored or generated as a result of functionalities implemented by any of the components of the system (102). In an embodiment, the database (208) may be separate from the system (102).
[0081] The database (208) may reside at a remote location or be integrated with the server, depending on the configuration of the system. In one aspect, the database may be hosted at a remote location, such as a cloud-based environment or a dedicated data center, enabling centralized data storage and facilitating access from multiple devices or systems across a network. This configuration allows for enhanced scalability, redundancy, and accessibility, supporting distributed systems where data access is required across various regions. In another aspect, the database (208) may be integrated with the server, wherein the data is stored locally on-site.
The database (208) may encompass various types, depending on the specific requirements of the application. In one aspect, a relational database may be employed, wherein data is stored in tables with predefined relationships, ensuring data consistency and supporting complex queries. In another aspect, a NoSQL database may be used, designed to handle unstructured or semi-structured data, offering scalability and flexibility for real-time applications. Additionally, a distributed database may be implemented, wherein data is spread across multiple locations to ensure high availability, fault tolerance, and efficient access across regions. Alternatively, a cloud database may be utilized, providing scalable and on- demand data storage with internet-based accessibility. For applications requiring high-performance processing, an in-memory database may be used, storing data in the system's main memory to enable faster data access. Moreover, a graph database may be employed for managing complex relationships in data, such as those found in social networks or recommendation systems. In some embodiments, an object- oriented database may be utilized, storing data in the form of objects to model complex data relationships.
[0082] In an embodiment, the work order management module (212) may create and manage work orders for automated call tests. For instance, a network operator may need to assess the call quality in a newly developed residential area. The work order management module (212) may create a work order specifying parameters such as the test duration (e.g., 2 minutes), call type (e.g., voice call), and specific network band to be tested (e.g., 4G LTE). The work orders may be created at a coverage platform.
[0083] The work order management module (212) may then assign the created work order to a specific user equipment (108) based on its device identifier (ID) stored in the database (208). For example, if the database (208) indicates that UE with ID "A1B2C3" is frequently located in the target residential area, the work order management module (212) may assign the work order to this device. This targeted assignment ensures that the test is conducted in the relevant geographical area without requiring manual dispatching of testing personnel.
[0084] The work order management module (212) may be further configured to schedule multiple work orders over an extended period. For example, to monitor the impact of a newly installed 5G tower, the work order management module (212) may schedule daily automated call tests for a period of three months. These tests could be distributed among various UEs in the vicinity of the tower.
[0085] The work order management module (212) may be configured for assessing the following parameters:
Geographical coverage: The work order management module (212) may divide the area around the tower into sectors and ensure that tests are conducted in each sector. For instance, the work order management module (212) might assign tests to UEs located north, south, east, and west of the tower.
Time distribution: The work order management module (212) may schedule tests at different times of the day to capture variations in network performance. For example, the work order management module (212) might schedule tests during peak hours (e.g., 9 AM and 6 PM) and off-peak hours (e.g., 3 AM and 11 AM).
Device variety: The work order management module (212) may distribute tests across different device models to account for potential device-specific performance variations. For instance, it might assign tests to both high-end smartphones and budget devices to ensure comprehensive coverage.
Historical data consideration: If historical data shows that a particular area consistently experiences issues during rainy weather, the work order management module (212) might increase the frequency of tests in that area during the rainy season.
Network conditions: If real-time data indicates congestion in a specific sector, the work order management module (212) might dynamically increase the number of tests in that sector to gather more data and identify the root cause.
[0086] The work order management module (212) may be capable of creating multiple work orders for different types of automated call tests, catering to various network services and technologies. For example, the work order management module (212) may simultaneously create work orders for voice call quality tests on a 4G network and data throughput tests on a 5G network. In a specific scenario, the work order management module (212) might create a work order for testing Voice over LTE (VoLTE) call quality in urban areas, while also generating a separate work order for evaluating 6G data speeds in newly deployed mmWave sectors.
[0087] The work order management module (212) may assign priorities to work orders based on network performance urgency. For instance, if customer complaints about dropped calls in a particular business district have spiked, the work order management module (212) may assign a high priority to call stability tests in that area. Conversely, routine data speed tests in a stable residential area might receive a lower priority. This prioritization ensures that critical issues are addressed promptly, minimizing customer dissatisfaction and potential revenue loss. For instance, if the system detects a high volume of call drops in a particular geographical region, the work order management module (212) assigns a high priority to the work order related to automated call tests in that region to quickly assess the underlying cause and mitigate the issue. Conversely, if the network conditions are stable and no immediate performance issues are detected, the work order management module (212) may assign a lower priority to the corresponding work orders, scheduling them for execution at a later time. The priority assignment is based on predefined thresholds set by the network operator, which define what constitutes an urgent network performance issue. These priorities are communicated to the relevant speed test servers, which execute the automated call tests on user devices according to the assigned priority level, ensuring that critical network areas are addressed promptly while less urgent tasks are handled as resources allow.
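As a non-limiting illustration, the priority assignment described above can be reduced to a simple threshold check. The following minimal Python sketch assumes a hypothetical KPI name (call_drop_rate) and an assumed 5% threshold; the actual thresholds are defined by the network operator.

CALL_DROP_THRESHOLD = 0.05  # assumed: a 5% call drop rate marks a region as urgent

def assign_priority(region_kpis: dict) -> str:
    """Return a work order priority label based on regional KPIs."""
    if region_kpis.get("call_drop_rate", 0.0) > CALL_DROP_THRESHOLD:
        return "HIGH"  # urgent: schedule automated call tests immediately
    return "LOW"       # stable: schedule for a later, off-peak window

# Example: a spike of dropped calls in a business district
print(assign_priority({"call_drop_rate": 0.08}))  # -> HIGH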
[0088] Furthermore, the work order management module (212) may manage a distributed network of speed test servers for conducting the automated call tests.
This management may involve selecting the most appropriate server based on geographical proximity and current load. For example, if a work order requires testing in New York City, the work order management module (212) may choose a speed test server located in Newark, New Jersey, to minimize latency. If that server is experiencing high load, the work order management module (212) might instead route the test to a less busy server in Philadelphia, balancing the need for geographical proximity with optimal server performance. The distributed network of speed test servers is employed for executing automated call tests across various locations in the network. These servers are distributed across different geographical areas, providing the ability to perform tests in diverse network conditions and ensure that performance metrics are gathered from a wide range of locations. The work order management module (212) facilitates the coordination of these servers by distributing test assignments based on location, network conditions, and specific test requirements. It ensures that the appropriate speed test servers are selected for conducting each test based on factors such as proximity to the user equipment (UE), availability, and capacity. The distributed nature of the network allows for parallel testing across multiple regions, enhancing the scalability and efficiency of the testing process.
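A minimal sketch of the server selection logic described above, assuming illustrative server records and an assumed load ceiling; the names, coordinates, and load figures are not actual deployment data.

import math

SERVERS = [
    {"name": "Newark", "lat": 40.74, "lon": -74.17, "load": 0.92},
    {"name": "Philadelphia", "lat": 39.95, "lon": -75.17, "load": 0.40},
]
MAX_LOAD = 0.85  # assumed load ceiling before a server is passed over

def pick_server(ue_lat: float, ue_lon: float) -> dict:
    # Sort candidates by straight-line distance to the UE, then take the
    # nearest server whose current load is acceptable.
    by_distance = sorted(
        SERVERS,
        key=lambda s: math.hypot(s["lat"] - ue_lat, s["lon"] - ue_lon),
    )
    for server in by_distance:
        if server["load"] <= MAX_LOAD:
            return server
    return by_distance[0]  # fall back to the nearest server regardless of load

# A work order targeting New York City (approx. 40.71, -74.01) is routed to
# Philadelphia because the nearer Newark server is over the load ceiling.
print(pick_server(40.71, -74.01)["name"])  # -> Philadelphia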
[0089] In one embodiment, the work order management module (212) may interface with a central control system that schedules and assigns tests to these distributed servers, allowing for load balancing and the efficient use of resources. It monitors the performance of the servers and ensures that the tests, such as call setup success rate (CSSR), handover success rate, and interference levels, are executed according to the specified parameters and timeframes. Once the test is executed, the speed test servers collect and transmit the results (e.g., KPIs such as call setup time, codec details, and network drop rates) back to the central database for analysis. The management of these distributed servers allows for comprehensive and accurate automated call testing across a network, providing valuable insights for network optimization and performance monitoring.
[0090] The work order management module (212) may also optimize test execution and data collection across diverse geographical locations. For instance, in a country with varying levels of network infrastructure, the work order management module (212) might create work orders that test 4G networks in urban areas, 3G networks in suburban regions, and 2G networks in rural locations. This approach ensures comprehensive coverage and allows comparative network performance analysis across different technologies and geographical contexts.
[0091] In an embodiment, the communication module (214) may be configured to send push notifications to the selected user equipment (108), via the FCM server (114). For example, when a new work order is created for testing network performance, the communication module (214) may identify all eligible UEs/devices in that area and prepare personalized push notifications for each UE.
[0092] These push notifications sent by the communication module (214) may contain crucial information (configuration parameters) for executing the automated call test. A typical push notification might include a JavaScript Object Notation (JSON) payload with multiple key elements. For instance, the script defining the test procedures could be a series of commands like "initiate_call", "measure_signal_strength", "record_call_quality", and "end_call". The scheduled date and time for execution might be specified as "2023-07-15 14:30:00 UTC", ensuring the test occurs at a predetermined time. In an aspect, the predetermined time may be tuned or updated by the network operator based on the requirements. The requirements may refer to various factors that may influence the scheduling or execution of the test. The factors may include network load, resource availability, performance optimization, and time zone coordination. The requirements may also include regulatory or compliance needs, incident resolution, or external factors like weather or business hours. The network operator may adjust the predetermined time (test time) to ensure it aligns with these conditions, ensuring that the test is performed under optimal circumstances and provides accurate, reliable results. The configuration parameters included in the push notification may be detailed. For example, the call duration might be set to 120 seconds, allowing for a
comprehensive assessment of call stability. Specific network settings to be tested could include "force_LTE_only" to ensure the test focuses on 4G performance or "enable_VoLTE" to test advanced voice services.
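The JSON payload described above might be assembled as in the following Python sketch; the field names are assumptions modeled on the description, not a documented payload schema.

import json

payload = {
    "script": [  # test procedure, executed in order
        "initiate_call",
        "measure_signal_strength",
        "record_call_quality",
        "end_call",
    ],
    "scheduled_time": "2023-07-15 14:30:00 UTC",
    "config": {
        "call_duration_seconds": 120,
        "network_settings": ["force_LTE_only", "enable_VoLTE"],
    },
}
print(json.dumps(payload, indent=2))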
[0093] The communication module (214) may employ various security measures to ensure that the push notifications are delivered securely to the intended user equipment (108). For example, the communication module (214) might use end-to-end encryption for all push notifications. Additionally, the communication module (214) may implement a token-based authentication system, where each push notification includes a unique, time-limited token that the receiving device must validate before executing the test.
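The disclosure does not fix a token format, so the following sketch assumes an HMAC-based, time-limited token validated on the device before test execution; the shared secret and validity window are purely illustrative.

import hashlib, hmac, time

SHARED_SECRET = b"device-provisioned-secret"  # assumed pre-shared key
TOKEN_TTL_SECONDS = 300                       # assumed 5-minute validity

def make_token(device_id: str, issued_at: int) -> str:
    msg = f"{device_id}:{issued_at}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def validate_token(device_id: str, issued_at: int, token: str) -> bool:
    # Reject tokens that are expired or whose signature does not match.
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False
    return hmac.compare_digest(make_token(device_id, issued_at), token)

now = int(time.time())
print(validate_token("A1B2C3", now, make_token("A1B2C3", now)))  # -> True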
[0094] To optimize delivery efficiency, the communication module (214) may use the device identifiers stored in the database (208) to tailor the delivery method for each user equipment. For instance, if the database indicates that a particular device frequently loses cellular connectivity, the communication module (214) might send the push notification via cellular data and Wi-Fi to ensure receipt. The communication module (214) may also employ a retry mechanism, attempting to resend notifications at increasing intervals if delivery confirmation is not received.
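A minimal sketch of such a retry mechanism, assuming a hypothetical send_notification() callable that returns True on delivery confirmation; the initial delay and doubling schedule are illustrative choices.

import time

def send_with_retry(send_notification, max_attempts: int = 4) -> bool:
    delay = 30  # assumed initial back-off of 30 seconds
    for _ in range(max_attempts):
        if send_notification():
            return True  # delivery confirmed
        time.sleep(delay)
        delay *= 2       # increase the interval before the next attempt
    return False         # mark the device unreachable or escalate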
[0095] In cases where immediate testing is required, such as after a reported network outage, the communication module (214) may be configured to send high-priority push notifications. These push notifications might override device settings to ensure immediate delivery and prompt test execution, allowing for rapid assessment of network recovery.
[0096] In one embodiment, the data processing module (216) is configured to execute the scheduled automated call test on the selected user equipment (108) by initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification. The automated call test comprises at least one of a short call test or a long call test. In an embodiment, the short call test is designed to measure rapid connection and disconnection performance. In a typical short call test scenario, the user equipment
(108) might be instructed to establish a connection, maintain it for a brief period (e.g., 5-10 seconds), and then terminate the connection. This process may be repeated multiple times in quick succession. For instance, the test might involve making 50 short calls over a 5-minute period. This type of test is particularly useful for assessing network performance in high-traffic scenarios or areas where users frequently make brief calls. The long call test, on the other hand, is designed to evaluate sustained connection quality and stability. In a long call test, the user equipment (108) establishes a connection and maintains it for an extended period, such as 10 minutes or even longer. During this time, various network parameters are continuously monitored. For example, a long call test might involve establishing a voice call for 15 minutes while the system monitors call quality metrics, signal strength, and any instances of call dropping or quality degradation.
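A minimal sketch of the short call pattern described above (brief hold, repeated in quick succession), where establish_call() and end_call() are hypothetical stand-ins for the device's telephony interface.

import time

def run_short_call_test(establish_call, end_call,
                        calls: int = 50, hold_seconds: int = 5) -> dict:
    successes = 0
    for _ in range(calls):
        if establish_call():           # attempt to connect
            time.sleep(hold_seconds)   # hold briefly (e.g., 5-10 seconds)
            end_call()                 # disconnect
            successes += 1
    return {"attempted": calls,
            "succeeded": successes,
            "success_rate": successes / calls}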
[0097] The data processing module (216) plays a crucial role in managing the execution of the automated call test. Upon receiving the work order, the data processing module (216) schedules the test based on the specified time in the work order. For example, if the work order indicates that the test should be performed at 2:00 AM local time to minimize impact on regular network traffic, the data processing module (216) will initiate the test at precisely that time.
[0098] The execution of the automated call test follows the parameters defined in the push notification sent to the user equipment (108). This push notification contains all necessary information for the test, including the type of test to be performed (short call test, long call test, or both), the specific network parameters to be tested, and any other relevant configuration details.
[0099] In some cases, the automated call test may include short and long call tests to assess network performance comprehensively. For instance, the data processing module (216) might instruct the user equipment (108) to perform a series of 20 short calls, followed by a single long call (20 minutes), and conclude with another series of 20 short calls. This combination allows for evaluating rapid connection handling and sustained call stability within a single test session. For
example, in the short call test, the data processing module (216) may instruct the user equipment (108) to establish a connection, maintain it for 10 seconds, and then disconnect. The data processing module (216) might measure metrics like connection establishment time (e.g., 1.2 seconds) and successful disconnect rate (e.g., 99.9%).
[00100] For the long call test, the data processing module (216) may direct the user equipment (108) to maintain a connection for an extended period, such as 10 minutes. During this time, the data processing module (216) might assess metrics like signal stability (e.g., a standard deviation of signal strength: ±2 dBm), packet loss rate (e.g., 0.1%), and jitter (e.g., 15 ms).
[00101] Throughout the execution of the automated call test, the data processing module (216) continuously collects and monitors the test data. This data includes a variety of key performance indicators (KPIs), such as call setup success rate, call drop rate, signal strength, voice quality metrics, and more. The specific KPIs collected may vary depending on the nature of the test and the network parameters being evaluated.
[00102] The system (102) may gather valuable, real-world performance data without requiring manual intervention by executing these automated call tests as defined in the work order and push notification. This approach allows for consistent, scheduled testing across various network conditions and geographical locations, providing network operators with crucial insights for optimizing their service quality and user experience.
[00103] The data processing module (216) may receive collected data comprising a plurality of key performance indicators (KPIs) from the automated call tests executed by the user equipment (108) in the background, as defined in the push notifications. For example, during a 5G network test in a busy financial district, the data processing module (216) might collect the following KPIs (an illustrative record sketch follows the list):
Call Setup Success Rate (CSSR)
Evolved Radio Access Bearer (E-RAB) Drop Rate
Interference Levels
Handover Success Rate
Handover Failure Rate
Codec Details: EVS (Enhanced Voice Services)
Traffic Capacity
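The record sketch below mirrors the KPI list above as a single per-test data structure; the field names, units, and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CallTestKPIs:
    cssr_percent: float             # Call Setup Success Rate
    erab_drop_rate_percent: float   # E-RAB drop rate
    interference_dbm: float         # measured interference level
    handover_success_percent: float
    handover_failure_percent: float
    codec: str                      # e.g., "EVS" (Enhanced Voice Services)
    traffic_capacity_mbps: float

record = CallTestKPIs(98.7, 0.4, -105.0, 97.2, 2.8, "EVS", 150.0)
print(record.codec)  # -> EVS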
[00104] The data processing module (216) may be further configured to initiate the automated call test by instructing the user equipment (108) to dial a toll-free number automatically. For example, the data processing module (216) might instruct the user equipment (108) to dial "1-800-TEST-NET" at 3:00 AM local time to conduct a network performance test during off-peak hours.
[00105] The data processing module (216) may monitor the progress of the test in real-time, allowing for immediate detection of any issues or anomalies. For instance, if the data processing module (216) detects that the signal strength suddenly drops from -X dBm to -B dBm during a test call in a usually stable area, the data processing module (216) might flag this anomaly for immediate investigation.
[00106] Upon completion of the test, the data processing module (216) may automatically end the call and collect final test data from the user equipment (108). This final data might include metrics such as overall call quality score (e.g., 4.5 out of 5), average throughput (e.g., 150 Mbps for a 5G test), and total packets lost (e.g., 10 out of 10,000 packets).
[00107] After collecting the data, the data processing module (216) may release allocated network resources. For example, if the test utilized a dedicated network slice on a 5G network, the data processing module (216) would signal the network to release this slice, making it available for regular user traffic.
[00108] Finally, the data processing module (216) may send a completion notification to the user equipment (108). This notification might include a summary
of the test results, such as "Test Completed Successfully. Duration: 120 seconds. Average Signal Strength: -65 dBm."
[00109] The data processing module (216) may compare the collected KPIs against predefined thresholds to derive meaningful insights from the collected data. In an example, the predefined thresholds refer to specific values or ranges set by the network operator, which may be modified. In an example, the predefined thresholds may be modified based on various factors, including but not limited to, changes in system performance, such as improvements or degradation in network speed, latency, or error rates, to better reflect the operational capabilities or limitations of the system. Modifications may also be made in response to evolving business requirements, regulatory updates, or compliance obligations. Additionally, threshold adjustments may be informed by the analysis of historical data, trends, or performance patterns. Further, scalability needs, incidents or fault analysis, and the introduction of new technologies or system upgrades may necessitate the modification of thresholds. External factors, such as market conditions, network traffic patterns, or environmental changes, may also serve as a basis for altering predefined thresholds to ensure optimal system performance and reliability. For example, if the call setup success rate (CSSR) threshold is set at 98%, and the collected data shows a CSSR of 96.5% in a particular area, the data processing module (216) may flag this as a potential issue requiring investigation.
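A minimal sketch of this threshold comparison, using the CSSR example above (98% threshold, 96.5% measured); the threshold table is operator-defined and subject to revision.

THRESHOLDS = {"cssr_percent": 98.0}  # assumed operator-defined floors

def flag_issues(kpis: dict) -> list:
    issues = []
    for name, floor in THRESHOLDS.items():
        value = kpis.get(name)
        if value is not None and value < floor:
            issues.append(f"{name}={value} below threshold {floor}")
    return issues

print(flag_issues({"cssr_percent": 96.5}))
# -> ['cssr_percent=96.5 below threshold 98.0']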
[00110] The data processing module (216) may also identify trends and patterns in network performance based on the collected KPIs, offering valuable long-term insights for network optimization. For instance, the data processing module (216) might detect that data throughput in a business district consistently drops by 30% between 1 PM and 2 PM on weekdays, suggesting additional capacity is needed during lunch hours.
[00111] When anomalies or persistent issues are detected based on these comparisons and identified trends, the data processing module (216) may generate alerts, enabling prompt attention to potential problems. For example, if the
handover failure rate between two specific cell towers exceeds 5% for three consecutive days, the data processing module (216) might generate a high-priority alert for the network operations team.
[00112] The data processing module (216) may also generate performance reports based on the collected data, including trends of the KPIs over the specified time interval. A monthly report may be generated by the data processing module (216), including visualizations such as:
A line graph showing daily average CSSR over the past 30 days
A heat map of interference levels across different geographical areas
A bar chart comparing handover success rates between different network technologies (e.g., 4G to 4G, 4G to 5G, 5G to 4G)
A pie chart breaking down the usage of different audio codecs in the network
[00113] These comprehensive reports generated by the data processing module (216) may provide network operators with actionable insights for continuous network improvement and optimization.
[00114] The data processing module (216) may aggregate the collected data from multiple user equipments (108), providing a comprehensive view of network performance across various devices and locations. For example, in a metropolitan area, the data processing module (216) might collect and aggregate data from 10,000 different user equipments over a month, including smartphones, tablets, and IoT devices, spread across residential, commercial, and industrial zones.
[00115] To ensure consistency and comparability of data, the data processing module (216) may normalize data from diverse device types, accounting for differences in hardware capabilities or operating systems. For instance, when comparing signal strength measurements, the data processing module (216) might apply a calibration factor to adjust for known variations in antenna sensitivity between different smartphone models. A high-end smartphone reporting -85 dBm
might be normalized to -80 dBm to align with measurements from mid-range devices in the same location.
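The normalization step can be sketched as a per-model calibration offset, matching the example above where -85 dBm is adjusted to -80 dBm; the model names and offset values are assumptions.

CALIBRATION_OFFSET_DB = {
    "highend_model_x": 5.0,    # antenna reads about 5 dB pessimistic
    "midrange_model_y": 0.0,   # reference device, no adjustment
}

def normalize_signal_dbm(device_model: str, measured_dbm: float) -> float:
    # Apply the model-specific offset; unknown models pass through unchanged.
    return measured_dbm + CALIBRATION_OFFSET_DB.get(device_model, 0.0)

print(normalize_signal_dbm("highend_model_x", -85.0))  # -> -80.0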
[00116] The data processing module (216) may categorize performance metrics based on network technologies, facilitating technology -specific analysis and optimization efforts. For example, the data processing module (216) might separate data into categories such as:
4G LTE metrics: Average download speed of 50 Mbps, latency of 30ms
5G Sub-6 GHz metrics: Average download speed of 300 Mbps, latency of 10ms
5G mmWave metrics: Average download speed of 1.5 Gbps, latency of 5ms
[00117] Using this aggregated and categorized data, the data processing module (216) may generate coverage maps, offering visual representations of network performance across geographical areas. For instance, the data processing module (216) might create a heat map of a city where:
Red areas indicate 5G mmWave coverage with speeds > 1 Gbps
Orange areas show 5G Sub-6 coverage with speeds between 100-500 Mbps
Yellow areas represent 4G LTE coverage with speeds between 10-50 Mbps
Green areas denote 3G coverage with speeds < 10 Mbps
[00118] The data processing module (216) may analyze the collected data to identify specific network performance issues, supporting ongoing network improvement efforts. For example, the data processing module (216) might detect that in a particular suburban area, the 5G handover success rate drops below 90% during peak hours (6 PM - 8 PM), while maintaining over a 99% success rate at other times.
[00119] Based on this analysis, the data processing module (216) may generate recommendations for network optimization, providing actionable insights for network operators. Continuing the previous example, the data processing module (216) might recommend:
Adjusting antenna tilt on Cell Tower A to improve coverage overlap with neighboring towers
Increasing backhaul capacity for the affected sector during peak hours
Implementing a more aggressive load balancing algorithm to distribute users across the available spectrum
[00120] The data processing module (216) may also track the impact of implemented optimizations over time, allowing for continuous refinement of network performance strategies. For instance, after implementing the above recommendations, the data processing module (216) might report:
Week 1: 5G handover success rate improved to 93% during peak hours
Week 2: Further improvement to 95% success rate
Week 3: Stability achieved at 97% success rate
[00121] This ongoing monitoring by the data processing module (216) ensures that optimization efforts are effective and allows for quick adjustments if the desired improvements are not achieved.
[00122] The database management module (218) may be responsible for recording the received collected data from the automated call tests. For example, when the user equipment (108) completes a series of tests, the database management module (218) may immediately receive and store the data, tagging it with relevant metadata such as timestamp, location coordinates, and device type.
[00123] The database management module (218) may ensure that all test data is properly stored, organized, and accessible for future analysis and reporting. For instance, the database management module (218) might organize data into hierarchical structures by region (e.g., Midwest), then by city (e.g., Chicago), then by network technology (e.g., 5G), and finally by specific KPIs (e.g., download speed, latency).
[00124] The database management module (218) may be configured to store historical test data, enabling long-term trend analysis and performance tracking.
The database management module (218) may support efficient retrieval of stored historical test data, facilitating comprehensive analysis and reporting capabilities. For instance, if an analyst needs to compare 4G LTE performance in XYZ place over the past three summers, the database management module (218) could quickly retrieve and compile this specific dataset.
[00125] The user interface module (220) provides a graphical interface for interacting with the system. For example, the user interface module (220) might offer a web-based dashboard accessible to network operators and administrators. The interface provided by the user interface module (220) may allow users to schedule automated call tests with customizable parameters, catering to specific testing requirements or network conditions. For instance, the network operator may use the interface to schedule a series of high-priority tests in an area where a music festival is planned, setting parameters like test frequency (e.g., every 30 minutes), duration (e.g., throughout the 3-day event), and specific KPIs to monitor (e.g., focusing on data throughput and latency).
[00126] Users may view the results of completed tests through the user interface module (220), providing immediate access to performance data. For example, after a day of testing at the music festival, the user interface module (220) might display a summary showing average download speeds of X Mbps, with peak speeds reaching Z Mbps during off-peak hours.
[00127] The interface provided by the user interface module (220) may also offer access to historical performance data, enabling trend analysis and long-term performance tracking. For instance, users might be able to generate graphs showing how average download speeds in the festival area have improved year-over-year, from A Mbps three years ago to B Mbps in the current year.
[00128] Additionally, users may be able to configure alerts for specific performance thresholds through the user interface module (220), ensuring prompt notification of critical issues. For example, a user might set an alert to be triggered
if the call drop rate exceeds P% in any given hour, or if the average data throughput falls below Q Mbps in a 5G coverage area.
[00129] The automated call tests executed by the user equipment (108) may operate through a background process that functions independently of active user applications. For instance, even while a user is actively browsing the web or using a navigation app, the user equipment (108) may conduct a short call test without any noticeable impact on the activities of the user. The background execution may allow for more frequent and consistent testing, providing a more accurate and comprehensive picture of network performance. For instance, the system might be able to conduct brief network performance checks every hour, 24 hours a day, across thousands of devices in a city, resulting in a highly granular and real-time view of network conditions.
[00130] This approach may ensure that the tests can be conducted without disrupting normal device usage or requiring active user participation. For example, a long call test might be scheduled for 3 AM local time, when the user is likely asleep, and the UE is idle, ensuring minimal interference with the user's normal usage patterns.
[00131] FIG. 3 illustrates an exemplary system architecture (300) for scheduling the automated call test on the user equipment (108), in accordance with an embodiment of the present disclosure.
[00132] Referring to FIG. 3, a system architecture (300) comprises a web portal (302), a load balancer (304), a plurality of web servers (WS) (112), an application server (308), the database (208), a reporting server (312), the FCM server (114) and the user equipment (108). The plurality of web servers (WS) comprises WS1 (112-1), WS2 (112-2), and so on.
[00133] In an aspect, the work order is created and scheduled from the web portal (302) by the work order management module (212). The scheduled work order is assigned from the web portal (302) to a particular user equipment (108)
based on the device identifier (ID) stored in the database (208). The load balancer (304) distributes incoming requests across the multiple web servers (112) to ensure optimal resource utilization and system performance.
[00134] The application server (308) may host the core functionality of the system (102), including the work order management module (212), communication module (214), data processing module (216), and database management module (218). The application server (308) processes the work orders, manages the execution of automated call tests, and handles data processing and storage.
[00135] The reporting server (312) generates reports based on the collected data and provides analytical insights. The reporting server (312) interfaces with the database (208) to retrieve historical data and generate performance trends.
[00136] When the user equipment (108) receives the work order via a push notification from the FCM server (114), a call test work order is executed in the background without user intervention. The push notification includes scripts, execution data, and scheduled time for the call test.
[00137] To run the call test (e.g., short call/long call), a script of the call test is sent to the user equipment (108) via the push notification. The script of the call test is run at a scheduled time. The automated call test is performed on the user equipment (108) for testing the plurality of key performance indicators (KPIs). The KPIs comprise a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate, and codec details for specified geographical areas.
[00138] The automated call goes to a server toll-free number. The call is connected and started automatically. The call test is started and ended automatically without user intervention. The call runs for a defined time interval (for example, 2 minutes), and the KPIs are collected.
[00139] After the call test is finished, the data corresponding to KPIs is collected by the data processing module (216) and recorded to the database (208) by the database management module (218). This collected data is used for network optimization and improvement of network coverage. The KPIs comprise success and failure rate of handover, area traffic capacity, and other relevant metrics.
[00140] The system (102) supports scheduling multiple work orders for different user equipments (108) over extended periods (e.g., 1 month or more). The work order management module (212) can assign and distribute various call instructions (e.g., short call and long call) to multiple user equipments (108) across different geographical areas.
[00141] In an aspect, a plurality of call instructions is distributed to a plurality of user equipments (108). The call instructions are assigned for a specific time interval (e.g., 1 month or more). When the call test is finished, the data corresponding to KPIs is recorded to the database (208) by the database management module (218).
[00142] The user equipments (108) are registered in a table format in the database (208). The work order is communicated to the user equipment (108) through a push notification via the FCM server (114). The work order is run at a specific time. The data corresponding to the work order is recorded to the database (208).
[00143] Further, multiple work orders are sent to the plurality of user equipments (108). The work orders are sent with scripts, execution data, and time. The call test is assigned for a specific time period (e.g., 1 month) to the plurality of user equipments (108). The call test is run without user intervention. After completion of the test, all data is collected.
[00144] The complete automated background call test work orders can be assigned anytime from the web portal (302), providing flexibility in scheduling and executing network performance tests.
[00145] FIG. 4 illustrates an exemplary flow diagram (400) for scheduling the automated call test on the user equipment (108), in accordance with an embodiment of the present disclosure.
[00146] Step 402 includes creating, by the work order management module (212), the work order for an automated call test in a coverage platform (CP).
[00147] At step 404, the communication module (214) sends the push notification, via the FCM server (114), to one of the plurality of user equipments (108) selected based on the device identifier (ID) stored in the database (208). The push notification comprises a script defining procedures for the automated call test, and a scheduled date and time for executing the automated call test. The script defining procedures refers to a set of predefined instructions or a program that specifies how the automated call test should be conducted. These procedures may include the sequence of actions, test parameters, and conditions that guide the execution of the test on the user’s device. In an example, the automated call test script may include the following structure (a condensed executable sketch appears after the step list):
Test Name: Automated Call Test for Network Performance (Short Call)
Objective: To measure KPIs such as Call Setup Success Rate (CSSR), Evolved Radio Access Bearer (E-RAB) drop rate, interference levels, and handover success rate for a specific user device.
• Step 1: Verify device readiness (ensure the device is powered on and connected to the network).
• Step 2: Check network conditions (ensure the device is connected to the appropriate network, 4G/5G).
• Step 3: Fetch the device identifier (ID) from the database (e.g., IMEI, MSISDN, or MAC address).
• Step 4: Initiate a call to the pre-designated toll-free number (e.g., 12345) using the device's mobile network.
• Step 5: Verify the connection attempt and monitor the call setup process. Track the time taken for the connection to be established.
• Step 6: Record the initial call parameters, such as:
o Call setup time
o Initial codec used
o Signal strength at the time of the call
• Step 7: Monitor the call during the test:
o Measure Call Setup Success Rate (CSSR).
o Record Evolved Radio Access Bearer (E-RAB) drop rate during the call.
o Monitor and log the interference level during the call, ensuring it falls within acceptable thresholds.
o Track handover success rate (whether the call successfully transfers between network cells).
• Step 8: Continuously monitor device behavior, including CPU, battery, and signal strength during the test.
• Step 9: Automatically monitor the call duration for a predefined time (e.g., 30 seconds for a short call).
• Step 10: If the test is a long call, allow the call to continue for a longer time (e.g., 5 minutes) and ensure metrics are captured throughout.
• Step 11: Automatically end the call once the predefined duration or condition is met (e.g., completion of the call, or no further KPIs are needed).
• Step 12: Disconnect the call and ensure proper logging of the test results.
• Step 13: Collect and store the following KPIs during the test:
o Call Setup Success Rate (CSSR)
o Evolved Radio Access Bearer (E-RAB) drop rate
o Interference level
o Handover success rate
o Codec used during the call
• Step 14: Store all test results in a local cache on the device until the test is completed.
• Step 15: Once the call is terminated, synchronize the collected data with the database.
o Action: Upload the KPIs to the central database.
o Action: Mark the test as complete in the system logs.
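The step sequence above condenses to the following illustrative device-side routine, in which dial(), hangup(), and read_kpis() are hypothetical telephony hooks; the real procedure is defined by the script delivered in the work order.

import time

def run_call_test(dial, hangup, read_kpis,
                  number: str = "12345", duration_s: int = 30) -> dict:
    local_cache = {}                         # Step 14: local result cache
    dial(number)                             # Steps 4-5: initiate and verify the call
    start = time.time()
    while time.time() - start < duration_s:  # Steps 7-10: monitor for the set duration
        local_cache.update(read_kpis())      # CSSR, E-RAB drops, interference, handovers
        time.sleep(1)
    hangup()                                 # Steps 11-12: end the call and log results
    return local_cache                       # Step 15: synchronize with the database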
[00148] At step 406, the user equipment (108) receives the push notification from the FCM server (114). The user equipment (108) executes the automated call test in the background at the specified time without user intervention based on the script, scheduled date and time, and configuration parameters received in the push notification.
[00149] At step 408, the data processing module (216) receives collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment (108). The plurality of KPIs collected comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity. Further, the database management module (218) records the received collected data of the automated call test to the database (208). This recorded data can be used for various purposes such as analyzing network performance, identifying trends, and generating performance reports.
[00150] The automated call test may comprise both a short call test designed to measure rapid connection and disconnection performance, and a long call test designed to evaluate sustained connection quality and stability. The user equipment (108) executes both types of tests as specified in the work order.
[00151] FIG. 5 illustrates an exemplary flow diagram of a method (500) for scheduling the automated call test on the user equipment (108) based on the work order, in accordance with embodiments of the present disclosure.
[00152] At step 502, the method (500) includes creating, by the work order management module (212), the work order for the automated call test in a coverage platform. This step initiates the process of scheduling and executing automated call tests across a network. The work order management module (212) is configured to handle the creation, assignment, and tracking of work orders related to network testing and optimization. The work order management module (212) interfaces with the coverage platform. The coverage platform provides a comprehensive view of network coverage and performance across different geographical areas. Creating the work order involves defining the parameters of the automated call test, including the type of test to be performed, the target areas, and the specific metrics to be measured. The work order management module (212) may create multiple work orders for different types of automated call tests, including voice calls and data sessions. These work orders can be prioritized based on network performance urgency, allowing the system to focus on critical areas or issues first. The work order management module (212) also manages a distributed network of speed test servers for conducting the automated call tests, ensuring that tests can be performed efficiently across various locations.
[00153] In an operative aspect, to create the work order for the automated call test in the coverage platform, the work order management module (212) may perform the following steps (an illustrative sketch follows the list):
• Initiating the Work Order: The work order management module (212) receives a request or automatically triggers the creation of the work order for the automated call test. This work order serves as the formal request to initiate network testing, including scheduling and executing the call tests across the network.
• Defining Test Parameters: The work order management module (212) defines key parameters for the automated call test within the work order. These parameters may include:
o Test Type: Specifying the type of test, such as a voice call test, a data session, or another network performance test.
o Target Areas: Identifying the geographical regions or specific network coverage zones where the test should be performed, based on areas of interest or known performance issues.
o Metrics to Be Measured: Defining which performance metrics the automated call test will measure, such as call success rate, call drop rate, latency, throughput, and signal strength.
• Assignment and Tracking: Once the parameters are set, the work order is assigned within the system. The work order management module (212) tracks the progress of the work order, ensuring that each test is executed as planned. This may involve setting deadlines or priorities, particularly in cases where there are critical network performance issues.
• Integration with Coverage Platform: The work order management module (212) interfaces with the coverage platform to ensure that the automated call tests align with the platform’s capabilities.
• Prioritization of Work Orders: If there are multiple work orders for different types of automated call tests, the work order management module (212) may prioritize them based on the urgency of the network performance issues. High-priority work orders are executed first, addressing critical issues that could impact user experience or service reliability.
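A minimal sketch of the work-order creation and prioritization flow described in the steps above, assuming a simple in-memory queue; the class and field names are illustrative assumptions, not the claimed data model.

```python
# Sketch only: in-memory work-order queue with priority-first dispatch.
from dataclasses import dataclass
from typing import List

@dataclass
class WorkOrder:
    order_id: str
    test_type: str            # e.g. "voice_call" or "data_session"
    target_areas: List[str]   # geographical regions or coverage zones
    metrics: List[str]        # KPIs the test should measure
    priority: int = 0         # higher value = more urgent network issue

class WorkOrderManager:
    def __init__(self) -> None:
        self.queue: List[WorkOrder] = []

    def create(self, order: WorkOrder) -> None:
        self.queue.append(order)

    def next_order(self) -> WorkOrder:
        # High-priority work orders are dispatched first.
        self.queue.sort(key=lambda o: o.priority, reverse=True)
        return self.queue.pop(0)

manager = WorkOrderManager()
manager.create(WorkOrder("WO-0001", "voice_call", ["zone-12"],
                         ["cssr", "latency"], priority=5))
manager.create(WorkOrder("WO-0002", "data_session", ["zone-7"],
                         ["throughput"], priority=1))
assert manager.next_order().order_id == "WO-0001"  # higher priority first
```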
[00154] At step 504, the method (500) includes sending, by the communication module (214), via the FCM server (114), a push notification to one of a plurality of user equipments selected based on a device identifier (ID) stored in the database. The device identifier (ID) may be any of several types of unique identifiers used to distinguish a specific user equipment (UE) within the network. These identifiers allow the system to accurately target devices for tests or other network management tasks. In an example, the at least one device identifier may be an IMEI (International Mobile Equipment Identity), an MSISDN (Mobile Station International Subscriber Directory Number), a MAC address (Media Access Control address), a UUID (Universally Unique Identifier), a device serial number, or a subscription permanent identifier (SUPI).
[00155] The communication module (214) is responsible for facilitating the transmission of information between the system and the user equipment. The communication module (214) leverages the FCM server (114), a reliable and efficient messaging platform, to send push notifications to the selected user equipment. The push notification contains the information required to execute the automated call test: specifically, a script defining procedures for the automated call test and a scheduled date and time for executing the test. The script provides a comprehensive set of instructions that allows the user equipment to execute the test autonomously without requiring user intervention. The selection of the user equipment (108) is based on the device identifier stored in a database, ensuring that the appropriate devices are chosen for specific tests. This selection process may involve considerations such as the device's location, capabilities, and previous test history. Based on this device identifier, the work order management module (212) assigns the work order to the user equipment. Furthermore, the work order management module (212) may schedule multiple work orders for automated call tests to be executed by the plurality of user equipments over a specified time interval of at least one month. This long-term scheduling capability allows for comprehensive and ongoing network performance monitoring. The work order management module (212) also implements a distribution algorithm that ensures coverage across different geographical areas and time periods, providing a balanced and representative sample of network performance data. The distribution algorithm allocates and schedules automated call tests across different user devices, geographical areas, and time periods to ensure comprehensive network testing coverage.
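The disclosure describes the distribution algorithm only at the level of its goal (coverage across devices, areas, and time periods), so the round-robin scheme below is one plausible sketch under that assumption; the device identifiers and test counts are invented for illustration.

```python
# Illustrative round-robin distribution of tests over devices and time slots.
from datetime import datetime, timedelta
from itertools import cycle

def distribute_tests(device_ids, start: datetime, days: int, per_day: int):
    """Yield (device_id, scheduled_time) pairs spread over the interval."""
    devices = cycle(device_ids)  # rotate through the registered devices
    for day in range(days):
        for slot in range(per_day):
            # Space each day's tests evenly across 24 hours.
            when = start + timedelta(days=day, hours=24 * slot / per_day)
            yield next(devices), when

# One month of four tests per day across three devices = 120 scheduled tests.
schedule = list(distribute_tests(["IMEI-A", "IMEI-B", "IMEI-C"],
                                 datetime(2025, 2, 3), days=30, per_day=4))
print(len(schedule), schedule[0])
```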
[00156] In an operative aspect, the sending of the push notification by the communication module (214) may include the following steps (an illustrative code sketch follows the list):
• The communication module (214) first identifies a specific user equipment (108) based on a device identifier (ID). The device identifier could be a unique identifier like a phone number, MAC address, IMEI number, or another type of device-specific ID that links the user equipment to a particular record in the system. The device ID is retrieved from the database (208), which stores device identifiers along with associated user or device information.
• The communication module queries the database (208) to locate the device ID of the user equipment (108) that needs to receive the push notification. This could involve searching for a specific user or device based on various factors like location, network conditions, or user preferences.
• Once the correct user equipment (108) is identified, the communication module (214) prepares the push notification. The push notification may contain various types of information, such as network alerts, test results, or system messages, depending on the purpose of the communication.
• Using network protocols (for example, HTTP or WebSocket), the communication module (214) then sends the push notification to the selected user equipment (108). This is often done through a push notification service (such as Apple Push Notification Service (APNS) for iOS or Firebase Cloud Messaging (FCM) for Android). The communication module interfaces with these services to deliver the notification to the targeted device.
• The push notification is delivered to the user equipment (108) via the push notification service, which ensures that the message is sent efficiently to the right device. The user equipment (108) receives the notification, which is typically displayed on the device screen, alerting the user to the event or information being communicated.
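As a concrete illustration of the delivery step, the sketch below sends a data message through Firebase Cloud Messaging using the Firebase Admin SDK for Python. The disclosure names FCM but no particular SDK, so the SDK choice, the service-account path, and the hypothetical token_for() lookup are all assumptions.

```python
# Plausible FCM delivery step; SDK choice and credential path are assumptions.
import firebase_admin
from firebase_admin import credentials, messaging

# Placeholder credential file; a real deployment supplies its own.
firebase_admin.initialize_app(credentials.Certificate("service-account.json"))

def send_test_notification(fcm_token: str, payload: dict) -> str:
    """Deliver the work-order payload to one device as an FCM data message."""
    message = messaging.Message(
        data={k: str(v) for k, v in payload.items()},  # FCM data values must be strings
        token=fcm_token,  # registration token looked up via the device ID
    )
    return messaging.send(message)  # returns the FCM message ID

# Usage (token_for() is a hypothetical database lookup keyed by device ID):
# message_id = send_test_notification(token_for("IMEI-A"), work_order_payload)
```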
[00157] In an aspect, the scheduled automated call tests are planned for two devices in different locations to assess network performance. The network operator retrieves the unique device identifiers (IDs) for each user equipment (UE) from the database. For instance, device 1, located in New York, is set for a voice call test at 10:00 AM UTC, with parameters to measure call success rate, call drop rate, and latency. Similarly, device 2, located in San Francisco, is scheduled for a test at 10:15 AM UTC. Once the tests are scheduled, the communication module (214) sends push notifications to each device to inform the users about the upcoming test. At the designated times, the tests are executed on both devices, with the communication module coordinating the execution across a distributed network of speed test servers. After completion, the results are collected: device 1 in New York shows a high call success rate but detects a slight increase in call drop rate, while device 2 in San Francisco performs well with minimal issues. These results are compiled into a report for further analysis, helping the operator identify areas needing optimization, such as the New York region.
[00158] In another aspect, the UE is selected based on the work order that specifies the parameters for the automated call test. The work order management module (212) creates a work order that includes details such as the test type, location, and required metrics to be measured. Once the work order is created, the module identifies which UEs are to be tested by referencing the work order's target criteria, such as the geographic area or network conditions. For instance, if the work order specifies a test in New York, the system selects Device 1 located in that area. Similarly, if the work order for a test in San Francisco is created, Device 2 is chosen. After selecting the appropriate UEs, the communication module (214) sends push notifications to the devices to inform them of the scheduled tests. The selected devices then undergo the automated call tests as per the specifications in the work orders, with the results later used to evaluate network performance and make necessary optimizations. In another example, the work order is assigned through the web portal to users whose devices are registered in the database. In this process, the web portal serves as the interface through which the network operator or system
administrator may create and assign the work order. The portal allows the operator to select specific devices from the database, where the UE is registered with its unique device identifier (ID).
[00159] At step 506, the method (500) includes executing, by the data processing module (216), the scheduled automated call test on the selected user equipment (108). This execution involves initiating the automated call test at a time specified in the work order and performing the automated call test as defined in the push notification. The automated call test comprises at least one of a short call test or a long call test. The execution of the automated call test is a critical step in evaluating network performance. The data processing module (216) manages this process, ensuring that the test is conducted according to the parameters set in the work order and the push notification. The timing of the test initiation is determined by the work order. For example, if the work order specifies that the test should be conducted at 3:00 AM to capture network performance during off-peak hours, the data processing module (216) will trigger the test at exactly that time. This precise scheduling allows for consistent and comparable results across multiple tests and locations.
[00160] In an embodiment, the push notification sent to the user equipment (108) contains detailed instructions for performing the automated call test. These instructions dictate whether the test will be a short call test, a long call test, or a combination of both. In the case of a short call test, the user equipment (108) may be instructed to perform a series of brief connections. For instance, the test might involve making 30 calls, each lasting 10 seconds, over a 5-minute period. This rapid succession of short calls helps assess the network's ability to handle frequent connection requests and quick disconnections, simulating scenarios like busy call centers or areas with high call turnover. For the long call test, the user equipment (108) establishes and maintains a single, extended connection. As an example, this could involve initiating a call and keeping it active for 30 minutes. During this extended call, the system continuously monitors various performance metrics, providing insight into the network's ability to maintain stable, high-quality
connections over longer durations. This type of test is particularly useful for evaluating network performance for users who engage in lengthy calls, such as conference calls or long-distance conversations.
[00161] Throughout the execution of these tests, the data processing module (216) collects one or more key performance indicators (KPIs). These KPIs include a call setup success rate (CSSR), which measures the likelihood of successfully establishing a call, an evolved radio access bearer (E-RAB) drop rate indicating the frequency of unexpected disconnections, interference levels that affect signal quality, handover success and failure rates crucial for understanding network stability during user movement, codec details for the specified geographical area providing insights into voice quality, and traffic capacity which helps in understanding network load and congestion. The specific KPIs gathered may vary depending on the nature of the test and the particular aspects of network performance being evaluated. By executing the automated call test in this manner, the method enables the collection of valuable, real-world performance data without the need for manual intervention or disruption to regular network usage. This automated, scheduled approach to testing allows for consistent evaluation of network performance across various conditions, times, and locations, providing network operators with essential insights for service optimization and quality improvement.
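The short call test described above lends itself to a compact sketch. The device telephony interface is represented by a hypothetical simulator so the example is self-contained; the success probability and setup times are invented for illustration and are not measured values.

```python
# Sketch of the short call test: rapid connect/disconnect cycles with CSSR.
import random
from dataclasses import dataclass

@dataclass
class CallOutcome:
    setup_ok: bool          # whether the call was established
    setup_time_ms: float    # time taken to set up the call

class SimulatedDialer:
    """Hypothetical stand-in for the device's telephony interface."""
    def call(self, number: str, hold_seconds: float) -> CallOutcome:
        # A real dialer would dial, hold for hold_seconds, then hang up.
        return CallOutcome(setup_ok=random.random() < 0.97,
                           setup_time_ms=random.uniform(300, 900))

def run_short_call_test(dialer, number: str, calls: int = 30,
                        hold_s: float = 10) -> dict:
    """30 calls of 10 s each; pacing across the 5-minute window is omitted."""
    outcomes = [dialer.call(number, hold_seconds=hold_s) for _ in range(calls)]
    return {
        "cssr": sum(o.setup_ok for o in outcomes) / calls,
        "attempts": calls,
        "avg_setup_ms": sum(o.setup_time_ms for o in outcomes) / calls,
    }

print(run_short_call_test(SimulatedDialer(), "1800XXXXXXX"))
```

A long call test would analogously hold a single session open (for example, for 30 minutes) while sampling quality metrics at regular intervals.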
[00162] The method (500) further comprises several additional steps for receiving, by the data processing module (216), collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment in the background as defined in the push notification. The data processing module (216) is a critical component that handles the collection, processing, and analysis of the test results. The automated call test is executed by the user equipment through a background process that operates independently of active user applications, ensuring that the test does not interfere with the user's normal device usage. The plurality of KPIs collected by the data processing module (216) provides a comprehensive view of network performance.
The data processing module (216) initiates the automated call test by instructing the user equipment to automatically dial a toll-free number, monitors the progress of the test in real-time, and automatically ends the test upon completion. This process involves collecting final test data from the user equipment, releasing network resources allocated for the test, and sending a completion notification to the user equipment (108).
[00163] The method (500) further comprises several additional steps for recording, by the database management module (218), the received collected data of the automated call test. The database management module (218) is responsible for storing, organizing, and managing the vast amount of data generated by the automated call tests. It ensures that the collected data is properly recorded in the central database, making it available for further analysis and reporting. The recording process involves storing the raw data and organizing it in a way that facilitates easy retrieval and analysis. The database management module (218) stores historical test data and supports the retrieval of this stored data for analysis and reporting purposes. This historical data is invaluable for identifying long-term trends and patterns in network performance.
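A minimal recording sketch follows, using an in-memory SQLite table purely as a stand-in for the central database (208); the table schema is an assumption made for illustration.

```python
# Sketch only: SQLite stands in for the central database of the disclosure.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS call_test_results (
    work_order_id TEXT, device_id TEXT, executed_at TEXT,
    kpi_name TEXT, kpi_value REAL)""")

def record_results(work_order_id: str, device_id: str,
                   executed_at: str, kpis: dict) -> None:
    """Store each collected KPI as one row keyed to the work order."""
    conn.executemany(
        "INSERT INTO call_test_results VALUES (?, ?, ?, ?, ?)",
        [(work_order_id, device_id, executed_at, name, value)
         for name, value in kpis.items()])
    conn.commit()

record_results("WO-0001", "IMEI-A", "2025-02-03T03:00:00Z",
               {"cssr": 0.98, "erab_drop_rate": 0.01})
print(conn.execute("SELECT COUNT(*) FROM call_test_results").fetchone())  # (2,)
```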
[00164] The method (500) further comprises several additional steps and features that enhance its functionality and value. The automated call test includes both a short call test and a long call test. The short call test is designed to measure rapid connection and disconnection performance, providing insights into the network's ability to establish and terminate calls quickly. The long call test, on the other hand, is designed to evaluate sustained connection quality and stability, offering a view of the network's performance over extended periods. The user equipment (108) executes both these tests as specified in the work order, providing a comprehensive assessment of network performance under different scenarios.
[00165] The data processing module (216) performs several critical functions beyond data collection. The data processing module (216) compares the collected KPIs against predefined thresholds, allowing for quick identification of
performance issues. The data processing module (216) also identifies trends and patterns in network performance based on the collected KPIs, providing valuable insights into the network's behavior over time. Based on this comparison and identified trends, the data processing module (216) generates alerts for detected anomalies or persistent issues, enabling proactive network management. Furthermore, the data processing module (216) generates performance reports based on the collected data, including trends of the KPIs over the specified time interval. These reports are crucial for understanding network performance, identifying areas for improvement, and making informed decisions about network optimization.
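The threshold comparison and alert generation can be illustrated with a short sketch; the specific KPI limits and the alert format are assumptions, as the disclosure leaves the predefined thresholds unspecified.

```python
# Predefined, direction-aware KPI limits; the values are assumptions.
THRESHOLDS = {
    "cssr": ("min", 0.95),            # alert if the rate falls below 95 %
    "erab_drop_rate": ("max", 0.02),  # alert if the rate rises above 2 %
}

def check_kpis(kpis: dict) -> list:
    """Compare collected KPIs against thresholds and emit alert strings."""
    alerts = []
    for name, value in kpis.items():
        if name not in THRESHOLDS:
            continue  # KPI has no configured threshold
        direction, limit = THRESHOLDS[name]
        breached = value < limit if direction == "min" else value > limit
        if breached:
            alerts.append(f"{name}={value} breached {direction} limit {limit}")
    return alerts

print(check_kpis({"cssr": 0.91, "erab_drop_rate": 0.01}))
# ['cssr=0.91 breached min limit 0.95']
```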
[00166] The system (102) also includes the user interface module (220) that provides a user-friendly interface for various functions. This interface allows scheduling automated call tests with customizable parameters, giving network administrators flexibility in designing tests to meet specific needs. The user interface module (220) also enables viewing results of completed tests, providing quick access to performance data. The interface supports accessing historical performance data, allowing for long-term trend analysis. Additionally, the user interface module (220) allows for configuring alerts for specific performance thresholds, enabling proactive monitoring of critical network parameters.
[00167] The data processing module (216) further enhances the value of the collected data through several advanced processing techniques. The data processing module (216) aggregates the collected data from multiple user equipments, providing a comprehensive view of network performance across various devices and locations. The data processing module (216) normalizes data from diverse device types, ensuring that data from different sources can be meaningfully compared and analyzed. The data processing module (216) categorizes performance metrics based on network technologies, allowing for technology-specific performance analysis. Based on this aggregated and categorized data, the module generates coverage maps, providing a visual representation of network performance across geographical areas.
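A sketch of the aggregation and categorization step feeding a coverage map: results are grouped by area and network technology, and each group is reduced to a summary KPI. The sample values and grouping keys are illustrative assumptions.

```python
# Sketch: group per-device results by (area, technology) for a coverage map.
from collections import defaultdict
from statistics import mean

# (area, technology, CSSR) tuples collected from many user equipments.
results = [
    ("zone-12", "LTE", 0.97), ("zone-12", "LTE", 0.93),
    ("zone-12", "5G", 0.99), ("zone-7", "LTE", 0.88),
]

grouped = defaultdict(list)
for area, tech, cssr in results:
    grouped[(area, tech)].append(cssr)

# Each (area, technology) cell of a coverage map carries the aggregate KPI.
coverage = {key: mean(values) for key, values in grouped.items()}
print(coverage)  # e.g. {('zone-12', 'LTE'): 0.95, ('zone-12', '5G'): 0.99, ...}
```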
[00168] In another exemplary embodiment, the user equipment (108) for facilitating an automated call test is described. The UE is configured to receive from a communication module (214) of a system (102), a push notification comprising instructions for an automated call test. The UE (108) is selected based on a device identifier (ID) stored in the database (208), ensuring that the appropriate device is chosen for the specific test. Upon receiving the push notification, the UE (108) executes the automated call test in the background as defined in the push notification. This background execution ensures that the test does not interfere with the user's normal device usage. During the test, the UE collects data comprising a plurality of key performance indicators (KPIs) from the executed automated call test. These KPIs provide a comprehensive view of network performance from the user's perspective. After completing the test, the UE (108) transmits the collected data to the data processing module (216) of the system (102). Finally, the UE receives a completion notification from the system (102) upon successful recording of the collected data by the database management module (218) of the system (102). This notification confirms that the test data has been successfully recorded and is ready for analysis.
[00169] The automated call test facilitated by the UE (108) plays a crucial role in network performance monitoring and optimization. By executing these tests in the background across numerous devices, network operators can gather real-world performance data at scale. This data provides invaluable insights into network behavior across various conditions and locations, enabling targeted improvements and optimizations. The ability to conduct both short and long call tests allows for a comprehensive assessment of network performance, covering both rapid connection scenarios and sustained usage situations. The collection of a wide range of KPIs enables detailed analysis of different aspects of network performance, from call setup success to handover efficiency and voice quality. By participating in these automated tests, the UE becomes an active contributor to the ongoing process of network enhancement, leading to improved service quality for all users.
[00170] FIG. 6 illustrates an exemplary computer system (600) in which or with which the embodiments of the present disclosure may be implemented.
[00171] As shown in FIG. 6, the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), a communication port(s) (660), and a processor (670). A person skilled in the art will appreciate that the computer system (600) may include more than one processor and communication ports. The processor (670) may include various modules associated with embodiments of the present disclosure. The communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (600) connects.
[00172] In an embodiment, the main memory (630) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (640) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (670). The mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[00173] In an embodiment, the bus (620) may communicatively couple the processor(s) (670) with the other memory, storage, and communication blocks. The bus (620) may be, e.g. a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus
(USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system (600).
[00174] In another embodiment, operator, and administrative interfaces, e.g., a display, keyboard, and cursor control device may also be coupled to the bus (620) to support direct operator interaction with the computer system (600). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (660). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.
[00175] In another exemplary embodiment, the computer system (600) includes a non-transitory computer-readable storage medium storing computer-executable instructions. When executed by one or more processors, the instructions cause the one or more processors to perform a method for scheduling an automated call test on a User Equipment (UE) (108) based on a work order. The method comprises creating a work order for an automated call test in a coverage platform using a work order management module (212). A communication module (214) sends a push notification via a Firebase Cloud Messaging (FCM) server (114) to a selected UE (108) based on a device identifier (ID) stored in a database (208). A data processing module (216) receives collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the UE (108) in the background as defined in the push notification. Finally, a database management module (218) records the received collected data of the automated call test. This method enables efficient scheduling and execution of automated call tests on user devices, facilitating the collection of network performance data without manual intervention.
[00176] The present disclosure provides technical advancement related to automated network performance testing and optimization. This advancement
addresses the limitations of existing solutions by introducing a comprehensive, user equipment-based system for real-time network performance monitoring and analysis. The disclosure involves a sophisticated work order management and execution system that leverages cloud messaging and background processes on user devices, which offers significant improvements in the scale, frequency, and geographic coverage of network testing. By implementing automated call tests that run independently of user interaction, the disclosed invention enhances the ability to collect real-world performance data across diverse network conditions and locations, resulting in more accurate and actionable insights for network optimization. The system's capability to schedule, execute, and analyze both short and long-duration tests across multiple devices simultaneously represents a significant leap forward in network performance assessment. This not only streamlines the process of identifying and addressing network issues but also enables proactive optimization strategies based on comprehensive, real-time data. The integration of advanced data processing techniques, including KPI threshold comparisons, trend analysis, and automated alert generation, further amplifies the system's value in maintaining and improving network quality. This technical advancement contributes to more efficient network operations, improved user experiences, and accelerated optimization of network resources across diverse geographical areas.
[00177] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00178] The present disclosure provides a system and method for scheduling and executing automated call tests on user equipment without manual intervention. This automation significantly reduces the need for human resources in network testing and allows for more frequent and comprehensive performance assessments.
[00179] The present disclosure enables the collection of real-time network Key Performance Indicators (KPIs) directly from user devices. This approach provides more accurate and representative data of actual user experiences compared to traditional network testing methods.
[00180] The present disclosure facilitates benchmarking against other operators, allowing network providers to assess their performance relative to competitors. This comparative data is crucial for identifying areas of improvement and maintaining a competitive edge in the telecommunications market.
[00181] The present disclosure improves the consumer experience by providing a user-friendly dashboard on a web portal. This dashboard allows for easy visualization of network performance data, enabling technical and non-technical users to understand and act on network quality information.
[00182] The present disclosure offers a flexible scheduling system to manage multiple work orders across various devices and geographical areas. This capability ensures comprehensive network coverage and allows for targeted testing in specific locations or time periods.
[00183] The present disclosure incorporates short and long call tests, providing insights into rapid connection performance and sustained call quality. This dual approach offers a more nuanced understanding of network behavior under different usage scenarios.
[00184] The present disclosure includes advanced data processing capabilities, such as KPI threshold comparisons, trend analysis, and automated alert
generation. These features enable proactive network management and rapid response to emerging issues.
[00185] The present disclosure allows for customization of test parameters, enabling network operators to focus on specific aspects of performance or adapt tests to particular network configurations or technologies.
Claims
1. A system (102) for scheduling an automated call test on a user equipment (108), the system (102) comprising:
   a memory (204); and
   one or more processors (202), wherein the one or more processors (202) are configured to execute instructions stored in the memory (204) to:
      create, by a work order management module (212), a work order for the automated call test in a coverage platform;
      send, by a communication module (214), a push notification to one of a plurality of user equipments (108) selected based on a device identifier (ID) stored in a database (208); and
      execute, by a data processing module (216), the scheduled automated call test on the selected user equipment (108) by:
         initiating the automated call test at a time specified in the created work order; and
         performing the automated call test as defined in the push notification.
2. The system (102) of claim 1, wherein the push notification includes a script defining procedures for the automated call test, a scheduled date and time for executing the automated call test, and wherein the user equipment (108) is configured to execute the automated call test based on the script, scheduled date and time received in the push notification.
3. The system (102) of claim 1, wherein the one or more processors (202) are further configured to:
   receive, by the data processing module (216), collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment (108), wherein the plurality of KPIs comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity; and
   record, by a database management module (218), the received collected data of the automated call test.
4. The system (102) of claim 1, wherein the defined automated call test is one of a short call test or a long call test, wherein the short call test is designed to measure rapid connection and disconnection performance, and the long call test is designed to evaluate sustained connection quality and stability.
5. The system (102) of claim 1, wherein the system further comprises a user interface module (220) configured to provide a user interface for:
   scheduling the automated call test with customizable parameters;
   viewing results of the executed call test;
   accessing historical performance data; and
   configuring alerts for defined KPIs.
6. The system (102) of claim 1, wherein the work order management module (212) is further configured to:
   create one or more work orders for different types of automated call tests, including voice calls and data sessions;
   assign priorities to the one or more work orders based on network performance urgency; and
   manage a distributed network of speed test servers for conducting the automated call test.
7. A method (500) for scheduling an automated call test on a user equipment (108), the method (500) comprising:
   creating (502), by a work order management module (212), a work order for the automated call test in a coverage platform;
   sending (504), by a communication module (214), a push notification to one of a plurality of user equipments (108) selected based on a device identifier (ID) stored in a database (208); and
   executing (506), by a data processing module (216), the scheduled automated call test on the selected user equipment (108) by:
      initiating the automated call test at a time specified in the work order; and
      performing the automated call test as defined in the push notification.
8. The method (500) of claim 7, wherein the push notification includes a script defining procedures for the automated call test, a scheduled date and time for executing the automated call test, and wherein the user equipment (108) executes the automated call test based on the script, scheduled date and time received in the push notification.
9. The method (500) of claim 7, further comprising:
   receiving, by the data processing module (216), collected data comprising a plurality of key performance indicators (KPIs) from the automated call test executed by the user equipment (108), wherein the plurality of KPIs comprises a call setup success rate (CSSR), an evolved radio access bearer (E-RAB) drop rate, an interference level, a handover success rate and failure rate, codec details for a specified geographical area, and a traffic capacity; and
   recording, by a database management module (218), the received collected data of the automated call test.
10. The method (500) of claim 7, wherein the defined automated call test is one of a short call test or a long call test, wherein the short call test is designed to measure rapid connection and disconnection performance, and the long call test is designed to evaluate sustained connection quality and stability.
11. The method (500) of claim 7, further comprising:
   creating, by the work order management module (212), one or more work orders for different types of automated call tests, including voice calls and data sessions;
   assigning, by the work order management module (212), priorities to the one or more work orders based on network performance urgency; and
   managing, by the work order management module (212), a distributed network of speed test servers for conducting the automated call tests.
12. A User Equipment (UE) (108) for facilitating an automated call test, the UE (108) being configured to:
   receive, from a communication module (214) of a system (102), a push notification comprising instructions for the automated call test, wherein the UE (108) is selected based on a device identifier (ID) stored in a database (208); and
   execute, by a data processing module (216), the scheduled automated call test on the selected user equipment (108) by:
      initiating the automated call test at a time specified in a work order created for the automated call test in a coverage platform; and
      performing the automated call test as defined in the push notification.
13. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform a method for scheduling an automated call test on a user equipment (108), the method comprising:
   creating, by a work order management module (212), a work order for the automated call test in a coverage platform;
   sending, by a communication module (214), via a Firebase Cloud Messaging (FCM) server (114), a push notification to one of a plurality of user equipments (108) selected based on a device identifier (ID) stored in a database (208); and
   executing, by a data processing module (216), the scheduled automated call test on the selected user equipment (108) by:
      initiating the automated call test at a time specified in the work order; and
      performing the automated call test as defined in the push notification.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202421022025 | 2024-03-22 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025196790A1 (en) | 2025-09-25 |
Family
ID=97138666
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2025/050128 (WO2025196790A1, pending) | System and method for scheduling an automated call test on a user device | 2024-03-22 | 2025-02-03 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025196790A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20150049343A * | 2013-10-30 | 2015-05-08 | Innowireless Co., Ltd. | Method for collecting log data of automatic call test |
| WO2016070935A1 (en) * | 2014-11-07 | 2016-05-12 | Nokia Solutions And Networks Oy | Network-controlled terminal data call performance testing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25773765; Country of ref document: EP; Kind code of ref document: A1 |