WO2025083709A1 - System and method for handling calls based on thread affinity - Google Patents
- Publication number
- WO2025083709A1 (PCT/IN2024/052075)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- call request
- application thread
- network
- thread
- processing engine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5033—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/10—Architectures or entities
- H04L65/1016—IP multimedia subsystem [IMS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1069—Session establishment or de-establishment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/146—Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/503—Resource availability
Definitions
- the present disclosure generally relates to the field of wireless communication systems. More particularly, the present disclosure relates to a system and a method for handling call requests in a network based on thread affinity.
- the expression 'thread affinity' used hereinafter in the specification refers to a scheduling strategy where a thread is preferentially assigned to run on a specific processor or core within a multi-core system.
- the thread affinity allows efficient use of computational resources and improves overall system performance.
- Thread affinity is an optimization technique in multi-core processors, where a thread is preferentially scheduled to run on the same core for extended periods.
- the practice enhances performance primarily through improved cache locality, as the processor’s cache hierarchy (L1, L2, L3) retains frequently accessed data for quick retrieval, minimizing latency when the thread accesses the data.
- the processor's cache hierarchy consists of three levels: L1, L2 and L3.
- the L1 level is the fastest and smallest and is dedicated to individual cores
- the L2 level is larger and slightly slower, either core-specific or shared
- the L3 level is the largest and slowest, shared among multiple cores.
- the expression 'lock contention' used hereinafter in the specification refers to a situation where two or more processes are competing for access to a shared resource, such as a file or a network connection. This can lead to delays, errors, and decreased productivity.
- SIP Session Initiation Protocol
- VoIP voice over IP
- IMS IP Multimedia Subsystem
- IP Internet Protocol
- IMS is a standardized architecture used to deliver IP-based multimedia services, such as voice, video, and messaging, over broadband networks.
- MNP Mobile Number Portability
- OCS Online Charging System
- the expression ‘DRA’ used hereinafter in the specification refers to a Diameter Routing Agent.
- the DRA is a network component used in Diameter-based systems to manage and route Diameter messages between nodes in a telecommunications network.
- Diameter is a protocol used for authentication, authorization, and accounting (AAA) in network environments.
- CRBT Caller Ringback Tones.
- the CRBT are personalized audio tones that a caller hears while waiting for his call to be answered, replacing the standard ringing sound with music, messages, or other content chosen by a call recipient.
- MRF Media Resource Function
- EMS used hereinafter in the specification refers to an Element Management System.
- the EMS is a network management system that focuses on the configuration, monitoring, and maintenance of individual network elements or devices.
- OSS used hereinafter in the specification refers to an Operational Support System.
- the OSS is a comprehensive framework used for managing, controlling, and optimizing telecommunications network operations and services, including provisioning, fault management, and performance monitoring.
- the expression ‘BSS’ used hereinafter in the specification refers to a Business Support System.
- the BSS is a set of software applications and tools used for managing and supporting business processes in telecommunications, such as billing, customer relationship management, and service provisioning.
- Load Balancer used hereinafter in the specification refers to a device or software application that distributes network or application traffic across multiple servers to ensure no single server becomes overwhelmed.
- TAS Telephony Application Server
- the TAS is a platform that provides telephony services and application support, enabling advanced call processing, messaging, and integration with communication networks.
- Record-Route header used hereinafter in the specification refers to a SIP header used to manage the routing of requests within a call session.
- the record-route header instructs intermediate network elements (like proxies) to store the route information.
- the record-route header ensures that all subsequent requests in the session follow the same path, allowing for consistent handling of the call.
- the record-route header ensures that all subsequent requests within a call session are routed through the same network elements as the initial request, maintaining consistent call handling.
- the expression ‘Round-robin approach’ used hereinafter in the specification is a scheduling method where requests or tasks are assigned to a pool of resources (such as threads or servers) in a cyclic order. Each resource receives an equal opportunity to handle a task before the next one in the sequence is chosen.
- application thread is a concurrent execution unit within a program that performs tasks like handling HTTP (Hypertext Transfer Protocol) requests in a web server or managing background operations in a desktop app.
- Session initiation protocol is a signaling protocol that is used for setting up multimedia sessions.
- SIP is the core protocol in Internet Protocol (IP) multimedia subsystem (IMS) architecture.
- Race conditions arise when multiple threads attempt to access and modify shared resources concurrently, without proper synchronization. This can lead to unpredictable and erroneous behavior, as threads may interfere with each other, resulting in data corruption or inconsistent states. Managing race conditions requires intricate and often cumbersome synchronization mechanisms to ensure that threads do not conflict, adding complexity to the system’s design and implementation.
- Lock contention occurs when multiple threads compete for access to the same resources or critical sections of code. This competition can create performance bottlenecks, where threads are forced to wait for locks to be released before they can proceed. The increased wait times and the overhead associated with acquiring and releasing locks can significantly degrade system performance, leading to increased latency and reduced throughput.
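- For illustration only (not part of the disclosed system), the lock contention described above can be reduced to a minimal Java sketch: several threads updating one shared counter must all serialize on the same lock, so adding threads mainly adds waiting rather than throughput.

```java
// Illustration only: lock contention on one shared counter.
// Eight threads all funnel through the same lock, so added threads mostly add waiting time.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LockContentionDemo {
    private long sharedCounter = 0;
    private final Object lock = new Object();

    void update() {
        synchronized (lock) {   // every thread must acquire this lock before it can proceed
            sharedCounter++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockContentionDemo demo = new LockContentionDemo();
        ExecutorService threads = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100_000; i++) {
            threads.execute(demo::update);
        }
        threads.shutdown();
        threads.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(demo.sharedCounter);   // correct, but only because every update serialized on the lock
    }
}
```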
- An objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that avoids multiple thread race conditions during the call handling.
- Another objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that reduces lock contention problems in Central Processing Unit (CPU) scheduling.
- CPU Central Processing Unit
- Yet another objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that optimizes CPU resources by binding all responses associated with the client transaction ID to the application thread.
- Still another objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that optimizes CPU resources by binding all requests and responses associated with the session or call to the application thread.
- the present disclosure relates to a method for handling a call request in a network.
- the method includes receiving, by a processing engine, the call request from a user.
- the method includes generating, by the processing engine, a client transaction identifier (ID) associated with the received call request.
- the method includes selecting, by the processing engine, an application thread associated with the received call request.
- the method includes extracting, by the processing engine, an application thread identifier (ID) associated with the selected application thread.
- the method includes binding, by the processing engine, the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the method includes communicating, by the processing engine, the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
- the method further includes inserting, by the processing engine, the application thread ID in a record-route header of the received call request during an initial setup of a call session associated with the received call request.
- the application thread is selected using a round-robin approach.
- the method further comprising binding, by the processing engine, one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
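- As an illustrative, non-limiting sketch of the method summarized above (class and method names such as ThreadAffinityFlow, AppThread and handleCallRequest, and the ID formats, are assumptions for this example only), the following Java fragment generates a client transaction ID, selects an application thread in round-robin order, binds the two IDs, and hands the request to the selected thread:

```java
// Illustrative, non-limiting sketch of the summarized method; names and ID formats are assumptions.
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadAffinityFlow {
    record AppThread(String threadId, ExecutorService executor) {}

    private final List<AppThread> pool = List.of(
            new AppThread("THREAD-1", Executors.newSingleThreadExecutor()),
            new AppThread("THREAD-2", Executors.newSingleThreadExecutor()));
    private final AtomicInteger next = new AtomicInteger();
    // "Binded information": client transaction ID mapped to the application thread ID.
    private final Map<String, String> binding = new ConcurrentHashMap<>();

    public void handleCallRequest(String callRequest) {
        String clientTransactionId = "TXN-" + UUID.randomUUID();              // generate the client transaction ID
        AppThread thread = pool.get(next.getAndIncrement() % pool.size());    // select an application thread (round-robin)
        binding.put(clientTransactionId, thread.threadId());                  // bind thread ID to client transaction ID
        // Communicate the request together with the binding to the selected application thread.
        thread.executor().execute(() -> System.out.println(
                thread.threadId() + " processes " + clientTransactionId + ": " + callRequest));
    }

    public static void main(String[] args) {
        ThreadAffinityFlow engine = new ThreadAffinityFlow();
        engine.handleCallRequest("SIP INVITE from user A");
        engine.handleCallRequest("SIP INVITE from user C");
        engine.pool.forEach(t -> t.executor().shutdown());
    }
}
```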
- the present disclosure relates to a system for handling a call request in a network.
- the system includes a memory, and a processing engine configured to execute a set of instructions stored in the memory to receive the call request from a user.
- the processing engine is configured to generate a client transaction identifier (ID) associated with the received call request.
- the processing engine is configured to select an application thread associated with the received call request.
- the processing engine is configured to extract an application thread identifier (ID) associated with the selected application thread.
- the processing engine is configured to bind the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the processing engine is configured to communicate the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
- the present disclosure relates to a user equipment (UE) communicatively coupled with a network.
- the coupling comprises steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE and transmitting a plurality of signals in response to the connection request.
- the call request in the network is handled by a method that includes receiving, by a processing engine, the call request from a user.
- the method includes generating, by the processing engine, a client transaction identifier (ID) associated with the received call request.
- the method includes selecting, by the processing engine, an application thread associated with the received call request.
- the method includes extracting, by the processing engine, an application thread identifier (ID) associated with the selected application thread.
- the method includes binding, by the processing engine, the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the method includes communicating, by the processing engine, the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
- the present disclosure relates to a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for handling a call request in the network.
- the method includes receiving, by a processing engine, the call request from a user.
- the method includes generating, by the processing engine, a client transaction identifier (ID) associated with the received call request.
- the method includes selecting, by the processing engine, an application thread associated with the received call request.
- the method includes extracting, by the processing engine, an application thread identifier (ID) associated with the selected application thread.
- the method includes binding, by the processing engine, the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the method includes communicating, by the processing engine, the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
- FIG. 1 illustrates an exemplary network architecture for implementing a system for handling a call request in a network, in accordance with an embodiment of the present disclosure.
- FIG. 2 illustrates an exemplary block diagram of the system, in accordance with an embodiment of the present disclosure.
- FIG. 3 illustrates an exemplary system architecture of the system, in accordance with an embodiment of the present disclosure.
- FIG. 4 illustrates an exemplary flow diagram of thread selection, in accordance with an embodiment of the present disclosure.
- FIG. 5 illustrates another exemplary flow diagram of a method for handling the call request in the network, in accordance with an embodiment of the present disclosure.
- FIG. 6 illustrates an example computer system in which or with which the embodiments of the present disclosure may be implemented.
- UEs User Equipments
- IP Internet protocol
- IMS IP Multimedia Subsystem
- MNP Mobile number portability
- OCS Online charging system
- DRA Diameter routing agent
- MRF Media resource function
- EMS Element management system
- OSS Operational support system
- BSS Business support system
- TAS Telephony Application Server
- individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
- “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration.
- the subject matter disclosed herein is not limited by such examples.
- any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
- the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
- an "electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical and computing device.
- the user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices and transmitting data to the other user devices.
- the user equipment may have a processor, a display, a memory, a battery and an input-means such as a hard keypad and/or a soft keypad.
- the user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi direct, etc.
- the user equipment may include, but not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
- VR virtual reality
- AR augmented reality
- the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions.
- the processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
- the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
- Radio Access Technology refers to the technology used by mobile devices/user equipment (UE) to connect to a cellular network. It refers to the specific protocol and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's capabilities.
- Session initiation protocol enables voice and video communication over the internet.
- SIP handles the signaling and control of multimedia sessions.
- SIP uses a text-based message format that can be extended and customized to suit different needs and scenarios.
- the SIP message includes a request line or a status line, followed by a set of headers and an optional message body.
- the request line or status line indicates a method, a universal resource identifier (URI), and a version of the protocol.
- the set of headers provides additional information about the sender, the receiver, the session, and the message.
- the message body can contain session description protocol (SDP) or other data types.
- SDP defines the media formats, codecs, and parameters for each session.
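- An abridged, illustrative SIP INVITE (all field values, hosts, tags and lengths are invented for this example) shows the request line, headers, and SDP body described above:

```java
// Abridged, illustrative SIP INVITE: request line, headers, then an SDP body.
// All field values (hosts, tags, lengths) are invented for this example.
public class SipInviteExample {
    public static void main(String[] args) {
        String invite = """
                INVITE sip:userB@example.com SIP/2.0
                Via: SIP/2.0/UDP host-a.example.com;branch=z9hG4bK776asdhds
                From: <sip:userA@example.com>;tag=1928301774
                To: <sip:userB@example.com>
                Call-ID: a84b4c76e66710@host-a.example.com
                CSeq: 1 INVITE
                Contact: <sip:userA@host-a.example.com>
                Content-Type: application/sdp
                Content-Length: 142

                v=0
                o=userA 2890844526 2890844526 IN IP4 host-a.example.com
                s=Call
                c=IN IP4 host-a.example.com
                m=audio 49170 RTP/AVP 0
                """;
        System.out.println(invite);
    }
}
```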
- a user equipment of a user B receives an INVITE message. After receiving the invite message, the user equipment of the user B rings.
- SIP a response titled as "180 Ringing" is configured to notify the calling party that the call has been initiated and assure the calling party that the receiving party has received the INVITE message.
- Multiple SIP messages are exchanged between the user A and the user B during a signaling plane, and during a data plane. During the signaling plane, once all the messages are successfully transferred from the user A to the user B, a call is established between the user A and the user B. As these multiple SIP messages are being exchanged in the signaling plane, it is the responsibility of the Telephony Application Server (TAS) to successfully pass on all the messages from the user A to the user B.
- TAS Telephony Application Server
- All the SIP messages are independent, and when the invite message is received by the TAS, the TAS executes a service logic as per the user management module (UMM) configured logic.
- UMM user management module
- the UMM is a tool used to manage user accounts, permissions, and access to services and resources within the network.
- the TAS may send the INVITE message to the user B. Post the processing of the INVITE message, the TAS may receive other messages from other users intended to connect over the network.
- the TAS may receive a plurality of messages in parallel.
- the TAS is a multi-threaded application.
- these multiple messages are scattered among multiple threads, so a number of resources are used, resulting in a costly and complex arrangement. All these messages relate to the same dialogue or the same call, so some resources are shared among the various threads. This sharing creates a race condition among the threads for the resources, leading to a state of lock contention. To overcome the lock contention, locking and unlocking of the resources may need to be performed, while the same call remains scattered across multiple threads.
- the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by binding all the messages related to a specific call to the same application thread. Based on utilizing the same application thread for all the messages corresponding to the specific call, the provisioning unit and the method can utilize the resources (CPU resources) effectively and efficiently. This approach optimizes network performance and resource utilization, ensuring smooth network operation.
- the present disclosure enhances the TAS architecture, which may handle each call with a single application call processing thread. It also avoids multiple race condition scenarios and reduces lock contention, thereby optimizing central processing unit (CPU) resource usage and providing a stable architecture.
- FIG. 1 illustrates an exemplary network architecture (100) for implementing a system (108) for handling a call request in a network (106), in accordance with an embodiment of the present disclosure.
- the network architecture (100) may include one or more user equipments (UEs) (104-1, 104-2... 104-N) associated with one or more users (102-1, 102-2... 102-N) in an environment.
- UEs user equipments
- a person of ordinary skill in the art will understand that one or more users (102-1, 102-2... 102-N) may collectively be referred to as the users (102).
- the UE (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system.
- the UE (104) may include, but is not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof.
- the UE (104) may include, but is not limited to, intelligent, multisensing, network-connected devices, which may integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
- the UE (104) may include, but not limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
- GPS Global Positioning System
- the UE (104) may include, but are not limited to, any electrical, electronic, electromechanical, or equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the UE (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity such as touchpad, touch-enabled screen, electronic pen, and the like.
- the UE (104) may not be restricted to the mentioned devices and various other devices may be used.
- the UE (104) may communicate with the system (108) through the network (106) for sending or receiving various types of data.
- the network (106) may include at least one of a 5G network, 6G network, or the like.
- the network (106) may enable the UE (104) to communicate with other devices in the network architecture (100) and/or with the system (108).
- the network (106) may include a wireless card or some other transceiver connection to facilitate this communication.
- the network (106) may be implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
- WAN wide area network
- LAN local area network
- VPN Virtual Private Network
- PSTN Public Switched Telephone Network
- the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
- the network (106) may also include, by way of example but not limitation, one or more of a radio access network (RAN), a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit- switched network, an ad hoc network, an infrastructure network, a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
- RAN radio access network
- the UE (104) is communicatively coupled with the network (106).
- the network (106) may receive a connection request from the UE (104).
- the network (106) may send an acknowledgment of the connection request to the UE (104).
- the UE (104) may transmit a plurality of signals in response to the connection request.
- FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
- FIG. 2 illustrates an exemplary block diagram (200) of the system (108), in accordance with an embodiment of the present disclosure.
- the system (108) may include one or more processor(s) (202).
- the one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
- the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108).
- the memory (204) may be configured to store one or more computer-readable instructions or routines in a non- transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service.
- the memory (204) may include any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
- the system (108) may include an interface(s) (206).
- the interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like.
- the interface(s) (206) may facilitate communication through the system (108).
- the interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, a processing engine (208) and a database (210).
- the processing engine (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine (208).
- the processing engine (208) may include a receiving unit (212) and a provisioning unit (214).
- the programming for the processing engine (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine (208) may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine (208).
- system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource.
- processing engine (208) may be implemented by electronic circuitry.
- the database (210) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engine (208).
- the processing engine (208) is configured to receive, via the receiving unit (212), a call request from the user (102).
- the call request may include a voice over IP (VoIP) call request, a video call request, a conference call request, a callback request, or a call forwarding request.
- the processing engine is configured to accept and process the call request received from the user in accordance with a specified network protocol.
- the specified network protocol may be a Session Initiation Protocol (SIP), a Real-Time Transport Protocol (RTP), a Transmission Control Protocol (TCP), or a Hypertext Transfer Protocol (HTTP).
- the call request may include a request line that specifies the method (e.g., INVITE for SIP calls or POST for HTTP requests), a resource or endpoint being requested, and a protocol version.
- the call request may include one or more headers that provide additional metadata, such as the source and destination of the call request, and necessary routing information.
- a payload which is optional, may be included within the body of the call request, having details such as user information, call parameters, or media content. For example, when the system (such as a telephony system) receives a new call initiation request, such as a SIP INVITE message, the processing engine is configured to perform a series of steps to process the received call.
- the process may start with the reception of the SIP INVITE message from the user, which is a critical component in initiating a communication session.
- the SIP INVITE is a message in the Session Initiation Protocol (SIP), used to initiate a call or multimedia session between the users.
- SIP Session Initiation Protocol
- the SIP INVITE message may be sent by the user (e.g., through a Voice over Internet Protocol (VoIP) phone) to the system to request the establishment of the call.
- VoIP Voice over Internet Protocol
- the message contains essential information such as the caller's and callee's identifiers, and media capabilities.
- the SIP INVITE message arrives at the processing engine, the message signifies the beginning of a new communication session and triggers the system to process the request according to the defined call handling procedures.
- the provisioning unit (214) is configured to generate a client transaction identifier (ID) associated with the received call request. For example, if the call request is identified as "CALL-REQ-2024-001", the system may generate a corresponding client transaction ID such as "TXN-2024-001-A1B2C3", ensuring that each call request may be tracked and referenced throughout the session.
- the client transaction ID not only aids in organizing and managing active calls but also facilitates troubleshooting and session management by allowing network elements to reference specific transactions. For example, when the user request, such as the SIP INVITE message in a VoIP system or an HTTP GET request in a web server, is received, the system initiates a structured process to handle and forward the request.
- the HTTP GET request retrieves data from a specified resource on a server using the HTTP without altering the server’s state.
- the system creates a client transaction, which acts as a container for managing and tracking the request throughout its lifecycle.
- the client transaction includes a unique identifier and context information that captures details about the request’s origin, destination, and state.
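- A minimal, hypothetical sketch of such a client transaction (the record fields and the TXN- prefix mirror the examples above but are otherwise assumptions of this illustration):

```java
// Hypothetical sketch of a client transaction: a unique ID plus context about the request's
// origin, destination, and state. Field names and the TXN- prefix are illustrative assumptions.
import java.util.UUID;

public class ClientTransactionSketch {
    record ClientTransaction(String transactionId, String origin, String destination, String state) {}

    static ClientTransaction create(String origin, String destination) {
        return new ClientTransaction("TXN-" + UUID.randomUUID(), origin, destination, "PENDING");
    }

    public static void main(String[] args) {
        ClientTransaction txn = create("sip:userA@example.com", "sip:userB@example.com");
        System.out.println(txn);   // e.g. ClientTransaction[transactionId=TXN-..., state=PENDING]
    }
}
```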
- the application thread is stored in the memory.
- the memory is configured to maintain a repository of pre-initialized application threads, which can be rapidly accessed and allocated for incoming call requests.
- the provisioning unit (214) may be configured to generate the application threads. Upon system initialization or during runtime, the provisioning unit (214) may create a specified number of application threads based on the anticipated load. The specified number of application threads are maintained in a ready state and are allocated to handle incoming call requests as they are received. When the call request is processed, an available thread is selected from a pool of application threads, and its state is updated to indicate that it is actively handling the call request.
- the provisioning unit (214) is configured to select an application thread associated with the received call request.
- the provisioning unit is designed to efficiently manage incoming call requests by selecting an application thread to handle each request.
- the selection process involves identifying and choosing the application thread to handle or process the received call request based on current load, availability, and attributes required by the request, ensuring that the call request is managed efficiently according to the application's requirements and the context of the call request, capabilities, thread priority, or compatibility with certain types of call requests. For example, a thread with specialized handling for a specific task or protocol might be selected if the call request demands such handling.
- the application thread may be selected using a round-robin (RR) scheduling approach.
- RR round-robin
- the system may employ the RR scheduling approach to select an available thread from a thread pool. For instance, if multiple call initiation requests arrive simultaneously, the provisioning unit selects the next available thread in a round-robin fashion to evenly distribute the workload. The selected thread is dedicated to processing the specific call request, ensuring that each call request is handled promptly and systematically. This approach helps optimize resource utilization and maintains consistent performance across the system.
- the provisioning unit (214) manages incoming SIP INVITE requests using the round-robin scheduling approach to select the application threads from the pool. For instance, when three SIP INVITE requests arrive simultaneously from the users attempting to initiate calls, the system first checks the availability of threads in the pool. If Threads A and B are busy, the provisioning unit (214) selects Thread C to handle the first call request, Thread D for the second call request, and Thread E for the third call request. Each thread is dedicated to processing its respective call request, ensuring that all call requests are addressed promptly and efficiently. Therefore, the provisioning unit (214) optimizes resource utilization and maintains consistent performance, allowing the system to handle multiple call requests without delay.
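- The selection behavior in the example above can be sketched, purely for illustration, as a round-robin walk over a worker pool that skips busy entries (thread names and the busy flag are assumptions of this sketch):

```java
// Hypothetical round-robin selection that skips busy threads, as in the example above
// where Threads A and B are busy and C, D, E are chosen in turn.
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinSelector {
    record Worker(String name, AtomicBoolean busy) {}

    private final List<Worker> pool;
    private final AtomicInteger cursor = new AtomicInteger();

    RoundRobinSelector(List<Worker> pool) { this.pool = pool; }

    // Walk the pool in cyclic order starting from the cursor and claim the first free worker.
    Worker selectNextFree() {
        for (int i = 0; i < pool.size(); i++) {
            Worker w = pool.get(cursor.getAndIncrement() % pool.size());
            if (w.busy().compareAndSet(false, true)) {
                return w;
            }
        }
        throw new IllegalStateException("no free application thread available");
    }

    public static void main(String[] args) {
        RoundRobinSelector selector = new RoundRobinSelector(List.of(
                new Worker("Thread A", new AtomicBoolean(true)),   // busy
                new Worker("Thread B", new AtomicBoolean(true)),   // busy
                new Worker("Thread C", new AtomicBoolean(false)),
                new Worker("Thread D", new AtomicBoolean(false)),
                new Worker("Thread E", new AtomicBoolean(false))));
        System.out.println(selector.selectNextFree().name());   // Thread C
        System.out.println(selector.selectNextFree().name());   // Thread D
        System.out.println(selector.selectNextFree().name());   // Thread E
    }
}
```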
- the provisioning unit (214) is configured to extract an application thread identifier (ID) associated with the selected application thread. For example, in the VoIP system handling the SIP INVITE messages, once the provisioning unit (214) selects the thread to process the call request, the provisioning unit (214) retrieves the thread’s ID to track which thread is responsible for that particular call session. The thread ID is used to ensure that all subsequent interactions and responses related to the call request are handled by the same thread, maintaining consistency and enabling efficient processing of the request.
- the extracting techniques may include querying a thread registry, using thread management application programming interface(s) (APIs), or accessing thread metadata to retrieve the application thread ID.
- APIs application programming interfaces
- the provisioning unit (214) queries a thread registry to retrieve the corresponding thread ID.
- the registry responds with the thread's name, ID (e.g., THREAD-ID-004), and status, confirming that Thread D is active.
- the thread ID is then stored for future reference, ensuring that all subsequent interactions related to User 1’s call — such as additional SIP messages and session updates — are managed consistently by Thread D.
- the system can effectively track the call session, optimizing resource utilization and enhancing overall performance.
- the provisioning unit (214) is configured to bind the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the combination of the thread ID and client transaction ID constitutes the binded information.
- the binded information consists of a structured set of data for transaction tracking in multi-threaded applications.
- the binded information combines the application thread ID and the client transaction ID, creating a unique reference that links a specific thread's operations to a call request. The association enables effective tracking of how each call request is processed by different threads and facilitates performance monitoring through associated timestamps that indicate when the transaction was initiated and completed.
- the binded information helps in identifying the current state of the call request, whether it is pending, in progress, or completed.
- the binded information may contain contextual data relevant to the call request, such as user IDs, session identifiers. This binded information effectively links the thread responsible for handling the call request with the specific client transaction it is managing. For example, in the VoIP system, when the SIP INVITE request is received, the provisioning unit (214) assigns a thread to process the request and creates a client transaction ID for it. By binding the thread ID with the client transaction ID, the system ensures that all subsequent interactions, including responses and state updates, are consistently managed by the same thread, thereby maintaining coherence and efficiency in processing the call request.
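- A hypothetical shape for the binded information described above, combining the application thread ID, the client transaction ID, timestamps, state, and session context (field names are assumptions made only for this illustration):

```java
// Hypothetical sketch of the "binded information": the application thread ID joined with the
// client transaction ID, plus a timestamp, state, and session context as described above.
import java.time.Instant;

public class BindedInformationSketch {
    enum State { PENDING, IN_PROGRESS, COMPLETED }

    record BindedInformation(String applicationThreadId,
                             String clientTransactionId,
                             Instant initiatedAt,
                             State state,
                             String sessionId,
                             String userId) {}

    public static void main(String[] args) {
        BindedInformation info = new BindedInformation(
                "THREAD-ID-004", "TXN-2024-001-A1B2C3",
                Instant.now(), State.IN_PROGRESS,
                "a84b4c76e66710", "userA");
        System.out.println(info);
    }
}
```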
- the provisioning unit (214) is configured to communicate the received call request containing the binded information along with the selected application thread to an application server (e.g., a telephony application server (TAS)) for performing one or more operations in the network.
- an application server e.g., a telephony application server (TAS)
- TAS telephony application server
- the provisioning unit (214) communicates the received call request, which contains the binded information (thread ID and client transaction ID), along with details about the selected application thread, to the application server.
- the application server uses the information to perform one or more operations in the network, such as routing the call request, managing session states, or processing the call request.
- performing one or more operations may include dispatching one or more response messages associated with the client transaction ID to the selected application thread during a call session associated with the call request.
- the application server will dispatch SIP response messages such as 180 Ringing, 200 OK, or 486 Busy Here to the client.
- the response messages are dispatched to the thread that was assigned to handle the original INVITE request, ensuring that all communication remains consistent and properly managed.
- the response messages are directed to the specific application thread identified by the thread ID, which was linked with the client transaction ID.
- the application server dispatches information to the selected application thread that may include the status of the call request (e.g., success or failure), the client transaction ID to correlate the response with the original request, and details pertinent to the call session.
- the response message also contains headers with additional metadata and a payload with the actual response content or data, ensuring that the thread can accurately handle and process the response within the context of the ongoing call session.
- the application server ensures that all communication and state updates for the call session are consistently managed by the same thread, maintaining coherence and reliability in processing the ongoing call.
- the approach helps in efficiently managing the lifecycle of the call session, from initiation through to completion, by ensuring that responses and subsequent actions are handled seamlessly within the designated thread.
- the provisioning unit (214) is configured to insert the application thread ID in a record-route header of the received call request during an initial setup of a call session associated with the received call request.
- the record-route header is a component of the call request that specifies the network elements that should handle subsequent requests within a call session.
- the call request may include various fields that are essential for managing SIP communications such as a From header, a To header, a Call-ID header, a CSeq (Sequence) header or a Contact header.
- the From header identifies the caller, while the To header indicates the recipient of the call.
- the Call-ID header uniquely identifies the session, ensuring that all participants can reference the same call.
- the CSeq (Sequence) header helps manage the order of messages by providing a sequence number for each request.
- the Contact header specifies how the caller can be reached for future requests.
- the provisioning unit (214) is configured to insert the application thread ID into the record-route header of the call request.
- the insertion is crucial for maintaining consistent and efficient call handling throughout the session.
- the provisioning unit ensures that all subsequent SIP or call requests related to this call are routed through the same network elements and the same application thread. For instance, in the VoIP system, when the SIP INVITE request is sent, the application thread ID is included in the record-route header. This ensures that all follow-up SIP messages, such as responses or additional INVITE requests, will also carry this header, allowing the system to route these messages to the same thread.
- the subsequent call requests received from the network include the values from the record-route header of the initial call request. Since the header contains the thread ID or other routing information, the system can route these follow-up requests to the same application thread that handled the original request. By referencing this routing information, the application server can ensure that all related requests are processed consistently by the designated thread, maintaining coherent call management and ensuring that the call session remains stable and efficiently handled.
- the provisioning unit (214) is configured to bind the one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
- the call request may trigger various subsequent call requests such as data retrieval, notifications, logging, dependency checks, error handling, and external API calls that need to be handled efficiently.
- the provisioning unit extracts the application thread ID from the record-route header, which serves as a unique identifier for the specific thread that is responsible for processing the subsequent requests.
- the provisioning unit ensures that all related operations are executed within the same thread context. The approach streamlines processing by minimizing context switching and resource contention and also allows for coherent management of state and data across all related requests.
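- One possible, purely illustrative way to carry the application thread ID in the record-route header and to recover it from subsequent in-dialog requests is sketched below; the thread-id URI parameter name and the string handling are assumptions of this sketch, not a requirement of the disclosure:

```java
// Hypothetical sketch: carrying the application thread ID in the Record-Route header of the
// initial request and reading it back from in-dialog requests so they reach the same thread.
// The "thread-id" parameter name is an assumption made only for this illustration.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RecordRouteAffinity {
    // Insert a Record-Route entry that carries the application thread ID.
    static String addRecordRoute(String sipRequest, String threadId) {
        String header = "Record-Route: <sip:tas.example.com;lr;thread-id=" + threadId + ">\r\n";
        // Place the header immediately after the request line for this sketch.
        int firstLineEnd = sipRequest.indexOf("\r\n") + 2;
        return sipRequest.substring(0, firstLineEnd) + header + sipRequest.substring(firstLineEnd);
    }

    // Recover the thread ID from a subsequent in-dialog request that echoes the route set.
    static String extractThreadId(String sipRequest) {
        Matcher m = Pattern.compile("thread-id=([A-Za-z0-9-]+)").matcher(sipRequest);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String invite = "INVITE sip:userB@example.com SIP/2.0\r\nCall-ID: abc123\r\n\r\n";
        String routed = addRecordRoute(invite, "THREAD-ID-004");
        System.out.println(routed);
        System.out.println("bound thread: " + extractThreadId(routed));   // THREAD-ID-004
    }
}
```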
- FIG. 3 illustrates an exemplary system architecture (300) of the system (108), in accordance with an embodiment of the present disclosure.
- the system architecture (300) includes the user equipment (UE) (302) and an IMS network (304).
- the system architecture (300) may include a number of components (modules) such as an operational support system (OSS)/ business support system (BSS) (318), an element management system (EMS) (316), a media resource function (MRF) (314), a caller ring back tones (CRBT) service (312), a diameter routing agent (DRA) (310), an online charging system (OCS) (308), a mobile number portability (MNP) module (306), a provisioning server (324), and a telephony application server (TAS) (322).
- the UE (302) is connected to the IMS network (304) via the Session Initiation Protocol (SIP) to manage and control multimedia communication sessions.
- SIP Session Initiation Protocol
- the OSS (318) is configured to manage network operations and maintenance.
- the BSS is configured to handle billing, customer management, and revenue assurance.
- the OSS/BSS (318) is essential for telecom service providers to manage their operations, deliver services to customers, and generate revenue.
- the provisioning server (324) is connected to the OSS/BSS (318) via a load balancer (320).
- the load balancer (320) may be an F5 module.
- the load balancer (320) is connected to the OSS/BSS (318) via RESTful APIs (REST).
- the EMS (316) includes various systems and applications for managing various network elements (NEs) on a network element-management layer (NEL).
- the EMS (316) is configured to manage one or more of a specific type of telecommunications network element.
- the EMS (316) manages the functions and capabilities within each NE but does not manage the traffic between different NEs in the network.
- the EMS (316) provides a foundation to implement OSS architectures that enable service providers to meet customer needs for rapid deployment of new services, as well as meeting stringent quality of service (QoS) requirements.
- the EMS (316) is connected to the TAS (322), provisioning server (324) and the OSS/BSS (318) via RESTful APIs (Rest).
- the MRF (314) is configured to provide virtualization of networks to its network providers.
- the MRF (314) provides media services like announcements, tones, and conferencing for VoLTE, Wi-Fi calling, and fixed VoIP solutions.
- the MRF (314) is connected to the TAS (322) via the SIP and Media Server Markup Language (MSML) to facilitate the control and management of media resources during call sessions.
- MSML Media Server Markup Language
- the CRBT (312) service is configured to replace a standard audio clip with a clip selected by the user.
- CRBT (312) is a customizable ringtone or piece of music that a subscriber may subscribe to in order to replace the default ring-back tone heard when the subscriber is called.
- the CRBT (312) service can be supported by different mobile network infrastructures including the circuit- switched GSM networks and IP multimedia networks such as IMS. By utilizing the CRBT (312) service, telecom companies can improve customer satisfaction and loyalty.
- the CRBT (312) and the TAS (322) are connected via the SIP to manage and control the delivery of ring-back tones to callers during call setup.
- the DRA (310) is a functional element in a 3G or 4G (such as LTE) network that provides real-time routing capabilities to ensure that messages are routed among the correct elements in a network.
- the DRA (310) and the Telephony Application Server (TAS) are connected via Diameter protocol to manage and route authentication, authorization, and accounting (AAA) messages for telephony services.
- AAA authentication, authorization, and accounting
- the OCS (308) is a centralized platform that allows a service provider to charge a user for services in real-time.
- the OCS (308) handles the subscriber's account balance, rating, charging transaction control and correlation. With the OCS (308), the telecom operator ensures that credit limits are enforced, and resources are authorized on a per transaction basis.
- the OCS (308) and the DRA (310) are connected via the Diameter protocol to facilitate the exchange of real-time charging information and manage accounting messages for telecommunications services.
- the MNP module (306) is configured to allow users to switch their mobile phone number between different mobile network providers while retaining their existing number.
- the MNP module (306) allows customers to change their provider without having to change their phone number, making it easier to switch to a better plan or service.
- the MNP module (306) and the TAS (322) are connected via the SIP to manage and route calls effectively, ensuring proper handling of calls to ported numbers within the network.
- the provisioning server (324) is configured to customize a standard SIP header(s) to fulfill the objective of the present disclosure.
- the provisioning server (324) is embedded with the TAS (322).
- the provisioning server (324), in communication with the TAS (322), is configured to receive the call requests from a user via the UE (302).
- the provisioning server (324), in communication with the TAS (322), is configured to select an application thread for processing the received call request. After selecting the application thread, the TAS (322) is configured to bind the selected application thread ID to a client transaction ID corresponding to the user and is configured to generate a secured data packet.
- the TAS (322) is configured to insert the application thread ID in the record-route header associated with the call request.
- the record-route header in SIP is used to specify a route that a SIP request should take through the network.
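- By way of illustration only, the sketch below shows one way a thread ID could be carried as a parameter of a Record-Route header value and read back later; the parameter name "x-thread-id" and the example URI are assumptions made for the sketch and are not prescribed by the present disclosure.

```python
def add_thread_id(record_route: str, thread_id: int) -> str:
    """Append the application thread ID as an illustrative header parameter."""
    return f"{record_route};x-thread-id={thread_id}"


def extract_thread_id(record_route: str):
    """Recover the thread ID from the Record-Route value, if present."""
    for param in record_route.split(";")[1:]:
        name, _, value = param.partition("=")
        if name.strip().lower() == "x-thread-id" and value.isdigit():
            return int(value)
    return None


header = add_thread_id("<sip:tas.example.com;lr>", 7)
print(header)                     # <sip:tas.example.com;lr>;x-thread-id=7
print(extract_thread_id(header))  # 7
```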
- the provisioning server (324) is configured to forward the call request (SIP request) along with the generated secured data packet to the TAS (322).
- the TAS (322) is configured to extract and store the application thread ID corresponding to the call request.
- the TAS (322) is configured to extract the responses associated with the client transaction ID during a call session. Based on the extracted client transaction ID from the responses, the TAS (322) is configured to determine the application thread details. After determining the application thread, the TAS (322) is configured to dispatch all responses associated with the client transaction ID to the application thread (stored in the database (210)).
- FIG. 4 illustrates an exemplary flow diagram (400) of thread selection, in accordance with an embodiment of the present disclosure.
- the TAS (322) is configured to store all the responses corresponding to the call in an individual thread.
- the TAS (322) in communication with the provisioning server (324), receives the call 1 request.
- the call 1 request includes two parts: a REQ message, and a RESP message.
- the REQ message is used to initiate the call, while the RESP message is used to respond to the call request.
- the TAS (322) receives the responses (for example, REQ/RESP) associated with this established call.
- the REQ/RESP response is used for sending and receiving messages in SIP, including requests for resources such as calls, chat sessions, and messaging.
- the TAS (322) is configured to assign all the responses corresponding to call 1 to the THREAD 1. In a similar manner, a plurality of responses associated with a call 2 is assigned to the THREAD 4.
- the TAS (322) utilizes THREAD 1, THREAD 2, THREAD 3, THREAD 4, THREAD 5, and THREAD n to handle and process individual call requests efficiently. For example, consider a scenario where User A initiates a call to User B, generating a Call 1 Request that includes a REQ message ("User A calls User B") and a corresponding RESP message ("Call request received for User A to User B").
- TAS assigns all related responses for this call to THREAD 1.
- THREAD 1 processes various REQ/RESP messages, such as "User B is ringing," "User B accepted the call," and "User A's call status updated to connected."
- User C calls User D, prompting a Call 2 Request with similar REQ and RESP messages.
- the TAS assigns the call to THREAD 4, which handles its responses, including "User D is busy" and "User C's call status updated to missed."
- the approach allows the TAS to manage multiple calls efficiently by utilizing dedicated threads for each call, ensuring that all call requests and responses are processed independently, optimizing resource usage, and enhancing overall system responsiveness.
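- The scenario above can be pictured with the following minimal sketch, which assumes a fixed pool of worker threads, one queue per thread, and a binding table keyed by the call (or client transaction) ID; the CallRouter name, the simple rotation used to pick a thread, and the printed messages are illustrative assumptions rather than the claimed TAS implementation.

```python
import queue
import threading


class CallRouter:
    """Toy model: fixed worker threads, one queue each, and a call-to-thread binding."""

    def __init__(self, num_threads: int = 6):
        self.queues = [queue.Queue() for _ in range(num_threads)]
        self.binding = {}            # call / client transaction ID -> thread index
        self._next = 0
        self._lock = threading.Lock()
        for idx, q in enumerate(self.queues):
            threading.Thread(target=self._worker, args=(idx, q), daemon=True).start()

    def _worker(self, idx, q):
        while True:
            call_id, message = q.get()
            print(f"THREAD {idx + 1} handles {call_id}: {message}")
            q.task_done()

    def bind(self, call_id):
        """Assign a new call to a thread (simple rotation) and remember the binding."""
        with self._lock:
            idx = self._next % len(self.queues)
            self._next += 1
        self.binding[call_id] = idx
        return idx

    def dispatch(self, call_id, message):
        """Every later REQ/RESP of the call lands on the queue of its bound thread."""
        self.queues[self.binding[call_id]].put((call_id, message))


router = CallRouter()
router.bind("call-1")
router.dispatch("call-1", "REQ: User A calls User B")
router.dispatch("call-1", "RESP: User B is ringing")
router.bind("call-2")
router.dispatch("call-2", "RESP: User D is busy")
for q in router.queues:
    q.join()   # wait until the workers have drained all messages
```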
- FIG. 5 illustrates another exemplary flow diagram of a method (500) for handling the call requests in the network (106), in accordance with an embodiment of the present disclosure.
- Step (502) includes receiving, by the processing engine (208), the call request from a user.
- the processing engine (208) is configured to receive, via the receiving unit (212), the call request from the user.
- Step (504) includes generating, by the processing engine (208), a client transaction identifier (ID) associated with the received call request.
- ID client transaction identifier
- the system initiates a structured process to handle and forward the request.
- the HTTP GET request retrieves data from a specified resource on a server using HTTP without altering the server’s state.
- the system creates a client transaction ID, which acts as a container for managing and tracking the request throughout its lifecycle.
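- The present disclosure does not prescribe a format for the client transaction ID; the short sketch below merely assumes it can be derived from the SIP Via branch and method (as SIP transactions are conventionally identified) or, failing that, from a random UUID.

```python
import uuid


def make_client_transaction_id(via_branch=None, method=None):
    """Build a tracking ID from the SIP branch and method when available, else a UUID."""
    if via_branch and method:
        return f"{via_branch}:{method.upper()}"
    return f"txn-{uuid.uuid4().hex}"


print(make_client_transaction_id("z9hG4bK776asdhds", "INVITE"))  # z9hG4bK776asdhds:INVITE
print(make_client_transaction_id())                              # e.g. txn-3f2c9a...
```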
- Step (506) includes selecting, by the processing engine (208), an application thread associated with the received call request.
- the provisioning unit is designed to efficiently manage incoming call requests by selecting an application thread to handle each request.
- the selection process involves identifying and choosing the application thread(s) to handle or process the received call request based on current load, availability, and attributes required by the call request.
- Step (508) includes extracting, by the processing engine (208), an application thread identifier (ID) associated with the selected application thread.
- the provisioning unit (214) is configured to extract the application thread identifier (ID) associated with the selected application thread. For example, in the VoIP system handling the SIP INVITE messages, once the provisioning unit selects a thread to process the call request, it retrieves the thread’s ID to keep track of which thread is responsible for that particular call session.
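- A minimal sketch of the selection and extraction in steps (506) and (508), assuming a plain round-robin rotation over a small pool of hypothetical thread identifiers (the disclosure elsewhere notes a round-robin approach); a fuller selector may also weigh load, availability, and call attributes as described above.

```python
import itertools

# Hypothetical pool of application thread identifiers.
thread_ids = ["THREAD-1", "THREAD-2", "THREAD-3", "THREAD-4"]
_selector = itertools.cycle(thread_ids)


def select_application_thread():
    """Steps (506)/(508): pick the next thread in cyclic order and return its identifier."""
    return next(_selector)


print([select_application_thread() for _ in range(6)])
# ['THREAD-1', 'THREAD-2', 'THREAD-3', 'THREAD-4', 'THREAD-1', 'THREAD-2']
```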
- Step (510) includes binding, by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information.
- Step (512) includes communicating, by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server (for example TAS) for performing one or more operations in the network (106).
- the provisioning unit (214) is configured to communicate the received call request containing the binded information along with the selected application thread to an application server (e.g., a telephony application server (TAS)) for performing one or more operations in the network.
- TAS telephony application server
- the provisioning unit communicates the received call request, which contains the binded information (thread ID and client transaction ID), along with details about the selected application thread, to the application server.
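- A small sketch of steps (510) and (512), assuming the binded information is simply the pair of application thread ID and client transaction ID carried alongside the request; the BindedInfo and forward_to_tas names are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class BindedInfo:
    application_thread_id: str
    client_transaction_id: str


def forward_to_tas(call_request: dict, info: BindedInfo) -> dict:
    """Attach the binding to the request before handing it to the application server."""
    call_request["binded_info"] = info
    return call_request


request = {"method": "INVITE", "from": "sip:userA@example.com", "to": "sip:userB@example.com"}
print(forward_to_tas(request, BindedInfo("THREAD-1", "z9hG4bK776asdhds:INVITE")))
```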
- the one or more operations includes dispatching, by the application server, one or more response messages associated with the client transaction ID to the selected application thread during the call session associated with the received call request.
- the method further comprising inserting the application thread ID in a record-route header of the received call request during an initial setup of the call request.
- the application thread is selected using a round-robin approach.
- the method further comprising binding one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
- the present invention discloses a user equipment (UE) (104) communicatively coupled with a network (106).
- the coupling comprises steps of receiving, by the network (106), a connection request from the UE (104), sending, by the network, an acknowledgment of the connection request to the UE (104) and transmitting a plurality of signals in response to the connection request.
- the call request in the network (106) is handled by a method (500) that includes receiving (502), by a processing engine (208), a call request from a user.
- the method (500) includes generating (504), by the processing engine (208), a client transaction identifier (ID) associated with the received call request.
- ID client transaction identifier
- the method (500) includes selecting (506), by the processing engine (208), an application thread associated with the received call request.
- the method (500) includes extracting (508), by the processing engine (208), an application thread identifier (ID) associated with the selected application thread.
- the method (500) includes binding (510), by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the method (500) includes communicating (512), by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network (106).
- the present disclosure relates to a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method (500) for handling a call request in a network (106).
- the method (500) includes receiving (502), by a processing engine (208), a call request from a user.
- the method (500) includes generating (504), by the processing engine (208), a client transaction identifier (ID) associated with the received call request.
- the method (500) includes selecting (506), by the processing engine (208), an application thread associated with the received call request.
- the method (500) includes extracting (508), by the processing engine (208), an application thread identifier (ID) associated with the selected application thread.
- the method (500) includes binding (510), by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information.
- the method (500) includes communicating (512), by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network (106).
- FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be implemented.
- the computer system may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), communication port(s) (660), and a processor (670).
- the processor (670) may include various modules associated with embodiments of the present disclosure.
- the communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports.
- the communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects.
- LAN Local Area Network
- WAN Wide Area Network
- the main memory (630) may be random access memory (RAM), or any other dynamic storage device commonly known in the art.
- the read-only memory (640) may be any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (670).
- the mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions.
- Exemplary mass storage devices include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks.
- PATA Parallel Advanced Technology Attachment
- SATA Serial Advanced Technology Attachment
- SSD solid-state drives
- USB Universal Serial Bus
- RAID Redundant Array of Independent Disks
- the bus (620) communicatively couples the processor (670) with the other memory, storage, and communication blocks.
- the bus (620) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system.
- PCI Peripheral Component Interconnect
- PCI-X PCI Extended
- SCSI Small Computer System Interface
- USB Universal Serial Bus
- operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (620) to support direct operator interaction with the computer system.
- Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (660).
- Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
- the present disclosure offers a significant technical advantage over conventional techniques by utilizing a single application call processing thread to handle each call.
- the present disclosure effectively prevents multiple race condition scenarios and reduces lock contention, issues commonly encountered in conventional multi-threaded architectures.
- the present disclosure optimizes CPU resource usage and enhances overall performance by avoiding the complexities and overhead associated with thread synchronization.
- Utilizing a single application call processing thread to handle each call offers significant benefits in terms of efficiency and system management.
- the present disclosure reduces context switching, as the system no longer needs to constantly switch between multiple threads, leading to lower latency and quicker response times. Further, maintaining a dedicated thread for each call simplifies state management, allowing the thread to consistently track the call's lifecycle without the complications that arise from coordinating data across multiple threads.
- the present disclosure not only enhances resource utilization by minimizing contention but also facilitates easier debugging and monitoring, as operators can follow the processing flow without the complexity of managing multiple thread states. Additionally, the use of standard SIP headers in a customized manner to support this architecture ensures compatibility with existing protocols while streamlining call management. Consequently, the present disclosure provides a stable and efficient architecture, improving both system reliability and scalability.
Abstract
The present disclosure relates to a system (108) and a method (500) for handling a call request in a network (106). The method (500) includes receiving (502) the call request from a user (102). The method (500) includes generating (504) a client transaction identifier (ID) associated with the received call request. The method (500) includes selecting (506) an application thread and extracting (508) an application thread identifier (ID) associated with the selected application thread. The method (500) includes binding (510) the extracted application thread ID with the generated client transaction ID to generate a binded information. The method (500) includes communicating (512) the received call request containing the binded information along with the selected application thread to an application server (322).
Description
SYSTEM AND METHOD FOR HANDLING CALLS BASED ON THREAD AFFINITY
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure generally relates to the field of wireless communication systems. More particularly, the present disclosure relates to a system and a method for handling call requests in a network based on thread affinity.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used to indicate otherwise.
[0004] The expression 'thread affinity' used hereinafter in the specification refers to a scheduling strategy where a thread is preferentially assigned to run on a specific processor or core within a multi-core system. The thread affinity allows efficient use of computational resources and improves overall system performance. Thread affinity is an optimization technique in multi-core processors, where a thread is preferentially scheduled to run on the same core for extended periods. The practice enhances performance primarily through improved cache locality, as the processor's cache hierarchy (L1, L2, L3) retains frequently accessed data for quick retrieval, minimizing latency when the thread accesses the data. The processor's cache hierarchy consists of three levels: L1, L2, and L3. The L1 level is the fastest and smallest and is dedicated to individual cores; the L2 level is larger and slightly slower, and may be core-specific or shared; and the L3 level is the largest and slowest, shared among multiple cores. By reducing the frequency of thread migration between cores, thread affinity decreases context switching overhead, thus allowing for more efficient central processing unit (CPU) utilization and a smoother execution flow. Further, the consistency in execution fosters better resource management, as the operating system can allocate memory and processing resources more effectively, leading to optimized access patterns and reduced contention.
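By way of a hedged, Linux-only illustration of the affinity idea (not part of the disclosed system), the following sketch pins the calling thread to a single core using os.sched_setaffinity, so that the thread keeps running on that core and benefits from its cache; availability of os.sched_setaffinity and threading.get_native_id depends on the platform and Python version (Linux, Python 3.8+).

```python
import os
import threading


def pin_current_thread_to_core(core: int) -> None:
    """Restrict the calling thread to a single CPU core (Linux only)."""
    tid = threading.get_native_id()    # kernel thread ID of the calling thread
    os.sched_setaffinity(tid, {core})  # allow scheduling only on the given core


def worker():
    pin_current_thread_to_core(0)
    print("allowed cores:", os.sched_getaffinity(threading.get_native_id()))


t = threading.Thread(target=worker)
t.start()
t.join()
```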
[0005] The expression 'lock contention' used hereinafter in the specification refers to a situation where two or more processes are competing for access to a shared resource, such as a file or a network connection. This can lead to delays, errors, and decreased productivity.
[0006] The expression ‘SIP’ used hereinafter in the specification refers to a Session Initiation Protocol. The SIP is a signaling protocol used to establish, maintain, and terminate real-time communication sessions over IP networks. SIP is widely utilized in voice over IP (VoIP), video conferencing, and other multimedia communication applications.
[0007] The expression ‘IMS’ used hereinafter in the specification refers to an Internet Protocol (IP) Multimedia Subsystem. IMS is a standardized architecture used to deliver IP-based multimedia services, such as voice, video, and messaging, over broadband networks.
[0008] The expression ‘MNP’ used hereinafter in the specification refers to a Mobile Number Portability. The MNP is a service that allows mobile phone users to
retain their existing phone numbers when switching from one mobile network provider to another network provider.
[0009] The expression ‘OCS’ used hereinafter in the specification refers to an Online Charging System. The OCS is a real-time system that manages and processes billing, charging, and account balances for telecommunications services as these services are used by the user.
[0010] The expression ‘DRA’ used hereinafter in the specification refers to a Diameter Routing Agent. The DRA is a network component used in Diameter-based systems to manage and route Diameter messages between nodes in a telecommunications network. Diameter is a protocol used for authentication, authorization, and accounting (AAA) in network environments.
[0011] The expression ‘CRBT’ used hereinafter in the specification refers to Caller Ringback Tones. The CRBT are personalized audio tones that a caller hears while waiting for the call to be answered, replacing the standard ringing sound with music, messages, or other content chosen by a call recipient.
[0012] The expression ‘MRF’ used hereinafter in the specification refers to a Media Resource Function. The MRF is a network component that provides media processing capabilities, such as conferencing, playback, and recording, within multimedia communication systems.
[0013] The expression ‘EMS’ used hereinafter in the specification refers to an Element Management System. The EMS is a network management system that focuses on the configuration, monitoring, and maintenance of individual network elements or devices.
[0014] The expression ‘OSS’ used hereinafter in the specification refers to an Operational Support System. The OSS is a comprehensive framework used for
managing, controlling, and optimizing telecommunications network operations and services, including provisioning, fault management, and performance monitoring.
[0015] The expression ‘BSS’ used hereinafter in the specification refers to a Business Support System. The BSS is a set of software applications and tools used for managing and supporting business processes in telecommunications, such as billing, customer relationship management, and service provisioning.
[0016] The expression ‘Load Balancer’ used hereinafter in the specification refers to a device or software application that distributes network or application traffic across multiple servers to ensure no single server becomes overwhelmed.
[0017] The expression ‘TAS’ used hereinafter in the specification refers to a Telephony Application Server. The TAS is a platform that provides telephony services and application support, enabling advanced call processing, messaging, and integration with communication networks.
[0018] The expression ‘Record-Route header’ used hereinafter in the specification refers to a SIP header used to manage the routing of requests within a call session. When included in an initial request, the record-route header instructs intermediate network elements (like proxies) to store the route information. The recordroute header ensures that all subsequent requests in the session follow the same path, allowing for consistent handling of the call. The record-route header ensures that all subsequent requests within a call session are routed through the same network elements as the initial request, maintaining consistent call handling.
[0019] The expression ‘Round-robin approach’ used hereinafter in the specification is a scheduling method where requests or tasks are assigned to a pool of resources (such as threads or servers) in a cyclic order. Each resource receives an equal opportunity to handle a task before the next one in the sequence is chosen.
[0020] The expression ‘application thread’ is a concurrent execution unit within a program that performs tasks like handling HTTP (Hypertext Transfer Protocol) requests in a web server or managing background operations in a desktop app.
[0021] These definitions are in addition to those expressed in the art.
BACKGROUND
[0022] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0023] Session initiation protocol (SIP) is a signaling protocol that is used for setting up multimedia sessions. SIP is the core protocol in Internet Protocol (IP) multimedia subsystem (IMS) architecture. The SIP utilizes a plurality of messages for establishing and managing a call. In a communication network, when a call is initiated by a user A (calling party), a user B (receiving party) receives an invite message. After receiving the invite message, multiple SIP messages will get exchanged between the user A and the user B. Once all the messages are successfully transferred from the user A to the user B, the call is established between the user A and the user B. As the multiple SIP messages are being exchanged in a signaling plane, it is the responsibility of the Telephony Application Server (TAS) to successfully pass all the messages from the user A to the user B.
[0024] In conventional telephony systems, the management of concurrent calls typically relies on complex multi-threaded architectures. The systems are designed to handle multiple calls simultaneously, utilizing numerous threads to process different
calls in parallel. While this multi-threaded approach allows for high levels of concurrency and the ability to manage numerous simultaneous interactions, it also introduces several significant challenges.
[0025] One of the primary issues faced by the conventional telephony systems is the occurrence of race conditions. Race conditions arise when multiple threads attempt to access and modify shared resources concurrently, without proper synchronization. This can lead to unpredictable and erroneous behavior, as threads may interfere with each other, resulting in data corruption or inconsistent states. Managing race conditions requires intricate and often cumbersome synchronization mechanisms to ensure that threads do not conflict, adding complexity to the system’s design and implementation.
[0026] Additionally, conventional multi-threaded systems are prone to lock contention. Lock contention occurs when multiple threads compete for access to the same resources or critical sections of code. This competition can create performance bottlenecks, where threads are forced to wait for locks to be released before they can proceed. The increased wait times and the overhead associated with acquiring and releasing locks can significantly degrade system performance, leading to increased latency and reduced throughput.
[0027] The combined effects of race conditions and lock contention can impact the overall stability and efficiency of conventional telephony systems. These issues not only complicate the design and maintenance of the system but also pose challenges in ensuring consistent and reliable performance under varying loads. As a result, while conventional multi-threaded architectures provide the capability to handle multiple concurrent calls, they also require careful management and optimization to address the inherent complexities and potential pitfalls associated with these approaches.
[0028] There is, therefore, a need in the art to provide a method and a system that can mitigate the disadvantages of the prior art and that may handle call flow over a network server for effective and efficient utilization of network resources.
OBJECTIVES
[0029] Some of the objectives of the present disclosure, which at least one embodiment herein satisfies, are as follows:
[0030] An objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that avoids multiple thread race conditions during the call handling.
[0031] Another objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that reduces lock contention problems in Central Processing Unit (CPU) scheduling.
[0032] Yet another objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that optimizes CPU resources by binding all responses associated with the client transaction ID to the application thread.
[0033] Still another objective of the present disclosure is to provide a system and a method for handling call requests based on thread affinity in a network that optimizes CPU resources by binding all requests and responses associated with the session or call to the application thread.
[0034] Other objectives and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
[0035] In an exemplary embodiment, the present disclosure relates to a method for handling a call request in a network. The method includes receiving, by a processing engine, the call request from a user. The method includes generating, by the processing engine, a client transaction identifier (ID) associated with the received call request. The method includes selecting, by the processing engine, an application thread associated with the received call request. The method includes extracting, by the processing engine, an application thread identifier (ID) associated with the selected application thread. The method includes binding, by the processing engine, the extracted application thread ID with the generated client transaction ID to generate a binded information. The method includes communicating, by the processing engine, the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
[0036] In an embodiment, the method further includes inserting, by the processing engine, the application thread ID in a record-route header of the received call request during an initial setup of a call session associated with the received call request.
[0037] In an embodiment, the performing the one or more operations includes dispatching, by the application server, one or more response messages associated with the client transaction ID to the selected application thread during the call session associated with the received call request.
[0038] In an embodiment, the application thread is selected using a round-robin approach.
[0039] In an embodiment, the method further comprising binding, by the processing engine, one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
[0040] In an exemplary embodiment, the present disclosure relates to a system for handling a call request in a network. The system includes a memory, and a processing engine configured to execute a set of instructions stored in the memory to receive the call request from a user. The processing engine is configured to generate a client transaction identifier (ID) associated with the received call request. The processing engine is configured to select an application thread associated with the received call request. The processing engine is configured to extract an application thread identifier (ID) associated with the selected application thread. The processing engine is configured to bind the extracted application thread ID with the generated client transaction ID to generate a binded information. The processing engine is configured to communicate the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
[0041] In an exemplary embodiment, the present disclosure relates to a user equipment (UE) communicatively coupled with a network. The coupling comprises steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE and transmitting a plurality of signals in response to the connection request. The call request in the network is handled by a method that includes receiving, by a processing engine, the call request from a user. The method includes generating, by the processing engine, a client transaction identifier (ID) associated with the received call request. The method includes selecting, by the processing engine, an application thread associated with the received call request. The method includes extracting, by the processing engine, an application thread identifier (ID) associated with the selected application thread. The method includes binding, by the processing engine, the extracted application thread ID with the generated client transaction ID to generate a binded information. The method includes communicating, by the processing engine, the received call request containing
the binded information along with the selected application thread to an application server for performing one or more operations in the network.
[0042] In another exemplary embodiment, the present disclosure relates to a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method for handling a call request in the network. The method includes receiving, by a processing engine, the call request from a user. The method includes generating, by the processing engine, a client transaction identifier (ID) associated with the received call request. The method includes selecting, by the processing engine, an application thread associated with the received call request. The method includes extracting, by the processing engine, an application thread identifier (ID) associated with the selected application thread. The method includes binding, by the processing engine, the extracted application thread ID with the generated client transaction ID to generate a binded information. The method includes communicating, by the processing engine, the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
[0043] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of
electrical components, electronic components or circuitry commonly used to implement such components.
[0044] FIG. 1 illustrates an exemplary network architecture for implementing a system for handling a call request in a network, in accordance with an embodiment of the present disclosure.
[0045] FIG. 2 illustrates an exemplary block diagram of the system, in accordance with an embodiment of the present disclosure.
[0046] FIG. 3 illustrates an exemplary system architecture of the system, in accordance with an embodiment of the present disclosure.
[0047] FIG. 4 illustrates an exemplary flow diagram of thread selection, in accordance with an embodiment of the present disclosure.
[0048] FIG. 5 illustrates another exemplary flow diagram of a method for handling the call request in the network, in accordance with an embodiment of the present disclosure.
[0049] FIG. 6 illustrates an example computer system in which or with which the embodiments of the present disclosure may be implemented.
[0050] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Network architecture
102 - User(s)
104 - User Equipments (UEs)
106 - Network
108 - System
200 - Block diagram
202 - Processor(s)
204 - Memory
206 - Interface(s)
208 - Processing engine
210 - Database
212 - Receiving unit
214 - Provisioning unit
300 - System Architecture
302 - User equipment (UE)
304 - Internet protocol (IP) Multimedia Subsystem (IMS)
306 - Mobile number portability (MNP)
308 - Online charging system (OCS)
310 - Diameter routing agent (DRA)
312 - Caller ring back tones (CRBT)
314 - Media resource function (MRF)
316 - Element management system (EMS)
318 - Operational support system (OSS) / Business support system (BSS)
320 - Load balancer
322 - Telephony Application Server (TAS)
324 - Provisioning server
400, 500 - Flow Diagram
600 - Computer system
610 - External Storage Device
620 - Bus
630 - Main Memory
640 - Read Only Memory
650 - Mass Storage Device
660 - Communication Port
670 - Processor
DETAILED DESCRIPTION
[0051] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any
combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0052] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0053] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0054] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a
subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0055] The word "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as "exemplary" and/or "demonstrative" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms "includes,” "has,” "contains," and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term "comprising" as an open transition word without precluding any additional or other elements.
[0056] Reference throughout this specification to "one embodiment" or "an embodiment" or "an instance" or "one instance" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0057] The terminology used herein is to describe particular embodiments only and is not intended to be limiting the disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements,
components, and/or groups thereof. As used herein, the term "and/or" includes any combinations of one or more of the associated listed items. It should be noted that the terms "mobile device", "user equipment", "user device", "communication device", "device" and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0058] As used herein, an "electronic device", or "portable electronic device", or "user device" or "communication device" or "user equipment" or "device" refers to any electrical, electronic, electromechanical and computing device. The user device is capable of receiving and/or transmitting one or parameters, performing function/s, communicating with other user devices and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery and an input-means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, Zig Bee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi direct, etc. For instance, the user equipment may include, but not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0059] Further, the user device may also comprise a "processor" or "processing unit", wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0060] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0061] Radio Access Technology (RAT) refers to the technology used by mobile devices/ user equipment (UE) to connect to a cellular network. It refers to the specific protocol and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's/device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to different types of networks and provide optimal performance based on the available network resources.
[0062] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0063] Session initiation protocol (SIP) enables voice and video communication over the internet. SIP handles the signaling and control of multimedia sessions. SIP uses a text-based message format that can be extended and customized to suit different needs and scenarios. The SIP message includes a request line or a status line, followed by a set of headers and an optional message body. The request line or status line indicates a method, a universal resource identifier (URI), and a version of the protocol. The set of headers provides additional information about the sender, the receiver, the session, and the message. The message body can contain session description protocol (SDP) or other data types. SDP defines the media formats, codecs, and parameters for each session.
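For illustration only, the sketch below splits a minimal, invented INVITE message into the request line, the headers, and the SDP body described above; all addresses, tags, and SDP values are placeholders.

```python
raw = (
    "INVITE sip:userB@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP tas.example.com;branch=z9hG4bK776asdhds\r\n"
    "From: <sip:userA@example.com>;tag=1928301774\r\n"
    "To: <sip:userB@example.com>\r\n"
    "Content-Type: application/sdp\r\n"
    "\r\n"
    "v=0\r\n"
    "o=userA 2890844526 2890844526 IN IP4 192.0.2.10\r\n"
    "s=Call\r\n"
)

head, _, body = raw.partition("\r\n\r\n")
request_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(request_line)          # INVITE sip:userB@example.com SIP/2.0 -> method, URI, version
print(headers["From"])       # information about the sender
print(body.splitlines()[0])  # v=0 -> start of the SDP body
```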
[0064] For example, when a call is initiated by a user A (calling party) through a user equipment, a user equipment of a user B (receiving party) receives an INVITE message. After receiving the INVITE message, the user equipment of the user B rings. In SIP, a response titled "180 Ringing" is configured to notify the calling party that the call has been initiated and assure the calling party that the receiving party has received the INVITE message. Multiple SIP messages are exchanged between the user A and the user B over a signaling plane and a data plane. In the signaling plane, once all the messages are successfully transferred from the user A to the user B, a call is established between the user A and the user B. As these multiple SIP messages are being exchanged in the signaling plane, it is the responsibility of the
Telephony Application Server (TAS) to successfully pass on all the messages from the user A to the user B.
[0065] All the SIP messages are independent, and when the invite message is received by the TAS, the TAS executes a service logic as per the user management module (UMM) configured logic. The UMM is a tool used to manage user accounts, permissions, and access to services and resources within the network. The TAS may send the INVITE message to the user B. Post the processing of the INVITE message, the TAS may receive other messages from other users intended to connect over the network. In an example, the TAS may receive a plurality of messages in parallel. In an aspect, the TAS is a multi-threaded application.
[0066] In the present scenario, these multiple messages are scattered among multiple threads, thereby consuming a number of resources and resulting in a costly and complex arrangement. All these messages are related to the same dialogue or same call, so there are some shared resources as well that may be shared among various threads. Due to the requirements for sharing the resources by various threads, a race condition would be created among the threads for resources, leading to a state of lock contention. To overcome the lock contention, locking and unlocking the resources may need to be performed, and the same call is scattered to multiple threads.
[0067] Accordingly, there is a need for systems and methods that enhance call processing with improved performance based on thread affinity.
[0068] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by binding all the messages related to a specific call to the same application thread. Based on utilizing the same application thread for all the messages corresponding to the specific call, the provisioning unit and the method can utilize the resources (CPU resources) effectively and efficiently. This approach optimizes network performance and resource utilization, ensuring smooth network operation.
[0069] The present disclosure enhances the TAS architecture, which may handle each call with a single application call processing thread. It also avoids multiple race condition scenarios and reduces lock contention, thereby optimizing central processing unit (CPU) resource usage and providing a stable architecture.
[0070] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0071] The various embodiments throughout the disclosure will be explained in more detail with reference to FIG. 1 - FIG. 6.
[0072] FIG. 1 illustrates an exemplary network architecture (100) for implementing a system (108) for handling a call request in a network (106), in accordance with an embodiment of the present disclosure.
[0073] As illustrated in FIG. 1, the network architecture (100) may include one or more user equipments (UEs) (104-1, 104-2... 104-N) associated with one or more users (102-1, 102-2... 102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2... 102-N) may be collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more UEs (104-1, 104-2... 104-N) may be collectively referred to as the UE (104). Although only three UEs (104) are depicted in FIG. 1, any number of the UE (104) may be included without departing from the scope of the ongoing description.
[0074] In an embodiment, the UE (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the UE (104) may include, but is not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices,
smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the UE (104) may include, but is not limited to, intelligent, multisensing, network-connected devices, which may integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0075] Additionally, in some embodiments, the UE (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the UE (104) may include, but is not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the UE (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity such as touchpad, touch-enabled screen, electronic pen, and the like. A person of ordinary skill in the art will appreciate that the UE (104) may not be restricted to the mentioned devices and various other devices may be used.
[0076] Referring to FIG. 1, the UE (104) may communicate with the system (108) through the network (106) for sending or receiving various types of data. In an embodiment, the network (106) may include at least one of a 5G network, 6G network,
or the like. The network (106) may enable the UE (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0077] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a radio access network (RAN), a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0078] In an embodiment, the UE (104) is communicatively coupled with the network (106). The network (106) may receive a connection request from the UE (104). The network (106) may send an acknowledgment of the connection request to the UE (104). The UE (104) may transmit a plurality of signals in response to the connection request.
[0079] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0080] FIG. 2 illustrates an exemplary block diagram (200) of the system (108), in accordance with an embodiment of the present disclosure.
[0081] Referring to FIG. 2, in an embodiment, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0082] In an embodiment, the system (108) may include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, a processing engine (208) and a database (210).
[0083] The processing engine (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement
one or more functionalities of the processing engine (208). In an embodiment, the processing engine (208) may include a receiving unit (212) and a provisioning unit (214). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine (208) may be implemented by electronic circuitry.
[0084] In an embodiment, the database (210) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engine (208).
[0085] In an embodiment, the processing engine (208) is configured to receive, via the receiving unit (212), a call request from the user (102). In an aspect, the call request may include a voice over IP (VoIP) call request, a video call request, a conference call request, a callback request, or a call forwarding request. The processing engine is configured to accept and process the call request received from the user in accordance with a specified network protocol. The specified network protocol may be a Session Initiation Protocol (SIP), a Real-Time Transport Protocol (RTP), a Transmission Control Protocol (TCP), or a Hypertext Transfer Protocol (HTTP). The call request may include a request line that specifies the method (e.g., INVITE for SIP calls or POST for HTTP requests), a resource or endpoint being requested, and a protocol
version. The call request may include one or more headers that provide additional metadata, such as the source and destination of the call request, and necessary routing information. A payload, which is optional, may be included within the body of the call request, having details such as user information, call parameters, or media content. For example, when the system (such as a telephony system) receives a new call initiation request, such as a SIP INVITE message, the processing engine is configured to perform a series of steps to process the received call. The process may start with the reception of the SIP INVITE message from the user, which is a critical component in initiating a communication session. The SIP INVITE is a message in the Session Initiation Protocol (SIP), used to initiate a call or multimedia session between the users. The SIP INVITE message may be sent by the user (e.g., through a Voice over Internet Protocol (VoIP) phone) to the system to request the establishment of the call. The message contains essential information such as the caller's and callee's identifiers, and media capabilities. When the SIP INVITE message arrives at the processing engine, the message signifies the beginning of a new communication session and triggers the system to process the request according to the defined call handling procedures.
[0086] In an embodiment, the provisioning unit (214) is configured to generate a client transaction identifier (ID) associated with the received call request. For example, if the call request is identified as "CALL-REQ-2024-001", the system may generate a corresponding client transaction ID such as "TXN-2024-001-A1B2C3", ensuring that each call request may be tracked and referenced throughout the session. The client transaction ID not only aids in organizing and managing active calls but also facilitates troubleshooting and session management by allowing network elements to reference specific transactions. For example, when the user request, such as the SIP INVITE message in a VoIP system or an HTTP GET request in a web server, is received, the system initiates a structured process to handle and forward the request. The HTTP GET request retrieves data from a specified resource on a server using the HTTP without altering the server’s state. Initially, the system creates a client
transaction, which acts as a container for managing and tracking the request throughout its lifecycle. The client transaction includes a unique identifier and context information that captures details about the request’s origin, destination, and state.
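By way of illustration only, the following is a minimal Java sketch of such a client transaction container. The class name, field names, and the ID format (a "TXN-" prefix plus a random suffix) are assumptions made for this example and are not prescribed by the present disclosure.

```java
import java.time.Instant;
import java.util.UUID;

// Minimal sketch (names are illustrative): a client transaction acting as a
// container that tracks a call request's origin, destination, and state.
public class ClientTransaction {
    final String transactionId;   // unique reference, e.g. "TXN-CALL-REQ-2024-001-A1B2C3"
    final String source;          // origin of the call request (caller URI)
    final String destination;     // callee URI or requested endpoint
    final Instant createdAt;
    String state = "PENDING";     // PENDING -> IN_PROGRESS -> COMPLETED

    ClientTransaction(String callRequestId, String source, String destination) {
        // Derive a unique client transaction ID from the call request ID plus a random suffix.
        this.transactionId = "TXN-" + callRequestId + "-"
                + UUID.randomUUID().toString().substring(0, 6).toUpperCase();
        this.source = source;
        this.destination = destination;
        this.createdAt = Instant.now();
    }

    public static void main(String[] args) {
        ClientTransaction txn = new ClientTransaction(
                "CALL-REQ-2024-001", "sip:userA@example.com", "sip:userB@example.com");
        System.out.println("Generated client transaction ID: " + txn.transactionId);
    }
}
```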
[0087] In an embodiment, the application thread is stored in the memory. The memory is configured to maintain a repository of pre-initialized application threads, which can be rapidly accessed and allocated for incoming call requests. In an aspect, the provisioning unit (214) may be configured to generate the application threads. Upon system initialization or during runtime, the provisioning unit (214) may create a specified number of application threads based on the anticipated load. The specified number of application threads are maintained in a ready state and are allocated to handle incoming call requests as they are received. When the call request is processed, an available thread is selected from a pool of application threads, and its state is updated to indicate that it is actively handling the call request.
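A minimal sketch of such a repository of pre-initialized application threads is shown below, with each application thread modelled as a single-thread executor so that work bound to it always executes on the same underlying thread. The class name and thread-naming scheme are illustrative assumptions, not a prescribed implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch (assumed names): a pool of application threads created at
// initialization and kept in a ready state for incoming call requests.
public class ApplicationThreadPool {
    private final List<ExecutorService> threads = new ArrayList<>();

    ApplicationThreadPool(int size) {
        // Create the specified number of application threads up front, based on anticipated load.
        for (int i = 0; i < size; i++) {
            final String name = "APP-THREAD-" + i;
            threads.add(Executors.newSingleThreadExecutor(r -> new Thread(r, name)));
        }
    }

    ExecutorService get(int index) {
        return threads.get(index % threads.size());
    }

    int size() {
        return threads.size();
    }

    public static void main(String[] args) {
        ApplicationThreadPool pool = new ApplicationThreadPool(5);
        pool.get(0).execute(() -> System.out.println(Thread.currentThread().getName() + " is ready"));
        for (int i = 0; i < pool.size(); i++) pool.get(i).shutdown();
    }
}
```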
[0088] In an embodiment, the provisioning unit (214) is configured to select an application thread associated with the received call request. The provisioning unit is designed to efficiently manage incoming call requests by selecting an application thread to handle each request. In an embodiment, the selection process involves identifying and choosing the application thread to handle or process the received call request based on current load, availability, and the attributes required by the request, such as thread capabilities, thread priority, or compatibility with certain types of call requests, ensuring that the call request is managed efficiently according to the application's requirements and the context of the call request. For example, a thread with specialized handling for a specific task or protocol might be selected if the call request demands such handling. In an embodiment, the application thread may be selected using a round-robin (RR) scheduling approach. For each new call request received, such as the SIP INVITE in the VoIP system or the HTTP GET request on the web server, the system may employ the RR scheduling approach to select an available thread from a thread pool. For instance, if multiple call initiation requests arrive simultaneously, the
provisioning unit selects the next available thread in a round-robin fashion to evenly distribute the workload. The selected thread is dedicated to processing the specific call request, ensuring that each call request is handled promptly and systematically. This approach helps optimize resource utilization and maintains consistent performance across the system.
[0089] In an operative aspect, the provisioning unit (214) manages incoming SIP INVITE requests using the round-robin scheduling approach to select the application threads from the pool. For instance, when three SIP INVITE requests arrive simultaneously from the users attempting to initiate calls, the system first checks the availability of threads in the pool. If Threads A and B are busy, the provisioning unit (214) selects Thread C to handle the first call request, Thread D for the second call request, and Thread E for the third call request. Each thread is dedicated to processing its respective call request, ensuring that all call requests are addressed promptly and efficiently. Therefore, the provisioning unit (214) optimizes resource utilization and maintains consistent performance, allowing the system to handle multiple call requests without delay.
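The following sketch illustrates, under assumed names, one possible round-robin selection over a fixed pool that skips threads currently marked busy, consistent with the Thread A through Thread E example above. It is an illustration only, not the prescribed implementation.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch (illustrative only): round-robin selection over a fixed pool of
// application threads, skipping threads that are already handling calls.
public class RoundRobinSelector {
    private final String[] threadIds;
    private final AtomicBoolean[] busy;
    private final AtomicInteger cursor = new AtomicInteger(0);

    RoundRobinSelector(String[] threadIds) {
        this.threadIds = threadIds;
        this.busy = new AtomicBoolean[threadIds.length];
        for (int i = 0; i < threadIds.length; i++) busy[i] = new AtomicBoolean(false);
    }

    // Select the next available thread in round-robin order and mark it busy.
    String selectThread() {
        for (int attempts = 0; attempts < threadIds.length; attempts++) {
            int idx = Math.floorMod(cursor.getAndIncrement(), threadIds.length);
            if (busy[idx].compareAndSet(false, true)) {
                return threadIds[idx];
            }
        }
        throw new IllegalStateException("No application thread available");
    }

    public static void main(String[] args) {
        RoundRobinSelector selector = new RoundRobinSelector(
                new String[]{"THREAD-A", "THREAD-B", "THREAD-C", "THREAD-D", "THREAD-E"});
        // Simulate Threads A and B already handling earlier calls.
        selector.selectThread(); // returns THREAD-A and marks it busy
        selector.selectThread(); // returns THREAD-B and marks it busy
        // The next incoming INVITE is therefore assigned to THREAD-C.
        System.out.println(selector.selectThread());
    }
}
```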
[0090] In an embodiment, the provisioning unit (214) is configured to extract an application thread identifier (ID) associated with the selected application thread. For example, in the VoIP system handling the SIP INVITE messages, once the provisioning unit (214) selects the thread to process the call request, the provisioning unit (214) retrieves the thread’s ID to track which thread is responsible for that particular call session. The thread ID is used to ensure that all subsequent interactions and responses related to the call request are handled by the same thread, maintaining consistency and enabling efficient processing of the request. In an example, the extracting techniques may include querying a thread registry, using thread management application programming interface(s) (APIs), or accessing thread metadata to retrieve the application thread ID. For example, once the provisioning unit (214) selects Thread D for the call request from User 1, the provisioning unit (214) queries a thread registry
to retrieve the corresponding thread ID. The registry responds with the thread's name, ID (e.g., THREAD-ID-004), and status, confirming that Thread D is active. The thread ID is then stored for future reference, ensuring that all subsequent interactions related to User 1’s call, such as additional SIP messages and session updates, are managed consistently by Thread D. By using the thread ID, the system can effectively track the call session, optimizing resource utilization and enhancing overall performance.
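A minimal sketch of such a thread registry lookup is given below; the registry structure, the status values, and the THREAD-ID-004 style identifiers are illustrative assumptions made for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch (assumed structure): a thread registry mapping a thread's name
// to its identifier and status, queried after selection to extract the thread ID.
public class ThreadRegistry {
    record Entry(String threadId, String status) {}

    private final Map<String, Entry> registry = new ConcurrentHashMap<>();

    void register(String threadName, String threadId) {
        registry.put(threadName, new Entry(threadId, "ACTIVE"));
    }

    // Query the registry for the application thread ID of a selected thread.
    String lookupThreadId(String threadName) {
        Entry e = registry.get(threadName);
        if (e == null || !"ACTIVE".equals(e.status())) {
            throw new IllegalStateException("Thread not active: " + threadName);
        }
        return e.threadId();
    }

    public static void main(String[] args) {
        ThreadRegistry reg = new ThreadRegistry();
        reg.register("Thread D", "THREAD-ID-004");
        System.out.println(reg.lookupThreadId("Thread D")); // THREAD-ID-004
    }
}
```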
[0091] In an embodiment, the provisioning unit (214) is configured to bind the extracted application thread ID with the generated client transaction ID to generate a binded information. The binded information consists of a structured set of data for transaction tracking in multi-threaded applications: it combines the application thread ID and the client transaction ID, creating a unique reference that links a specific thread's operations to a call request. The association enables effective tracking of how each call request is processed by different threads and facilitates performance monitoring through associated timestamps that indicate when the transaction was initiated and completed. Further, the inclusion of status information in the binded information helps in identifying the current state of the call request, whether it is pending, in progress, or completed. The binded information may contain contextual data relevant to the call request, such as user IDs and session identifiers. This binded information effectively links the thread responsible for handling the call request with the specific client transaction it is managing. For example, in the VoIP system, when the SIP INVITE request is received, the provisioning unit (214) assigns a thread to process the request and creates a client transaction ID for it. By binding the thread ID with the client transaction ID, the system ensures that all subsequent interactions, including responses and state updates, are consistently managed by the same thread, thereby maintaining coherence and efficiency in processing the call request.
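The binded information described above may be represented, purely for illustration, as a small record combining the two identifiers with timestamps, status, and contextual data. The field names below are assumptions for the example, not a prescribed schema.

```java
import java.time.Instant;

// Minimal sketch (field names assumed): the binded information links the
// application thread ID to the client transaction ID, together with timestamps,
// status, and contextual data for the call request.
public record BindedInformation(
        String applicationThreadId,   // e.g. "THREAD-ID-004"
        String clientTransactionId,   // e.g. "TXN-2024-001-A1B2C3"
        Instant initiatedAt,
        Instant completedAt,          // null while the transaction is still open
        String status,                // PENDING, IN_PROGRESS, or COMPLETED
        String userId,
        String sessionId) {

    // Create the initial binding when the thread is assigned to the call request.
    static BindedInformation bind(String threadId, String txnId, String userId, String sessionId) {
        return new BindedInformation(threadId, txnId, Instant.now(), null, "IN_PROGRESS", userId, sessionId);
    }

    public static void main(String[] args) {
        System.out.println(bind("THREAD-ID-004", "TXN-2024-001-A1B2C3", "userA", "session-01"));
    }
}
```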
[0092] In an embodiment, the provisioning unit (214) is configured to communicate the received call request containing the binded information along with the selected application thread to an application server (e.g., a telephony application server (TAS)) for performing one or more operations in the network. For example, once the binding of the thread ID and client transaction ID is complete, the provisioning unit (214) communicates the received call request, which contains the binded information (thread ID and client transaction ID), along with details about the selected application thread, to the application server. The application server uses the information to perform one or more operations in the network, such as routing the call request, managing session states, or processing the call request. In an embodiment, performing one or more operations may include dispatching one or more response messages associated with the client transaction ID to the selected application thread during a call session associated with the call request.
[0093] For example, in the VoIP system, if the SIP INVITE request was received and processed, the application server will dispatch SIP response messages such as 180 Ringing, 200 OK, or 486 Busy Here to the client. The response messages are dispatched to the thread that was assigned to handle the original INVITE request, ensuring that all communication remains consistent and properly managed. The response messages are directed to the specific application thread identified by the thread ID, which was linked with the client transaction ID. In the response message, the application server dispatches information to the selected application thread that may include the status of the call request (e.g., success or failure), the client transaction ID to correlate the response with the original request, and details pertinent to the call session. The response message also contains headers with additional metadata and a payload with the actual response content or data, ensuring that the thread can accurately handle and process the response within the context of the ongoing call session. Thus, the application server ensures that all communication and state updates for the call session are consistently managed by the same thread, maintaining coherence and
reliability in processing the ongoing call. The approach helps in efficiently managing the lifecycle of the call session, from initiation through to completion, by ensuring that responses and subsequent actions are handled seamlessly within the designated thread.
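One possible way to realize this dispatching behavior is sketched below: the application server keeps a mapping from client transaction ID to the bound application thread and executes every response for that transaction on that same thread. The class names and the use of single-thread executors are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch (assumed names): responses such as 180 Ringing, 200 OK, or
// 486 Busy Here are dispatched onto the thread bound to the client transaction.
public class ResponseDispatcher {
    private final Map<String, ExecutorService> threadByTransaction = new ConcurrentHashMap<>();

    void bind(String clientTransactionId, ExecutorService applicationThread) {
        threadByTransaction.put(clientTransactionId, applicationThread);
    }

    // Dispatch a response to the thread that handled the original INVITE.
    void dispatchResponse(String clientTransactionId, String statusLine) {
        ExecutorService thread = threadByTransaction.get(clientTransactionId);
        if (thread == null) {
            throw new IllegalStateException("No thread bound to " + clientTransactionId);
        }
        thread.execute(() ->
                System.out.println(Thread.currentThread().getName()
                        + " handling " + statusLine + " for " + clientTransactionId));
    }

    public static void main(String[] args) {
        ResponseDispatcher dispatcher = new ResponseDispatcher();
        ExecutorService thread1 = Executors.newSingleThreadExecutor(r -> new Thread(r, "THREAD-1"));
        dispatcher.bind("TXN-2024-001-A1B2C3", thread1);
        dispatcher.dispatchResponse("TXN-2024-001-A1B2C3", "SIP/2.0 180 Ringing");
        dispatcher.dispatchResponse("TXN-2024-001-A1B2C3", "SIP/2.0 200 OK");
        thread1.shutdown();
    }
}
```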
[0094] In an embodiment, the provisioning unit (214) is configured to insert the application thread ID in a record-route header of the received call request during an initial setup of a call session associated with the received call request. The record-route header is a component of the call request that specifies the network elements that should handle subsequent requests within a call session. Alongside the record-route header, the call request may include various fields that are essential for managing SIP communications, such as a From header, a To header, a Call-ID header, a CSeq (Sequence) header, or a Contact header. The From header identifies the caller, while the To header indicates the recipient of the call. The Call-ID header uniquely identifies the session, ensuring that all participants can reference the same call. Additionally, the CSeq (Sequence) header helps manage the order of messages by providing a sequence number for each request. The Contact header specifies how the caller can be reached for future requests.
[0095] For example, during the initial setup of the call request, the provisioning unit (214) is configured to insert the application thread ID into the record-route header of the call request. The insertion is crucial for maintaining consistent and efficient call handling throughout the session. By embedding the thread ID in the record-route header, the provisioning unit ensures that all subsequent SIP or call requests related to this call are routed through the same network elements and the same application thread. For instance, in the VoIP system, when the SIP INVITE request is sent, the application thread ID is included in the record-route header. This ensures that all follow-up SIP messages, such as responses or additional INVITE requests, will also carry this header, allowing the system to route these messages to the same thread. Thus, the subsequent call requests received from the network include the values from the record-route header of the initial call request. The header contains the thread ID or other routing
information, which allows the system to route these follow-up requests to the same application thread that handled the original request. By referencing this routing information, the application server can ensure that all related requests are processed consistently by the designated thread, maintaining coherent call management and ensuring that the call session remains stable and efficiently handled.
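Purely as an illustration at the level of raw message text (no particular SIP stack is assumed), the sketch below embeds the application thread ID as a parameter of a Record-Route header in the initial INVITE and recovers it from a subsequent request; the header parameter name x-app-thread and the host name are assumptions made for the example.

```java
// Minimal string-level sketch (illustrative only): tagging the initial INVITE
// with the application thread ID and extracting it from follow-up requests.
public class RecordRouteTagging {

    // Insert a Record-Route header carrying the thread ID into a raw SIP message.
    static String insertThreadId(String sipMessage, String threadId) {
        String recordRoute = "Record-Route: <sip:tas.example.com;lr;x-app-thread=" + threadId + ">\r\n";
        int headerStart = sipMessage.indexOf("\r\n") + 2;   // position just after the request line
        return sipMessage.substring(0, headerStart) + recordRoute + sipMessage.substring(headerStart);
    }

    // Recover the thread ID from a subsequent request that echoes the header values.
    static String extractThreadId(String sipMessage) {
        String marker = "x-app-thread=";
        int start = sipMessage.indexOf(marker) + marker.length();
        int end = sipMessage.indexOf(">", start);
        return sipMessage.substring(start, end);
    }

    public static void main(String[] args) {
        String invite = "INVITE sip:userB@example.com SIP/2.0\r\n"
                + "From: <sip:userA@example.com>\r\n"
                + "To: <sip:userB@example.com>\r\n"
                + "Call-ID: abc123@example.com\r\n"
                + "CSeq: 1 INVITE\r\n\r\n";
        String tagged = insertThreadId(invite, "THREAD-ID-004");
        System.out.println(extractThreadId(tagged)); // THREAD-ID-004
    }
}
```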
[0096] In an embodiment, the provisioning unit (214) is configured to bind the one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header. When the call request is received, it may trigger various subsequent call requests such as data retrieval, notifications, logging, dependency checks, error handling, and external API calls that need to be handled efficiently. The provisioning unit extracts the application thread ID from the record-route header, which serves as a unique identifier for the specific thread that is responsible for processing the subsequent requests. By binding the subsequent requests to the selected application thread, the provisioning unit ensures that all related operations are executed within the same thread context. The approach streamlines processing by minimizing context switching and resource contention and also allows for coherent management of state and data across all related requests.
[0097] FIG. 3 illustrates an exemplary system architecture (300) of the system (108), in accordance with an embodiment of the present disclosure.
[0098] As shown in FIG. 3, the system architecture (300) includes the user equipment (UE) (302) and an IMS network (304). The system architecture (300) may include a number of components (modules) such as an operational support system (OSS)/ business support system (BSS) (318), an element management system (EMS) (316), a media resource function (MRF) (314), a caller ring back tones (CRBT) service (312), a diameter routing agent (DRA) (310), an online charging system (OCS) (308),
a mobile number portability (MNP) module (306), a provisioning server (324), and a telephony application server (TAS) (322).
[0099] The UE (302) is connected to the IMS network (304) via the Session Initiation Protocol (SIP) to manage and control multimedia communication sessions.
[00100] The OSS (318) is configured to manage network operations and maintenance. The BSS is configured to handle billing, customer management, and revenue assurance. The OSS/BSS (318) is essential for telecom service providers to manage their operations, deliver services to customers, and generate revenue. In an aspect, the provisioning server (324) is connected to the OSS/BSS (318) via a load balancer (320). In an embodiment, the load balancer (320) may be an F5 module. In an example, the load balancer (320) is connected to the OSS/BSS (318) via RESTful APIs (REST).
[00101] The EMS (316) includes various systems and applications for managing various network elements (NEs) on a network element-management layer (NEL). The EMS (316) is configured to manage one or more of a specific type of telecommunications network element. The EMS (316) manages the functions and capabilities within each NE but does not manage the traffic between different NEs in the network. The EMS (316) provides a foundation to implement OSS architectures that enable service providers to meet customer needs for rapid deployment of new services, as well as meeting stringent quality of service (QoS) requirements. The EMS (316) is connected to the TAS (322), provisioning server (324) and the OSS/BSS (318) via RESTful APIs (Rest).
[00102] The MRF (314) is configured to provide virtualization of networks to its network providers. The MRF (314) provides media services like announcements, tones, and conferencing for VoLTE, Wi-Fi calling, and fixed VoIP solutions. The MRF (314) is connected to the TAS (322) via the SIP and Media Server Markup Language
(MSML) to facilitate the control and management of media resources during call sessions.
[00103] The CRBT (312) service is configured to replace a standard audio clip with a clip selected by the user. Thus, the CRBT (312) is a customizable ringtone or piece of music that a subscriber may subscribe to in order to replace the default ring-back tone when the subscriber is called. The CRBT (312) service can be supported by different mobile network infrastructures including the circuit-switched GSM networks and IP multimedia networks such as IMS. By utilizing the CRBT (312) service, telecom companies can improve customer satisfaction and loyalty. The CRBT (312) and the TAS (322) are connected via the SIP to manage and control the delivery of ring-back tones to callers during call setup.
[00104] The DRA (310) is a functional element in a 3G or 4G (such as LTE) network that provides real-time routing capabilities to ensure that messages are routed among the correct elements in a network. The DRA (310) and the Telephony Application Server (TAS) are connected via Diameter protocol to manage and route authentication, authorization, and accounting (AAA) messages for telephony services.
[00105] The OCS (308) is a centralized platform that allows a service provider to charge a user for services in real-time. The OCS (308) handles the subscriber's account balance, rating, charging transaction control and correlation. With the OCS (308), the telecom operator ensures that credit limits are enforced, and resources are authorized on a per transaction basis. The OCS (308) and the DRA (310) are connected via the Diameter protocol to facilitate the exchange of real-time charging information and manage accounting messages for telecommunications services.
[00106] The MNP module (306) is configured to allow users to switch their mobile phone number between different mobile network providers while retaining their existing number. The MNP module (306) allows customers to change their provider without having to change their phone number, making it easier to switch to a
better plan or service. The MNP module (306) and the TAS (322) are connected via the SIP to manage and route calls effectively, ensuring proper handling of calls to ported numbers within the network.
[00107] In an operative aspect, the provisioning server (324) is configured to customize standard SIP header(s) to fulfill the objective of the present disclosure. In an example, the provisioning server (324) is embedded with the TAS (322). The provisioning server (324), in communication with the TAS (322), is configured to receive the call requests from a user via the UE (302). The provisioning server (324), in communication with the TAS (322), is configured to select an application thread for processing the received call request. After selecting the application thread, the TAS (322) is configured to bind the selected application thread ID to a client transaction ID corresponding to the user and is configured to generate a secured data packet. During an initial call request (during the signaling phase), the TAS (322) is configured to insert the application thread ID in the record-route header associated with the call request. The record-route header in SIP is used to specify a route that a SIP request should take through the network. Further, the provisioning server (324) is configured to forward the call request (SIP request) along with the generated secured data packet to the TAS (322). In an aspect, the TAS (322) is configured to extract and store the application thread ID corresponding to the call request. After establishing a call, the TAS (322) is configured to extract the responses associated with the client transaction ID during a call session. Based on the extracted client transaction ID from the responses, the TAS (322) is configured to determine the application thread details. After determining the application thread, the TAS (322) is configured to dispatch all responses associated with the client transaction ID to the application thread (stored in the database (210)).
[00108] FIG. 4 illustrates an exemplary flow diagram (400) of thread selection, in accordance with an embodiment of the present disclosure.
[00109] As shown in FIG. 4, the TAS (322) is configured to store all the responses corresponding to the call in an individual thread. For example, the TAS (322), in communication with the provisioning server (324), receives the call 1 request. The call 1 request includes two parts: a REQ message, and a RESP message. The REQ message is used to initiate the call, while the RESP message is used to respond to the call request. Further, the TAS (322), receives the responses (for example, REQ/RESP) associated with this established call. The REQ/RESP response is used for sending and receiving messages in SIP, including requests for resources such as calls, chat sessions, and messaging. The TAS (322) is configured to assign all the responses corresponding to call 1 to the THREAD 1. In a similar manner, a plurality of responses associated with a call 2 is assigned to the THREAD 4. The TAS (322) utilizes THREAD 1, THREAD 2, THREAD 3, THREAD 4, THREAD 5, and THREAD n to handle and process individual call requests efficiently. For example, consider a scenario where User A initiates a call to User B, generating a Call 1 Request that includes a REQ message ("User A calls User B") and a corresponding RESP message ("Call request received for User A to User B"). The TAS assigns all related responses for this call to THREAD 1. As the call progresses, THREAD 1 processes various REQ/RESP messages, such as "User B is ringing," "User B accepted the call," and "User A's call status updated to connected." Simultaneously, User C calls User D, prompting a Call 2 Request with similar REQ and RESP messages. The TAS assigns the call to THREAD 4, which handles its responses, including "User D is busy" and "User C's call status updated to missed." The approach allows the TAS to manage multiple calls efficiently by utilizing dedicated threads for each call, ensuring that all call requests and responses are processed independently, optimizing resource usage, and enhancing overall system responsiveness.
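A compact sketch of the per-call assignment shown in FIG. 4 follows, in which every REQ/RESP message of a call is executed on the single thread assigned to that call. The call identifiers, thread names, and message strings are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch (illustrative of FIG. 4): Call 1 traffic stays on THREAD-1
// while Call 2 traffic stays on THREAD-4, each call owning one dedicated thread.
public class PerCallThreadAssignment {
    private final Map<String, ExecutorService> threadByCall = new ConcurrentHashMap<>();

    void assign(String callId, String threadName) {
        threadByCall.put(callId, Executors.newSingleThreadExecutor(r -> new Thread(r, threadName)));
    }

    void handleMessage(String callId, String message) {
        // Every REQ/RESP belonging to this call runs on the call's dedicated thread.
        threadByCall.get(callId).execute(() ->
                System.out.println(Thread.currentThread().getName() + " <- " + message));
    }

    public static void main(String[] args) throws InterruptedException {
        PerCallThreadAssignment tas = new PerCallThreadAssignment();
        tas.assign("CALL-1", "THREAD-1");
        tas.assign("CALL-2", "THREAD-4");
        tas.handleMessage("CALL-1", "REQ: User A calls User B");
        tas.handleMessage("CALL-1", "RESP: User B is ringing");
        tas.handleMessage("CALL-2", "REQ: User C calls User D");
        tas.handleMessage("CALL-2", "RESP: User D is busy");
        Thread.sleep(200);
        tas.threadByCall.values().forEach(ExecutorService::shutdown);
    }
}
```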
[00110] FIG. 5 illustrates another exemplary flow diagram of a method (500) for handling the call requests in the network (106), in accordance with an embodiment of the present disclosure.
[00111] Step (502) includes receiving, by the processing engine (208), the call request from a user. In an embodiment, the processing engine (208) is configured to receive, via the receiving unit (212), the call request from the user.
[00112] Step (504) includes generating, by the processing engine (208), a client transaction identifier (ID) associated with the received call request. For example, when the user request, such as the SIP INVITE message in a VoIP system or a Hypertext Transfer Protocol (HTTP) GET request in a web server, is received, the system initiates a structured process to handle and forward the request. The HTTP GET request retrieves data from a specified resource on a server using HTTP without altering the server’s state. Initially, the system creates a client transaction ID, which acts as a container for managing and tracking the request throughout its lifecycle.
[00113] Step (506) includes selecting, by the processing engine (208), an application thread associated with the received call request. The provisioning unit is designed to efficiently manage incoming call requests by selecting an application thread to handle each request. In an embodiment, the selection process involves identifying and choosing the application thread(s) to handle or process the received call request based on current load, availability, and attributes required by the call request.
[00114] Step (508) includes extracting, by the processing engine (208), an application thread identifier (ID) associated with the selected application thread. In an embodiment, the provisioning unit (214) is configured to extract the application thread identifier (ID) associated with the selected application thread. For example, in the VoIP system handling the SIP INVITE messages, once the provisioning unit selects a thread to process the call request, it retrieves the thread’s ID to keep track of which thread is responsible for that particular call session.
[00115] Step (510) includes binding, by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information.
[00116] Step (512) includes communicating, by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server (for example TAS) for performing one or more operations in the network (106). In an embodiment, the provisioning unit (214) is configured to communicate the received call request containing the binded information along with the selected application thread to an application server (e.g., a telephony application server (TAS)) for performing one or more operations in the network. For example, once the binding of the thread ID and client transaction ID is complete, the provisioning unit communicates the received call request, which contains the binded information (thread ID and client transaction ID), along with details about the selected application thread, to the application server. In an embodiment, the one or more operations includes dispatching, by the application server, one or more response messages associated with the client transaction ID to the selected application thread during the call session associated with the received call request.
[00117] In an embodiment, the method further comprises inserting the application thread ID in a record-route header of the received call request during an initial setup of the call request.
[00118] In an embodiment, the application thread is selected using a round-robin approach.
[00119] In an embodiment, the method further comprises binding one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
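For illustration, the steps (502) to (512) may be tied together as in the following self-contained sketch, which reuses the patterns from the earlier examples. All names, the pool of a single thread, and the response strings are assumptions made for brevity, not a prescribed implementation.

```java
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal end-to-end sketch of steps 502-512 (all names assumed): receive a call
// request, generate a client transaction ID, select an application thread, extract
// its ID, bind the two IDs, and hand the bound request to the application server,
// which later dispatches responses on the same thread.
public class CallHandlingFlow {
    record BoundRequest(String callRequest, String threadId, String clientTransactionId) {}

    public static void main(String[] args) {
        // Step 502: receive the call request (here, a raw SIP INVITE request line).
        String callRequest = "INVITE sip:userB@example.com SIP/2.0";

        // Step 504: generate the client transaction ID.
        String txnId = "TXN-" + UUID.randomUUID();

        // Steps 506-508: select an application thread (pool of one shown) and extract its ID.
        String threadId = "THREAD-ID-001";
        ExecutorService applicationThread =
                Executors.newSingleThreadExecutor(r -> new Thread(r, threadId));

        // Step 510: bind the thread ID with the client transaction ID.
        BoundRequest bound = new BoundRequest(callRequest, threadId, txnId);

        // Step 512: communicate to the application server, which dispatches responses
        // for this transaction onto the bound thread.
        applicationThread.execute(() -> System.out.println(bound + " -> 180 Ringing"));
        applicationThread.execute(() -> System.out.println(bound + " -> 200 OK"));
        applicationThread.shutdown();
    }
}
```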
[00120] In an exemplary embodiment, the present invention discloses a user equipment (UE) (104) communicatively coupled with a network (106). The coupling comprises steps of receiving, by the network (106), a connection request from the UE (104), sending, by the network, an acknowledgment of the connection request to the UE (104) and transmitting a plurality of signals in response to the connection request.
The call request in the network (106) is handled by a method (500) that includes receiving (502), by a processing engine (208), a call request from a user. The method (500) includes generating (504), by the processing engine (208), a client transaction identifier (ID) associated with the received call request. The method (500) includes selecting (506), by the processing engine (208), an application thread associated with the received call request. The method (500) includes extracting (508), by the processing engine (208), an application thread identifier (ID) associated with the selected application thread. The method (500) includes binding (510), by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information. The method (500) includes communicating (512), by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network (106).
[00121] In an exemplary embodiment, the present disclosure relates to a computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method (500) for handling a call request in a network (106). The method (500) includes receiving (502), by a processing engine (208), a call request from a user. The method (500) includes generating (504), by the processing engine (208), a client transaction identifier (ID) associated with the received call request. The method (500) includes selecting (506), by the processing engine (208), an application thread associated with the received call request. The method (500) includes extracting (508), by the processing engine (208), an application thread identifier (ID) associated with the selected application thread. The method (500) includes binding (510), by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information. The method (500) includes communicating (512), by the processing engine (208), the received call
request containing the binded information along with the selected application thread to an application server for performing one or more operations in the network (106).
[00122] FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be implemented.
[00123] As shown in FIG. 6, the computer system may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), communication port(s) (660), and a processor (670). A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. The processor (670) may include various modules associated with embodiments of the present disclosure. The communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects.
[00124] The main memory (630) may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (640) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (670). The mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage device (650) includes, but is not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks.
[00125] The bus (620) communicatively couples the processor (670) with the other memory, storage, and communication blocks. The bus (620) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system.
[00126] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (620) to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (660). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[00127] The present disclosure offers a significant technical advantage over conventional techniques by utilizing a single application call processing thread to handle each call. The present disclosure effectively prevents multiple race condition scenarios and reduces lock contention, issues commonly encountered in conventional multi-threaded architectures. The present disclosure optimizes CPU resource usage and enhances overall performance by avoiding the complexities and overhead associated with thread synchronization. Utilizing a single application call processing thread to handle each call offers significant benefits in terms of efficiency and system management. The present disclosure reduces context switching, as the system no longer needs to constantly switch between multiple threads, leading to lower latency and quicker response times. Further, maintaining a dedicated thread for each call simplifies state management, allowing the thread to consistently track the call's lifecycle without the complications that arise from coordinating data across multiple threads. Thus, the present disclosure not only enhances resource utilization by minimizing contention but also facilitates easier debugging and monitoring, as operators can follow the processing
flow without the complexity of managing multiple thread states. Additionally, the use of standard SIP headers in a customized manner to support this architecture ensures compatibility with existing protocols while streamlining call management. Consequently, the present disclosure provides a stable and efficient architecture, improving both system reliability and scalability.
[00128] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
TECHNICAL ADVANTAGES
[00129] The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and a method for handling call requests in a network based on thread affinity that:
1. avoids multiple threads race conditions during the call handling;
2. reduces lock contention problems that occur in Central Processing Unit (CPU) scheduling; and
3. optimizes CPU resources by binding all responses associated with the client transaction ID to the application thread.
Claims
1. A method (500) for handling a call request in a network (106), the method (500) comprising: receiving (502), by a processing engine (208), the call request from a user (102); generating (504), by the processing engine (208), a client transaction identifier (ID) associated with the received call request; selecting (506), by the processing engine (208), an application thread associated with the received call request; extracting (508), by the processing engine (208), an application thread identifier (ID) associated with the selected application thread; binding (510), by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information; and communicating (512), by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server (322) for performing one or more operations in the network (106).
2. The method (500) as claimed in claim 1, further comprising inserting, by the processing engine (208), the application thread ID in a record-route header of the received call request during an initial setup of a call session associated with the received call request.
3. The method (500) as claimed in claim 1, wherein performing the one or more operations comprises:
dispatching, by the application server (322), one or more response messages associated with the client transaction ID to the selected application thread during the call session associated with the received call request.
4. The method (500) as claimed in claim 1, wherein the application thread is selected using a round-robin approach.
5. The method (500) as claimed in claim 2, further comprising binding, by the processing engine (208), one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
6. A system (108) for handling a call request in a network (106), the system (108) comprising: a memory (204); and a processing engine (208) configured to execute a set of instructions stored in the memory (204) to: receive the call request from a user (102); generate a client transaction identifier (ID) associated with the received call request; select an application thread associated with the received call request; extract an application thread identifier (ID) associated with the selected application thread; bind the extracted application thread ID with the generated client transaction ID to generate a binded information; and
communicate the received call request containing the binded information along with the selected application thread to an application server (322) for performing one or more operations in the network (106).
7. The system (108) as claimed in claim 6, is further configured to insert the application thread ID in a record-route header of the received call request during an initial setup of a call session associated with the received call request.
8. The system (108) as claimed in claim 6, wherein for performing the one or more operations, the application server (322) is configured to: dispatch one or more response messages associated with the client transaction ID to the selected application thread during the call session associated with the received call request.
9. The system (108) as claimed in claim 6, wherein the application thread is selected using a round-robin approach.
10. The system (108) as claimed in claim 7, further configured to bind one or more requests associated with the received call request to the selected application thread using the application thread ID inserted in the record-route header.
11. A user equipment (UE) (104) communicatively coupled to a network (106), the coupling comprises steps of: receiving, by the network (106), a connection request from the UE (104); sending, by the network (106), an acknowledgment of the connection request to the UE (104); and
transmitting a plurality of signals in response to the connection request, wherein a call request in the network (106) is handled by a method (500) as claimed in claim 1.
12. A computer program product comprising a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method (500) for handling a call request in a network (106), the method (500) comprising: receiving (502), by a processing engine (208), the call request from a user (102); generating (504), by the processing engine (208), a client transaction identifier (ID) associated with the received call request; selecting (506), by the processing engine (208), an application thread associated with the received call request; extracting (508), by the processing engine (208), an application thread identifier (ID) associated with the selected application thread; binding (510), by the processing engine (208), the extracted application thread ID with the generated client transaction ID to generate a binded information; and communicating (512), by the processing engine (208), the received call request containing the binded information along with the selected application thread to an application server (322) for performing one or more operations in the network (106).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202321070190 | 2023-10-16 | ||
| IN202321070190 | 2023-10-16 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025083709A1 true WO2025083709A1 (en) | 2025-04-24 |
Family
ID=95447927
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2024/052075 Pending WO2025083709A1 (en) | 2023-10-16 | 2024-10-16 | System and method for handling calls based on thread affinity |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025083709A1 (en) |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2023109475A1 (en) * | 2021-12-17 | 2023-06-22 | 华为技术有限公司 | Calling processing method, system, and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24879324; Country of ref document: EP; Kind code of ref document: A1 |