
HK1238065A1 - Optimizing capacity expansion in a mobile network - Google Patents


Info

Publication number
HK1238065A1
HK1238065A1 HK17111695.6A HK17111695A HK1238065A1 HK 1238065 A1 HK1238065 A1 HK 1238065A1 HK 17111695 A HK17111695 A HK 17111695A HK 1238065 A1 HK1238065 A1 HK 1238065A1
Authority
HK
Hong Kong
Prior art keywords
user
mobile
usage
mobile network
network
Prior art date
Application number
HK17111695.6A
Other languages
Chinese (zh)
Other versions
HK1238065B (en)
Inventor
菲利普.多
Original Assignee
阿弗梅德网络公司 (Affirmed Networks, Inc.)
Priority date
Filing date
Publication date
Application filed by 阿弗梅德网络公司 (Affirmed Networks, Inc.)
Publication of HK1238065A1 publication Critical patent/HK1238065A1/en
Publication of HK1238065B publication Critical patent/HK1238065B/en


Description

Optimizing capacity expansion in mobile networks
Cross Reference to Related Applications
This application claims priority from U.S. Provisional Application No. 61/986,462, entitled "Optimizing Capacity Expansion Using NFV-Based Platforms," filed April 30, 2014, the contents of which are incorporated herein by reference in their entirety.
Technical Field
Embodiments of the present invention generally relate to computerized methods and apparatus for optimizing capacity expansion in mobile networks.
Background
Conventional approaches to providing resources in mobile networks include adding physical infrastructure when existing resources are fully loaded. Physical devices are designed with a fixed capacity ratio. Once a particular dimension (e.g., throughput, signaling activity, session capacity) is exhausted, mobile network operators have no choice but to add more devices, even if all other dimensions are underutilized. This results in increased capital and operational expenses.
Disclosure of Invention
In certain embodiments, systems and methods for optimizing the capacity of network devices in a mobile network are disclosed. In some embodiments, a computing device receives a user identification corresponding to a characteristic of a mobile network user and a user attribute corresponding to at least one characteristic of mobile network usage by the mobile network user. In some embodiments, the computing device generates a usage prediction based on the user identification and the user attributes, the usage prediction including information corresponding to an expected future data usage of the mobile network user, the expected future mobile network usage corresponding to the at least one mobile resource. In certain embodiments, the computing device sends the usage prediction to a Serving Gateway (SGW) such that the SGW routes mobile network users to a legacy packet data network gateway (PGW) and a Network Function Virtualization (NFV) PGW based on the usage prediction, the legacy PGW including a fixed capacity for the at least one mobile resource, and the NFV PGW including a configurable capacity for the at least one mobile resource.
In some embodiments, the at least one characteristic of mobile network usage by the mobile network user includes an amount of prior mobile network usage, time associated with mobile network usage, location of a mobile device corresponding to the mobile user, an amount of time spent roaming by the mobile device, make and model of the mobile device, applications installed on the mobile device, operating system and firmware version of the mobile device, subscription plan, remaining quota, and demographic information. In some embodiments, the at least one characteristic of the mobile network user comprises a mobile device ID or a phone number. In some embodiments, receiving the user attribute further comprises receiving the user attribute from at least one of a Home Subscriber Server (HSS), a Mobility Management Entity (MME), a charging system, and a System Architecture Evolution (SAE) gateway. In some embodiments, the mobile resources include at least one of signaling activity, throughput, session occupancy, encryption, and transcoding.
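For illustration only, the following minimal Python sketch shows one way the flow summarized above could be organized in software. The names (UsagePrediction, predict_usage, route_user) and the attribute keys and thresholds are hypothetical and are not part of the disclosed embodiments or claims.

```python
# Illustrative sketch only; names and thresholds are assumptions, not the claimed method.
from dataclasses import dataclass

@dataclass
class UsagePrediction:
    user_id: str
    expected_throughput_mbps: float  # expected future data usage
    expected_sessions: int           # expected session occupancy

def predict_usage(user_id: str, attributes: dict) -> UsagePrediction:
    """Generate a usage prediction from a user identification and user attributes."""
    prior_gb = attributes.get("prior_monthly_gb", 1.0)
    heavy = prior_gb > 10.0 or attributes.get("plan") == "unlimited"
    return UsagePrediction(user_id, 5.0 if heavy else 0.5, 1)

def route_user(prediction: UsagePrediction) -> str:
    """Stand-in for the SGW decision: heavy users go to the fixed-capacity legacy PGW,
    light users to the configurable-capacity NFV PGW."""
    return "legacy_pgw" if prediction.expected_throughput_mbps > 1.0 else "nfv_pgw"

print(route_user(predict_usage("358239051234567", {"prior_monthly_gb": 0.2})))  # nfv_pgw
```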
These and other capabilities of the disclosed subject matter will be more fully understood after a perusal of the following figures, detailed description, and claims. It is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
Drawings
Various objects, features and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings, in which like reference numerals refer to like elements.
Figure 1 is a schematic diagram illustrating requirements incurred by a mobile network user on a mobile network in accordance with certain embodiments of the present disclosure.
Fig. 2 is a schematic view showing a conventional method of expansion using a conventional apparatus.
Fig. 3 is a schematic diagram illustrating a method of expansion using legacy devices and NFV-based devices, in accordance with certain embodiments of the present disclosure.
Fig. 4 is a system diagram illustrating a user in conjunction with a mobile network, in accordance with certain embodiments of the present disclosure.
FIG. 5 is a schematic diagram illustrating a usage prediction engine, according to some embodiments of the present disclosure.
Fig. 6 is a system diagram of a mobile network, in accordance with certain embodiments of the present disclosure.
Fig. 7 is a system diagram illustrating capacity optimization in a mobile network according to some embodiments of the invention.
Detailed Description
Mobile networks may include mobile users with widely differing usage characteristics. For example, some mobile users are data-intensive and consume large amounts of data, increasing the total amount of data throughput that the network needs to support. Other users may be very signaling intensive and complete a large number of connections (e.g., using a "chatty" mobile application that sends updates frequently) but only transfer a small amount of data. Even though such users transfer only a small amount of data, they increase the amount of signaling processing required by the network. Some other users may be relatively idle in both the data throughput and signaling dimensions (e.g., networked power meter readers), but are numerous and occupy a large amount of session capacity simply by being signed onto the network. To accommodate all types of users, network operators need to deploy enough networking equipment to cover the worst case of all these dimensions (e.g., throughput, signaling activity, session capacity, and possibly others). Since traditional networking equipment is designed with a fixed capacity ratio (supporting X number of users, Y amount of signaling, and Z amount of data throughput), covering the worst case of one dimension will result in under-utilization of the other dimensions. For example, a legacy platform deployed in a network may be hitting 100% of its session capacity but utilizing only 20% of its throughput capacity. Even if there is excess throughput capacity, new equipment needs to be installed to increase the number of users supported. This increases both capital and operating costs.
Previously, the user base has been separated by application. For example, regular consumers are separated from machine-to-machine devices. Devices can be categorized such that devices within a category have similar requirements, and devices with similar requirements may be assigned to equipment with different performance characteristics, often from different manufacturers. Even with this approach, the problem of optimizing capacity is not solved, for at least the following reasons: (1) the wide variety of users does not guarantee that users within a group have similar usage requirements, so devices serving one category of users may still be underutilized in some dimensions; (2) capital and operational expenses increase because operators now need to handle equipment from multiple suppliers (which may or may not work well together); and (3) as demand from different user groups changes over time, operators need to re-partition users and reallocate network resources, which can be time consuming and expensive.
Preferred embodiments of the present disclosure include the use of Network Function Virtualization (NFV) on platforms with different capabilities and cost characteristics to handle the demands introduced by different types of users. Configured differently (both in terms of hardware and software), different NFV-based platforms may have different strengths and weaknesses. For example, one NFV-based platform may be designed such that it can accommodate a large number of users (e.g., by using a server with a large amount of memory), but with limited throughput and signaling capabilities. Another NFV-based platform may be designed such that it can handle a large amount of throughput (e.g., by using dedicated network adapter cards). Another NFV-based platform may be designed such that it can handle a large amount of signaling (e.g., by using high-powered CPUs). These NFV-based platforms with different characteristics can be put together in a network to meet the different demands introduced by different types of users. To maximize efficiency and minimize cost, users with different characteristics are directed to servers with matching strengths so that each server is best utilized. In this way, the strengths of the traditional and NFV-based platforms complement one another's weaknesses. Network equipment can therefore be better utilized, resulting in lower overall capital and operating costs.
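As a purely illustrative aid, the sketch below models the kind of per-platform capability profile described above and a simple way of matching demand to the platform with the most headroom. The field names and numeric limits are invented for illustration and do not describe any particular platform.

```python
# Hypothetical capability profiles for differently configured NFV-based platforms.
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    name: str
    max_sessions: int           # bounded mainly by memory
    max_throughput_gbps: float  # bounded mainly by network adapter cards
    max_signaling_tps: float    # bounded mainly by CPU

PLATFORMS = [
    PlatformProfile("session-heavy-nfv",    2_000_000,  10.0,  5_000.0),
    PlatformProfile("throughput-heavy-nfv",   200_000, 100.0,  5_000.0),
    PlatformProfile("signaling-heavy-nfv",    200_000,  10.0, 50_000.0),
]

def best_platform(need_sessions: int, need_gbps: float, need_tps: float) -> PlatformProfile:
    """Pick the profile with the most headroom across all three dimensions (simple heuristic)."""
    def headroom(p: PlatformProfile) -> float:
        return min(p.max_sessions / max(need_sessions, 1),
                   p.max_throughput_gbps / max(need_gbps, 1e-9),
                   p.max_signaling_tps / max(need_tps, 1e-9))
    return max(PLATFORMS, key=headroom)

print(best_platform(1_500_000, 5.0, 1_000.0).name)  # session-heavy-nfv
```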
Preferred embodiments of the present disclosure include a function to classify and direct mobile users or subscribers to different network devices based on past and predicted future usage characteristics to match the capacity characteristics of the network devices. There is no need to separate users into different groups (e.g., separate users into different access point names or APNs). The network may appear seamless to the end user and thus there is little change in user experience. Operators may utilize NFV to deploy platforms with different cost and performance characteristics. NFV is suitable for such applications because it allows the same network function to run on different hardware platforms. These hardware platforms range from high-end blade server chassis to low-cost server chassis, which offer different performance and capacity.
In some cases, additional capabilities may be obtained by building servers with dedicated hardware (such as chips for hardware encryption) to support certain user groups.
The preferred embodiments of the present disclosure can be used as a greenfield solution (e.g., a new network consisting of NFV-based platforms only), or to supplement an existing legacy network that is running out of capacity. In the latter case, the NFV servers can be designed to specifically alleviate the bottlenecks of legacy devices and make the utilization of all performance dimensions more balanced. In the following, techniques are described to determine the bottlenecks of existing legacy platforms, build NFV-based platforms to mitigate those bottlenecks, predict and identify the usage characteristics of users, and direct users to the NFV-based platforms that can best handle their demand.
The preferred embodiments of the present disclosure combine existing legacy devices having fixed capacity ratios with NFV-based platforms having different capabilities and cost characteristics. Legacy and NFV-based platforms can operate seamlessly as a single network. In certain embodiments, the NFV-based platforms are designed to complement the weaknesses of the traditional platforms, such that when they work together, the chance of overloading any single capacity dimension is reduced and all network nodes in the deployment are better utilized.
In some embodiments, to best utilize the different capabilities of traditional and NFV-based platforms, when a user attempts to access a network, the user's usage characteristics are predicted based on a number of factors including the user's past usage patterns. The user is then directed to the network node that can best handle the user's needs.
Figure 1 is a schematic diagram illustrating requirements incurred by a mobile network user on a mobile network in accordance with certain embodiments of the present disclosure. Fig. 1 shows a mobile network user 101, signaling activities 102, throughput 103, session occupancy 104, and other dimensions 105.
As shown in fig. 1, a mobile network user 101 places demands on the mobile network in many different dimensions. The user generates signaling activities 102 when registering and deregistering with the network, when roaming on the network, and so on. The user generates demand in the throughput dimension 103 as he or she browses web pages or sends status updates. When the user attaches to the network, he or she also occupies one or more of the session spaces 104. Finally, there may be demand in other dimensions 105, for example if the user requires encryption or image/video transcoding services. Not all users behave the same. Data-heavy users consume large amounts of data and increase the demand on the throughput dimension. Other users may be very signaling intensive and complete a large number of connections (e.g., using "chatty" applications that send a large number of updates frequently) but transmit only a small amount of data; such users increase the demand on the signaling dimension of the network. Some other users may be relatively idle in both the data throughput and signaling dimensions (e.g., networked power meter readers), but are numerous and require a large amount of session capacity simply to keep them signed onto the network.
Network operators often have to install more network devices to handle the above-mentioned overall demand in different dimensions. Since traditional networking devices are designed to support a fixed capacity ratio (supporting X number of users, Y amount of signaling, and Z amount of data throughput), covering the worst case of one dimension often results in under-utilization of the other dimension. For example, a network node may be hitting 100% of the session capacity, but utilizing only 50% of the throughput capacity. Even if there is excess throughput capacity, new legacy equipment is installed to increase the number of users supported. Installing new legacy equipment can increase both capital and operating costs.
Fig. 2 is a schematic view showing a conventional method of expansion using a conventional apparatus. Fig. 2 shows a network operator reaching a maximum session capacity of a first legacy platform 201, a capacity of the first legacy platform 202 after a 2x expansion 210, and a capacity of a second legacy platform 203 after the 2x expansion 210.
As shown at 201, a network operator reaches maximum capacity in a first legacy platform. The first legacy platform has a maximum throughput of 100 units and a maximum of 100 sessions. 50 of the 100 units of throughput are used, and 100 of the 100 sessions are used. When the network operator expects demand to double (e.g., to 100 units of throughput and 200 sessions), the network operator has to find a way to increase capacity. To double capacity 210, the network operator installs a second legacy platform. In some embodiments, the operator may determine the capacity usage of a platform by monitoring the peak usage level of the device (e.g., monitoring usage during busy hours). The platform may specify a maximum value for each dimension (e.g., 1 million sessions, or a throughput of 50 Gbps at an 80% CPU limit). For example, to determine session usage, an operator may use a statistics counter to see how many sessions are used during busy hours. As another example, an operator may determine the amount of throughput by measuring the throughput at a particular CPU limit during busy hours. The operator can also determine capacity usage by measuring the amount of CPU usage during busy hours. In both the first legacy platform 202 and the second legacy platform 203, 50 of the 100 units of throughput are used, and 100 of the 100 sessions are used. After expansion, both legacy platforms are still constrained by the session dimension. The capacity ratio of the expanded legacy platforms (e.g., equal capacity for sessions and throughput) does not match the needs of the users (e.g., a large number of sessions but not as much throughput).
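The busy-hour measurement described above can be pictured with the short sketch below; the counter names and limits are placeholders that mirror the example figures in this paragraph, not values from any real platform.

```python
# Illustrative busy-hour utilization calculation, one value per capacity dimension.
def utilization(busy_hour_counters: dict, platform_limits: dict) -> dict:
    """Return the fraction of each dimension consumed during the busy hour."""
    return {dim: busy_hour_counters[dim] / platform_limits[dim] for dim in platform_limits}

limits = {"sessions": 100, "throughput_units": 100}  # the example platform of Fig. 2
busy   = {"sessions": 100, "throughput_units": 50}
print(utilization(busy, limits))  # {'sessions': 1.0, 'throughput_units': 0.5}
```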
In contrast, the preferred embodiments of the present invention involve understanding the causes of bottlenecks in existing legacy platforms and the current and future usage patterns of users, and building NFV-based platforms that complement the legacy platforms so that all capacity dimensions can be better utilized.
Fig. 3 is a schematic diagram illustrating a method of expansion using legacy devices and NFV-based devices, in accordance with certain embodiments of the present disclosure. Fig. 3 shows a network operator reaching a maximum session capacity of the first legacy platform 201, a capacity of the first legacy platform 302 after a 2x expansion 310, and a capacity of the second NFV-based platform 303 after the 2x expansion 310. Although fig. 3 illustrates expansion in two dimensions (e.g., session and throughput), similar techniques may be applied to any number of dimensions.
As described above, the operator has reached capacity in the first legacy platform, where 50 of the 100 units of throughput are used and 100 of the 100 sessions are used. When the network operator anticipates a doubling of demand (e.g., to 100 units of throughput and 200 sessions), the network operator doubles capacity 310 by installing NFV-based platform 303. As shown after the expansion 310, the combination of the first legacy platform 302 with the NFV-based platform 303 takes into account the users' current and future usage patterns. For example, if 20% of the users are using 80% of the throughput, then among 100 users:
● 20 heavy users are using 40 units of throughput; and
● 80 light users are using 10 units of throughput.
When the demand doubles, there are a total of 200 users, among whom:
● 40 heavy users use 80 units of throughput; and
● 160 light users use 20 units of throughput.
An NFV-based platform can be built to support 200 users but only 40 units of throughput, perhaps at a fraction of the cost of a traditional platform. This is possible due to the flexible nature of the NFV solution: the platform can be built with a large amount of memory to support more sessions, but with only a moderately powerful CPU for throughput processing, to reduce cost. The 160 light users may be directed to NFV-based platform 303, while the heavy users may be directed to legacy platform 302. If the legacy platform costs $1M and the NFV platform costs $0.2M, the cost of doubling capacity would be:
● $2M if only a legacy platform is used (e.g., as shown in FIG. 2); and
● $1.2M if an NFV-based platform is added to a legacy platform (e.g., as shown in FIG. 3).
Using the NFV-based platform saves $0.8M, or 40% of the cost of using only legacy platforms. As illustrated in figs. 2 and 3, conventional platforms have high throughput capacity but insufficient session capacity. NFV-based platforms supplement traditional platforms by providing high session capacity with low throughput capacity, keeping costs low. There are many different ways to construct NFV-based platforms to complement traditional platforms. The operator can weigh the cost and performance of different components (such as memory, CPUs, or other dedicated chips) against how future requirements are expected to change. Using NFV-based platforms allows operators to analyze demand from different users, build NFV-based platforms with capabilities that complement legacy platforms, and direct users appropriately to different platforms to achieve optimal use of capacity across all platforms.
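The cost comparison above can be reproduced with the short calculation below; the $1M and $0.2M figures are the illustrative prices from this example rather than actual equipment costs.

```python
# Reproduces the illustrative cost comparison of Figs. 2 and 3 (prices are example values).
legacy_cost, nfv_cost = 1.0, 0.2        # $M per platform
legacy_only = 2 * legacy_cost           # double capacity with a second legacy platform
mixed = legacy_cost + nfv_cost          # keep the legacy platform, add one NFV-based platform
savings = legacy_only - mixed
print(mixed, savings, savings / legacy_only)  # 1.2 0.8 0.4 (i.e., 40% saved)
```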
In certain embodiments, the systems and methods described herein guide and categorize users based on past and anticipated needs. Predicting user capacity demand can help balance capacity usage on both legacy and NFV-based platforms.
Fig. 4 is a system diagram illustrating a user in conjunction with a mobile network, in accordance with certain embodiments of the present disclosure. Fig. 4 shows a mobile network subscriber 401, a classifier 402, a usage prediction engine 403, a legacy network platform 404, an NFV-based platform 405, and a mobile network 406.
Mobile network user 401 may include a mobile network subscriber that accesses a mobile network via one or more mobile network devices (e.g., smart phone, laptop, tablet). As described in more detail below, mobile network 406 includes a plurality of network devices. Briefly, network devices in mobile network 406 may route and analyze user traffic.
As user 401 signs onto network 406, classifier 402 queries usage prediction engine 403 to predict the user's resource usage pattern. Classifier 402 is a component that obtains information (e.g., mobile device identifiers) from the user and his/her equipment, consults usage prediction engine 403, and determines which platform in mobile network 406 to place the user on. The classifier 402 may be implemented as a separate component or as part of another network device in the mobile network (e.g., on a load balancer). Usage prediction engine 403, described in more detail below, is a component that obtains user identifications and other attributes associated with users and predicts future network resource usage by those users. Based on the results from usage prediction engine 403, user 401 is directed to be served by either legacy network platform 404 or NFV-based platform 405. As described above, the classifier also receives input from the legacy and NFV-based platforms corresponding to their available capacity levels and their capabilities (e.g., encryption, video transcoding).
In some embodiments, users may be directed to legacy platforms or NFV-based platforms based on characteristics of the users or platforms. For example, a user may be directed when the user joins the network (e.g., when the user powers on the phone in the morning). In addition, existing users may also be proactively migrated from one system to another if the loading of the existing system reaches a certain threshold or the characteristics of the user change significantly.
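A minimal sketch of the classification and migration logic described above follows, assuming a hypothetical prediction callback and load feedback from the platforms; the 90% load threshold and the field names are illustrative assumptions.

```python
# Illustrative classifier: consult the prediction engine, then pick the platform
# whose strengths match the user, falling back when a platform is near capacity.
def classify(user_id: str, predict, platform_load: dict) -> str:
    """predict(user_id) -> dict with an expected 'throughput' value (hypothetical interface)."""
    prediction = predict(user_id)
    target = "legacy" if prediction["throughput"] > 1.0 else "nfv"  # Mbps threshold, example only
    if platform_load.get(target, 0.0) > 0.9:  # proactively shift load off a nearly full platform
        target = "nfv" if target == "legacy" else "legacy"
    return target

# Existing users can be re-classified when platform load crosses a threshold
# or when their usage characteristics change significantly.
print(classify("user-1", lambda uid: {"throughput": 0.3}, {"nfv": 0.5, "legacy": 0.7}))  # nfv
```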
FIG. 5 is a schematic diagram illustrating a usage prediction engine, according to some embodiments of the present disclosure. Fig. 5 shows a user identification 501, a usage prediction engine 502, a usage prediction 503, past usage patterns and trends 504, time information 505, location 506, past mobility patterns 507, make and model of the mobile device 508, installed applications 509, operating system (OS) and firmware versions 510, subscription plan 511, remaining quota 512, and demographic information 513.
Usage prediction engine 502 receives the user identification 501 and the user attributes 504-513. As described in more detail below, usage prediction engine 502 predicts the future usage requirements of the user, producing usage prediction 503, based on these inputs. The user identification 501 corresponds to information about the user's mobile device (e.g., an International Mobile Equipment Identity (IMEI)). The user attributes may be collected from various components in the mobile network, as described in more detail with respect to fig. 6.
User attributes 504 through 513 include, but are not limited to, the following (an illustrative sketch combining these attributes appears after the list):
(1) Past usage patterns and trends 504: data-heavy users are likely to remain data-heavy in the future.
(2) Time of day, day of week, and time of year 505: time information provides hints as to what services the user is likely to use on the mobile device. The occurrence of a large event (e.g., the Super Bowl) may also be helpful in predicting the usage patterns of the user.
(3) The user's location 506: like the time information, geographic location information may be helpful in predicting usage patterns. For example, if a user is located in a city with many small cell sites, the user is likely to experience a higher number of handover events as he/she moves between cell sites, whereas if the user is located in a suburban area, each cell site is likely to cover a larger area and the chance of handover will be smaller.
(4) Past mobility patterns 507: users who have roamed heavily in the past are likely to roam heavily in the future.
(5) The make and model of the mobile device 508: different types of mobile devices can have vastly different resource usage. For example, a user with a touch-screen phone will typically use more data services than a user with a feature phone without touch-screen support.
(6) Installed mobile applications 509: some mobile applications are more "chatty" than others and trigger many more connections.
(7) The OS and firmware versions of the mobile device 510: requirements may differ across OS versions. For example, the messaging application on Apple iOS 8 supports voice and video in addition to text, which is likely to translate into higher throughput usage.
(8) The user's subscription plan 511: for example, a user with a low data cap will tend to use less data than a user with a large data cap.
(9) The remaining quota 512 for the current charging period: for example, a user with a low remaining quota is likely to be more constrained in bandwidth usage than a user with plenty of quota remaining.
(10) The user's demographic profile 513: for example, usage behavior is likely to vary significantly between teenage users and adult users. A teenage user is likely to consume more data through his or her social activities, while an older user may use more voice calls than data in his/her daily activities.
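To make the role of these inputs concrete, the sketch below assembles attributes (1) through (10) into a single record and applies a simple, hypothetical scoring rule. A real prediction engine could equally use statistical or machine-learning models; every weight and key name shown here is an assumption.

```python
# Illustrative only: combine attributes (1)-(10) into a rough expected-usage score.
def usage_score(attrs: dict) -> float:
    """Higher scores suggest heavier expected data usage; all weights are arbitrary examples."""
    score = attrs.get("past_monthly_gb", 0.0)                        # (1) past usage patterns and trends
    score += 2.0 if attrs.get("touch_screen_phone") else 0.0         # (5) make and model of the device
    score += 0.1 * len(attrs.get("installed_apps", []))              # (6) installed applications
    score += 1.0 if attrs.get("plan") == "unlimited" else 0.0        # (8) subscription plan
    score *= min(1.0, attrs.get("remaining_quota_gb", 10.0) / 10.0)  # (9) low remaining quota dampens usage
    if attrs.get("age_group") == "teen":                             # (10) demographic profile
        score *= 1.2
    return score

print(usage_score({"past_monthly_gb": 12.0, "plan": "unlimited", "remaining_quota_gb": 2.0}))  # 2.6
```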
Fig. 6 is a system diagram of a mobile network, in accordance with certain embodiments of the present disclosure. Fig. 6 shows a Home Subscriber Server (HSS) 601, a Mobility Management Entity (MME) 602, a charging system 603, an eNodeB 604, a System Architecture Evolution (SAE) gateway 605, and an analysis server 606. All of the elements shown in fig. 6 may be conventional or virtual.
In some embodiments, an operator may have an analysis server 606 to collect and analyze usage statistics about users. This information can be fed directly into the prediction engine. In other embodiments, the prediction engine contains the analysis capabilities of the analysis server 606, and the two components are combined into one unit.
A Home Subscriber Server (HSS) 601 contains information about the mobility of the user. The mobility information may be periodically fed into the analysis server 606 to calculate the user's past mobility patterns.
A Mobility Management Entity (MME) 602 tracks the current location of the device and may send the location information to the analysis server 606 for further processing.
The charging system 603 contains the user's subscription plan, remaining quota, and other billing-related information. The billing information may be fed to the analysis server 606 for use in determining usage trends.
SAE gateway 605 can inspect all traffic to and from the user. By using Deep Packet Inspection (DPI) techniques, usage information can be extracted from the data traffic, including device make and model, installed and recently used applications, OS and firmware versions, and the like. In certain embodiments, DPI data is fed into analysis server 606 for further analysis before being used by the prediction engine.
In some embodiments, the usage trend of a user changes slowly over time. When usage trends change slowly, the usage prediction engine does not need to update its predictions for the user in real time. For example, predictions for a particular user may be updated once per week, and different intervals may be used for different users. In certain embodiments, the usage trend changes more rapidly. In such cases, predictions may be updated as needed when certain events occur; for example, if the user switches to another subscription plan or a new phone, the prediction may be updated immediately.
When a new subscriber joins the network, there is no prior usage history from which to build predictions. Initially, the new user may be treated as an "average" user with an average throughput and signaling load. Alternatively, the prediction may be based on the limited amount of information available. For example, if the new subscriber is a teenager, he/she is likely to have more chatty applications, such as Facebook, Instagram, or Snapchat, which will incur more signaling load. If instead the new subscriber is a business account that has signed up for connection sharing (hotspot tethering), he/she is likely to be a heavier data user. The prediction update frequency for new users may be higher so that predictions can quickly converge based on newly acquired information. At this stage, the user may be placed on either the legacy system or the NFV system. Once the user is classified, he or she can then be moved between the legacy and NFV systems to achieve optimal use of network resources.
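One possible way to seed a prediction for a brand-new subscriber, sketched under the assumptions in this paragraph (the default profile, attribute keys, and update interval are invented for illustration):

```python
# Illustrative cold-start prediction for a new subscriber with no usage history.
AVERAGE_PROFILE = {"throughput_mbps": 1.0, "signaling_tps": 0.5, "sessions": 1}

def cold_start_prediction(attrs: dict) -> dict:
    profile = dict(AVERAGE_PROFILE)                # start from an "average" user
    if attrs.get("age_group") == "teen":
        profile["signaling_tps"] *= 2.0            # chatty social applications expected
    if attrs.get("account_type") == "business_sharing":
        profile["throughput_mbps"] *= 3.0          # heavier data use expected
    return profile

NEW_USER_UPDATE_INTERVAL_HOURS = 24                # shorter interval so estimates converge quickly

print(cold_start_prediction({"age_group": "teen"}))
# {'throughput_mbps': 1.0, 'signaling_tps': 1.0, 'sessions': 1}
```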
Fig. 7 is a system diagram illustrating capacity optimization in a mobile network according to some embodiments of the invention. Fig. 7 shows a prediction engine 701, a Serving Gateway (SGW) 702, a legacy packet data network gateway (PGW) 703, and an NFV-based PGW 704.
When the subscriber places a call, the phone attempts to establish a session with the mobile network. The request eventually reaches the SGW 702, and the SGW 702 selects the PGW 703, 704 to which to direct the user session. One of the PGW nodes comprises legacy devices 703, and the other PGW node comprises an NFV-based platform 704. Normally, the SGW selects the PGW based only on the Access Point Name (APN). The APN identifies the Packet Data Network (PDN) with which the mobile data user wants to communicate and is assigned to the user when the user activates his subscription plan. In a preferred embodiment, the SGW instead consults the prediction engine to determine, based on the characteristics of the user, the best place to direct the user session. For example, the classifier/prediction engine may provide an API based on the Simple Object Access Protocol (SOAP) or representational state transfer (REST), which the SGW may call to obtain information about where to establish the session. Once the SGW decides to establish a session on, for example, an NFV-based PGW, all future signaling and data traffic related to that subscriber will be handled by the selected PGW.
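The paragraph above describes the SGW querying the classifier/prediction engine over a SOAP- or REST-style API before selecting a PGW. The sketch below shows what such a REST exchange might look like; the endpoint path, JSON fields, host name, and PGW identifiers are hypothetical, and no particular product API is implied.

```python
# Hypothetical REST consultation between an SGW and the classifier/prediction engine.
import json
import urllib.request

def select_pgw(imsi: str, apn: str,
               engine_url: str = "http://prediction-engine.example:8080") -> str:
    """Ask the prediction engine which PGW should anchor this session.
    Returns an identifier such as 'legacy-pgw' or 'nfv-pgw' (illustrative names)."""
    request = urllib.request.Request(
        f"{engine_url}/v1/placement",
        data=json.dumps({"imsi": imsi, "apn": apn}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=2) as response:
        return json.load(response)["pgw"]

# Once a PGW is chosen, all subsequent signaling and data traffic for the
# subscriber's session is handled by that PGW.
```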
The subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural components disclosed in this specification and structural equivalents thereof, or in combinations of them. The subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier, e.g., in a machine-readable storage device, or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program (also known as a program, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file. A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification, including the method steps of the subject matter described herein, can be performed by one or more programmable processors executing one or more computer programs to perform functions of the subject matter described herein by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus of the subject matter described herein can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto-optical disks; and optical disks (e.g., CD and DVD disks). The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, the subject matter described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with the user. For example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback) and input from the user can be received in any manner, including but not limited to acoustic, voice, or tactile input.
The subject matter described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described herein), or any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), such as the Internet.
It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for the designing of other structures, methods and systems for carrying out the several purposes of the disclosed subject matter. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed subject matter.
While the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of the embodiments of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter, which is limited only by the following claims.

Claims (10)

1. A computerized method of optimizing capacity of a network device in a mobile network, the computerized method comprising:
receiving, by a computing device, a user identification and a user attribute, the user identification corresponding to a characteristic of a mobile network user, the user attribute corresponding to at least one characteristic of mobile network usage by the mobile network user;
generating, by the computing device, a usage prediction based on the user identification and the user attributes, the usage prediction including information corresponding to an expected future data usage of the mobile network user, the expected future mobile network usage corresponding to at least one mobile resource; and
sending, by the computing device, the usage prediction to a Serving Gateway (SGW), such that the SGW routes mobile network users to a legacy packet data network gateway (PGW) and a Network Function Virtualization (NFV) PGW based on the usage prediction, the legacy PGW comprising a fixed capacity for the at least one mobile resource and the NFV PGW comprising a configurable capacity for the at least one mobile resource.
2. The computerized method of claim 1, wherein the at least one characteristic of mobile network usage by a mobile network user includes an amount of prior mobile network usage, time associated with mobile network usage, location of a mobile device corresponding to the mobile user, amount of time spent roaming by the mobile device, make and model of the mobile device, applications installed on the mobile device, operating system and firmware version of the mobile device, subscription plan, remaining quota, and demographic information.
3. The computerized method of claim 1, wherein the at least one characteristic of a mobile network user comprises a mobile device ID or a phone number.
4. The computerized method of claim 1, wherein receiving user attributes further comprises receiving user attributes from at least one of a Home Subscriber Server (HSS), a Mobility Management Entity (MME), a charging system, and a System Architecture Evolution (SAE) gateway.
5. The computerized method of claim 1, wherein the mobile resources comprise at least one of signaling activity, throughput, session occupancy, encryption, and transcoding.
6. A system for optimizing capacity of a network device in a mobile network, the system comprising:
a processor; and
a memory coupled to the processor and comprising computer readable instructions that, when executed by the processor, cause the processor to:
receiving a user identification corresponding to a characteristic of a mobile network user and a user attribute corresponding to at least one characteristic of mobile network usage by the mobile network user;
generating a usage prediction based on the user identification and the user attributes, the usage prediction comprising information corresponding to an expected future data usage of the mobile network user, the expected future mobile network usage corresponding to at least one mobile resource; and
sending the usage prediction to a Serving Gateway (SGW), such that the SGW routes mobile network users to a legacy packet data network gateway (PGW) and a Network Function Virtualization (NFV) PGW based on the usage prediction, the legacy PGW including a fixed capacity for the at least one mobile resource, and the NFV PGW including a configurable capacity for the at least one mobile resource.
7. The system of claim 6, wherein the at least one characteristic of mobile network usage by a mobile network subscriber includes an amount of prior mobile network usage, time associated with mobile network usage, location of a mobile device corresponding to the mobile subscriber, amount of time spent roaming by the mobile device, make and model of the mobile device, applications installed on the mobile device, operating system and firmware version of the mobile device, subscription plan, remaining quota, and demographic information.
8. The system of claim 6, wherein the at least one characteristic of a mobile network user comprises a mobile device ID or a phone number.
9. The system of claim 6, wherein the processor is further caused to receive user attributes from at least one of a Home Subscriber Server (HSS), a Mobility Management Entity (MME), a charging system, and a System Architecture Evolution (SAE) gateway.
10. The system of claim 6, wherein the mobile resources include at least one of signaling activity, throughput, session occupancy, encryption, and transcoding.
HK17111695.6A 2014-04-30 2015-04-30 Optimizing capacity expansion in a mobile network HK1238065B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US61/986,462 2014-04-30

Publications (2)

Publication Number Publication Date
HK1238065A1 (en) 2018-04-20
HK1238065B (en) 2021-06-11
