
HK1183753A - Distributing events to large numbers of devices - Google Patents


Info

Publication number
HK1183753A
Authority
HK
Hong Kong
Prior art keywords
event
delivery
distribution
list
computer
Prior art date
Application number
HK13111009.1A
Other languages
Chinese (zh)
Inventor
C.F. Vasters
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1183753A publication Critical patent/HK1183753A/en

Description

Distributing events to a large number of devices
Technical Field
The invention relates to an event distribution method and system.
Background
Background and Related Art
Computers and computing systems have affected almost every aspect of modern life. Computers are commonly involved in work, leisure, health care, transportation, entertainment, home administration, and the like.
Furthermore, computing system functionality may also be enhanced by the computing system's ability to interconnect to other computing systems via network connections. The network connection may include, but is not limited to, a connection via a wired or wireless ethernet, a cellular connection, or even a computer-to-computer connection through a serial, parallel, USB, or other connection. These connections allow the computing system to access services on other computing systems and quickly and efficiently receive application data from the other computing systems.
Many computers are intended to be used through direct user interaction with the computer. In this way, the computer has input hardware and a software user interface to facilitate user interaction. For example, modern general purpose computers may include a keyboard, mouse, touchpad, camera, etc. for allowing a user to input data to the computer. In addition, various software user interfaces are available.
Examples of software user interfaces include graphical user interfaces, text command line based user interfaces, function key or hot key user interfaces, and the like.
Assume that developers build mobile applications on top of iOS, Android, Windows Phone, Windows, etc., focused on conveying news of general interest, information and facts about world events, or on keeping fans of football, American football, hockey, or baseball leagues and teams up to date. For any of these applications (and various others), a pop-up alert or toast notification when the fan's favorite team scores, or when some news event breaks anywhere in the world, is a great differentiator. Providing this differentiation generally requires building and running a server infrastructure to push these events into a vendor-provided notification channel, which is beyond the skill set of many mobile application ("app") developers who focus on optimizing the user experience. And if their application is very successful, simple server-based solutions will quickly hit scalability ceilings, since it is very challenging to distribute events to tens or even hundreds of thousands of devices in a timely manner.
For many of these applications, timeliness is an important value proposition. For example, sports fans are not very patient about updates. Similarly, individuals and institutions watching financial investments that have crossed alert thresholds, people participating in large auctions, players of Facebook games, or people in the path of an approaching hurricane are often not very patient about updates either.
Apple's push notification service for iOS, Google's C2DM service for Android, and Microsoft's MPNS service for Windows Phone, as well as most other mobile platforms, provide some form of optimized shared connection to the device (providing maximum energy efficiency, and thus maximum battery efficiency) and allow applications to leverage this shared channel via the respective platform's push notification API. However, as described above, using these platforms to distribute a large number of notifications based on a single event is difficult and/or requires a large amount of computing resources.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is provided merely to illustrate one exemplary technology area in which some embodiments described herein may be practiced.
Disclosure of Invention
One embodiment herein relates to a method that may be implemented in a computing environment. The method includes acts for distributing events to a large number of event consumers in a manner that can minimize message duplication and message latency. The method includes determining that an event should be sent to a particular group of consumers. The method also includes replicating the event and providing separate replicas to the plurality of distribution partitions. The method also includes, at each of the distribution partitions, packaging a copy of the event with a plurality of delivery lists (delivery slips) to create a plurality of delivery packages (delivery bundles). The delivery list describes a number of individual consumers that are intended to receive the event. The method also includes distributing the event to individual consumers specified in the delivery list using the delivery package.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
Drawings
In order to describe the manner in which the above-recited and other advantages and features of the present subject matter can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting of its scope, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
FIG. 1 illustrates an example of an event data distribution system;
FIG. 2 illustrates an event data acquisition and distribution system; and
FIG. 3 illustrates a method of distributing events.
Detailed Description
Some embodiments described herein leverage push notification mechanisms and provide a notification management and distribution layer on top of them that allows mobile and desktop developers to leverage these push notification channels at scale and with very timely distribution characteristics.
Some embodiments may include a method of performing broadcast of notifications through a cascading and partitioned distribution and delivery system that minimizes the number of message copies and is scalable to a very large number of delivery targets, while also minimizing the average flow time of notifications from ingress to egress for each individual target.
Some embodiments may include a method of collecting and streaming delivery statistics to a data warehouse solution for system monitoring and client and third party billing purposes.
Some embodiments may include a method of temporarily or permanently blacklisting a target due to a temporary or permanent delivery error condition.
As a basis, one embodiment system uses the publish/subscribe infrastructure provided by the Windows Azure Service Bus available from Microsoft Corporation of Redmond, Washington, although similar infrastructure exists in various other messaging systems. The infrastructure provides two capabilities that facilitate the implementation of the methods presented herein: topics and queues.
A queue is a storage structure for messages that allows messages to be added in sequential order (enqueued) and removed in the same order as the messages were added (dequeued). Messages may be added and removed by any number of concurrent clients, allowing the load on the enqueue side to be leveled and the processing load to be balanced across the various receivers on the dequeue side. The queue also allows entities to obtain a lock on the message when dequeuing the message, allowing the consuming client to have explicit control over when the message is actually deleted from the queue or whether it can be restored back into the queue in the event of a failure to process the retrieved message.
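The lock-on-dequeue behavior described above can be sketched as a toy data structure. This is an illustrative Python sketch with invented names, not the actual Service Bus API:

```python
import collections
import itertools

class PeekLockQueue:
    """Toy queue with peek-lock semantics: a dequeued message is locked,
    and the consumer must explicitly complete() (delete) it or
    abandon() (restore) it."""

    def __init__(self):
        self._pending = collections.deque()   # enqueued, not yet locked
        self._locked = {}                     # lock token -> message
        self._tokens = itertools.count()

    def enqueue(self, message):
        self._pending.append(message)

    def dequeue(self):
        """Lock and return (token, message), or None if empty."""
        if not self._pending:
            return None
        token = next(self._tokens)
        message = self._pending.popleft()
        self._locked[token] = message
        return token, message

    def complete(self, token):
        """Processing succeeded: the message is actually deleted."""
        del self._locked[token]

    def abandon(self, token):
        """Processing failed: the message is restored to the queue."""
        self._pending.appendleft(self._locked.pop(token))
```

The failure path is the point of the lock: a consumer that crashes while holding a lock never calls complete(), so the message can be restored and redelivered rather than lost.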
A topic is a storage structure that has all the characteristics of a queue, but additionally allows multiple concurrently existing 'subscriptions', each of which provides an isolated, filtered view of the sequence of enqueued messages. Each subscription on the topic yields a copy of each enqueued message, provided that the subscription's associated filter criteria positively match the message. As a result, enqueuing a message into a topic with 10 subscriptions (where each subscription has a simple 'pass-through' condition that matches all messages) will result in a total of 10 messages, one for each subscription. Like a queue, a subscription may have multiple concurrent consumers, providing a balance of processing load across the various recipients.
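The copy-per-matching-subscription behavior can be sketched similarly. Again this is an illustrative Python sketch with invented names, not the actual publish/subscribe infrastructure:

```python
class Subscription:
    """A filtered view of a topic; holds its own copies of messages."""

    def __init__(self, filter_fn):
        self.filter_fn = filter_fn
        self.messages = []

class Topic:
    """Toy topic: every enqueued message is copied into each
    subscription whose filter positively matches it."""

    def __init__(self):
        self.subscriptions = []

    def subscribe(self, filter_fn=lambda msg: True):
        # Default filter is a 'pass-through' condition matching all messages.
        sub = Subscription(filter_fn)
        self.subscriptions.append(sub)
        return sub

    def enqueue(self, message):
        """Fan the message out; return how many copies were produced."""
        delivered = 0
        for sub in self.subscriptions:
            if sub.filter_fn(message):
                sub.messages.append(dict(message))  # independent copy
                delivered += 1
        return delivered
```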
Another basic concept is an 'event', which is a message in terms of the underlying publish/subscribe infrastructure. In the context of one embodiment, events are subject to a simple set of constraints governing the use of message bodies and message attributes. The message body of an event typically flows as an opaque block of data, and any event data considered by one embodiment typically flows in message attributes, which are a set of key/value pairs that are part of the message representing the event.
Embodiments may be configured to distribute a copy of information from a given input event to each of a large number of 'targets 102' associated with a particular scope, and to do so in a minimal amount of time for each target 102. The target 102 may include an address of a device or application coupled to an identifier of an adapter for some third party notification system or some network accessible external infrastructure and assistance data for accessing the notification system or infrastructure.
Some embodiments may include an architecture that is divided into three different processing roles, which are described in detail below and can be understood with reference to FIG. 1. As indicated by '1' through 'n' in FIG. 1, each of the processing roles may have one or more instances of that processing role. Note that the 'n' used for each processing role should be considered distinct from the others, meaning that the processing roles do not necessarily have the same number of instances. The 'distribution engine' 122 role accepts events and packages them with delivery lists (see, e.g., delivery list 128-1 in FIG. 2) containing groups of targets 102. The 'delivery engine' 108 accepts these packages and processes the delivery lists to deliver the event to each network location represented by a target 102. The 'management role', illustrated by management service 142, provides an external API for managing targets 102 and is also responsible for accepting statistics and error data from the delivery engine 108 and for processing/storing that data.
The data stream is anchored on a 'distribution topic 144' into which events are submitted for distribution. Submitted events are tagged with their associated scope using message attributes, which is one of the constraints mentioned above that distinguish events from raw messages.
In the illustrated example, the distribution topic 144 has one pass-through (unfiltered) subscription for each 'distribution partition 120'. A 'distribution partition' is an isolated collection of resources responsible for distributing and delivering notifications to a subset of targets 102 of a given scope. The copy of each event sent into the distribution topic is available to all concurrently configured distribution partitions virtually simultaneously through their associated subscriptions, allowing parallelization of the distribution work.
The parallelization achieved by partitioning helps to achieve timely distribution. To understand this, consider a scope with ten million targets 102. If the target data were held in non-partitioned storage, the system would have to sequentially traverse a single large database result set; or, if the result set were obtained using partitioned queries against the same storage, the throughput for obtaining the target data would still be throttled by the throughput ceiling of the network gateway infrastructure fronting that storage. As a result, the delivery latency of notifications to targets 102 whose description records appear very late in the result set would likely be unsatisfactory.
In contrast, if ten million targets 102 are distributed across 1000 stores, each holding 10000 target records, and these stores are paired with a dedicated computing infrastructure (herein described 'distribution engine 122' and 'delivery engine 108') that performs queries and processes results in partitions as described herein, then the acquisition of the target descriptions can be parallelized across a large set of computing and network resources, significantly reducing the time difference measured from the first to last event distributed when all events are distributed.
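The latency argument above can be made concrete with back-of-the-envelope arithmetic. The per-target cost below is an assumed illustrative figure, not a number from the description:

```python
TARGETS = 10_000_000
PER_TARGET_MS = 1      # assumed per-target fetch-and-dispatch cost (illustrative)

# Single non-partitioned store: one sequential pass over the full result
# set, so the last target waits behind every target before it.
sequential_ms = TARGETS * PER_TARGET_MS

# 1000 partitions of 10,000 targets each, traversed in parallel: the last
# target now waits only behind the other targets in its own partition.
PARTITIONS = 1000
per_partition = TARGETS // PARTITIONS
parallel_ms = per_partition * PER_TARGET_MS
```

At these assumed rates the sequential pass takes on the order of hours while each partition finishes in seconds, which is the "first to last event" time difference the text refers to.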
The actual number of distribution partitions is not technically limited. It may range from a single partition to any number of partitions greater than one.
In the illustrated example, once the 'distribution engine 122' of a distribution partition 120 obtains the event 104, it first calculates the size of the event data and then calculates the size of the delivery list 128, which may be computed as the delta between the event size and the smaller of the maximum allowable message size and the absolute upper size limit of the underlying messaging system. The size of the event is limited such that some minimum headroom remains to accommodate the 'delivery list' data.
The delivery list 128 is a list containing descriptions of targets 102. The delivery list is created by the distribution engine 122 by performing a lookup query against the targets 102 maintained in the partition's store 124, returning all targets 102 that match the scope of the event and a selected set of further conditions narrowed down based on the event data's filtering conditions. Embodiments may include a time window condition that limits the results to targets 102 considered valid at the current time, meaning that the current UTC time falls within the start/end validity window contained in the target profile, along with other filtering conditions. This facility is used for blacklisting, which is described later herein. While traversing the lookup results, the engine creates a copy of the event 104, fills a delivery list 128 to its maximum size with target descriptions retrieved from the store 124, and then enqueues the resulting package of event and delivery list into the partition's 'delivery queue 130'.
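The lookup-and-pack step might be sketched as follows. This is illustrative Python with assumed size constants and record fields; the actual engine queries a partitioned store rather than an in-memory list:

```python
import time

MAX_MESSAGE_BYTES = 256 * 1024     # assumed messaging-system size ceiling
TARGET_DESC_BYTES = 512            # assumed size of one target description

def build_bundles(event, targets, now=None):
    """Filter the partition's targets to those matching the event's scope
    and valid at the current time, then chunk them into delivery lists
    sized to fit next to the event data in a single message."""
    now = time.time() if now is None else now
    matching = [
        t for t in targets
        if t["scope"] == event["scope"]
        and t["valid_from"] <= now < t["valid_until"]   # time-window condition
    ]
    # Headroom left in a message after the event data is accounted for.
    headroom = MAX_MESSAGE_BYTES - event["size"]
    per_list = max(1, headroom // TARGET_DESC_BYTES)
    return [
        {"event": event, "delivery_list": matching[i:i + per_list]}
        for i in range(0, len(matching), per_list)
    ]
```

Each returned bundle corresponds to one event/delivery-list package that would be enqueued into the partition's delivery queue.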
The delivery-list technique ensures that the flow rate of events from the distribution engine 122 to the delivery engine 108 is higher than the actual message flow rate on the underlying infrastructure, meaning that, for example, where 30 target descriptions can be packed into the delivery list 128 along with the event data, event/target pairs flow 30 times as fast as if each event/target pair were packaged into its own message.
The delivery engine 108 is a consumer of the event/delivery-list packages 126 from the delivery queue 130. The role of the delivery engine 108 is to dequeue these packages and deliver the event 104 to all targets listed in the delivery list 128. Delivery typically occurs through adapters that format event messages into notification messages understood by the respective target infrastructure. For example, the notification message may be delivered in MPNS (Microsoft Push Notification Service) format for Windows Phone 7 phones, in APN (Apple Push Notification) format for iOS devices, in C2DM (Cloud-to-Device Messaging) format for Android devices, in JSON (JavaScript Object Notation) format over HTTP (Hypertext Transfer Protocol) for browsers on devices, and so forth.
The delivery engine 108 typically parallelizes delivery across independent targets 102 and serializes delivery across targets 102 that share a scope enforced by the target infrastructure. An example of the latter case is that a particular adapter in the delivery engine may choose to send all events targeting a particular target application on a particular notification platform over a single network connection.
The delivery queue 130 is used to decouple the distribution engine 122 and the delivery engine 108, to allow independent scaling of the delivery engine 108, and to avoid delivery slowdowns propagating back to and blocking the distribution query/packing stage.
Each distribution partition 120 may have any number of delivery engine instances concurrently observing the delivery queue 130. The length of the delivery queue 130 may be used to determine how many delivery engines are concurrently active. If the queue length exceeds a certain threshold, a new delivery engine instance may be added to the partition 120 to increase delivery throughput.
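The queue-length-driven scale-out rule can be sketched as follows; the threshold and cap are invented illustrative values, not figures from the description:

```python
def desired_engine_count(queue_length, current_engines,
                         scale_up_threshold=1000, max_engines=32):
    """If the delivery queue backs up past a threshold, add another
    delivery-engine instance to the partition (up to an assumed cap)."""
    if queue_length > scale_up_threshold and current_engines < max_engines:
        return current_engines + 1
    return current_engines
```

A monitoring loop would call this periodically with the observed queue length and start or retire instances to match the returned count.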
The distribution partitions 120 and associated distribution and delivery engine instances can scale in a virtually limitless manner to achieve optimal large-scale parallelization. If the target infrastructure is able to receive and forward one million event requests to devices in a parallel fashion, the described system is able to distribute events across its delivery infrastructure (possibly leveraging network infrastructure and bandwidth across data centers) in a manner that saturates the target infrastructure with event submissions, to the extent the target infrastructure can absorb them and within any granted delivery quotas, for timely delivery to all required targets 102.
Upon delivering a message to a target 102 via its respective infrastructure adapter, the system in some embodiments records various statistics entries. These include the measured duration between receipt of the delivery package and delivery of each individual message, and the measured duration of the actual send operation. Another part of the statistical information is an indicator of whether the delivery succeeded or failed. This information is collected within the delivery engine 108 and accumulated into averages on a per-scope and per-target-application basis. The 'target application' is a grouping identifier introduced for the specific purpose of statistics accumulation. The calculated averages are sent to the delivery status queue 146 at defined time intervals. The queue is consumed by a worker (or set of workers) in the management service 142, which submits the event data to a data warehouse for various purposes. In addition to operational monitoring, these purposes may include billing the tenants to whom events are delivered and/or disclosing statistics to tenants so that they can bill their own third parties.
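The per-scope/per-target-application accumulation might look like this sketch; field names are invented for illustration:

```python
import collections

class DeliveryStats:
    """Accumulate delivery latency and outcome per (scope, target
    application), and flush averages at defined intervals."""

    def __init__(self):
        self._samples = collections.defaultdict(list)

    def record(self, scope, target_app, latency_ms, success):
        self._samples[(scope, target_app)].append((latency_ms, success))

    def flush(self):
        """Return averaged entries (as would be sent to the delivery
        status queue) and reset the accumulators."""
        report = []
        for (scope, app), samples in self._samples.items():
            latencies = [latency for latency, _ in samples]
            successes = sum(1 for _, ok in samples if ok)
            report.append({
                "scope": scope,
                "target_app": app,
                "avg_latency_ms": sum(latencies) / len(latencies),
                "success_rate": successes / len(samples),
            })
        self._samples.clear()
        return report
```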
Upon detection of delivery errors, the errors are classified into temporary and permanent error conditions. A temporary error condition may include, for example, a network failure that does not permit the system to reach the delivery point of the target infrastructure, or a target infrastructure report indicating that a delivery quota has been temporarily reached. A permanent error condition may include, for example, an authentication/authorization error on the target infrastructure or another error that cannot be recovered from without manual intervention, or an error condition in which the target infrastructure reports that the target is no longer available or permanently refuses to accept messages. Once classified, the error report is submitted to a delivery failure queue 148. For a temporary error condition, the report may also include an absolute UTC timestamp indicating when the error condition is expected to be resolved. At the same time, the target is locally blacklisted by the target adapter against any further local deliveries by this delivery engine instance. The blacklist entry may likewise include the timestamp.
The delivery failure queue 148 is consumed by a worker (or set of workers) in the management role. A permanent error may cause the respective target to be immediately deleted from its respective distribution partition store 124, to which the management role has access. 'Deleted' means that the record is either actually removed or merely moved out of view of the lookup query by setting the 'end' timestamp of the record's validity period to the timestamp of the error. A temporary error condition may cause the target to be deactivated for the time period indicated by the error. The deactivation may be accomplished by moving the start of the target's validity period forward to the timestamp indicated by the error, i.e., the time at which the error condition is expected to be resolved.
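The validity-window technique (hiding a target by adjusting its validity period rather than maintaining a separate blacklist table) can be sketched as follows; the field names are invented for illustration:

```python
def apply_delivery_error(target, error, now):
    """Adjust a target record's validity window in response to a
    classified delivery error. A permanent error ends the window now,
    hiding the target from the distribution engine's time-window query
    for good; a temporary error moves the start of the window forward
    to the expected recovery time, deactivating the target until then."""
    if error["permanent"]:
        target["valid_until"] = now               # effectively deleted from view
    else:
        target["valid_from"] = error["retry_at"]  # deactivated until recovery
    return target

def is_visible(target, now):
    """The distribution engine's time-window condition."""
    return target["valid_from"] <= now < target["valid_until"]
```

Because the distribution engine already filters on the validity window when building delivery lists, no extra machinery is needed at query time: blacklisted targets simply stop matching.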
Referring now to FIG. 2, an alternative illustration is shown. As described above, embodiments are particularly useful in message dissemination systems where a single event is disseminated to multiple (and possibly a large number of) end users. An example of this is shown in FIG. 2. FIG. 2 shows an example where information from a large number of different sources is delivered to a large number of different targets. In some examples, information from a single source, or aggregated information from multiple sources, may be used to create a single event that is delivered to a large number of targets. In some embodiments, this may be accomplished using the techniques shown in FIG. 2.
FIG. 2 shows sources 116. Various embodiments may utilize acquisition partitions 140, as will be discussed later herein. Each of the acquisition partitions 140 may include multiple sources 116. There may be a large and varied number of sources 116. The sources 116 provide information. Such information may include, for example, but is not limited to, emails, text messages, real-time stock quotes, real-time sports scores, news updates, and so forth.
FIG. 2 shows that each partition includes an acquisition engine, such as the illustrative acquisition engine 118. The acquisition engine 118 collects information from the sources 116 and generates events based on that information. In the example shown in FIG. 2, a plurality of events are shown as being generated by the acquisition engine using various sources. Event 104-1 is used for illustration. In some embodiments, the event 104-1 may be normalized, as explained below. The acquisition engine 118 may be a service on a network, such as the Internet, that gathers information from sources 116 on the network.
FIG. 2 shows the event 104-1 being sent to the distribution topic 144. The distribution topic 144 distributes events to multiple distribution partitions. Distribution partition 120-1 is used here as representative of all the distribution partitions. Each distribution partition serves multiple end users or devices, represented by subscriptions. The number of subscriptions served by one distribution partition may differ from the number served by other distribution partitions. In some embodiments, the number of subscriptions serviced by a partition may depend on the capacity of the distribution partition. Alternatively or additionally, a distribution partition may be selected to service a user based on logical or geographic proximity to the end user. This may allow alerts to be delivered to the end user in a more timely manner.
In the illustrated example, distribution partition 120-1 includes a distribution engine 122-1. The distribution engine 122-1 consults the database 124-1. Database 124-1 includes information about subscriptions and details about the associated delivery targets 102. In particular, the database may include information such as the platform of a target 102, the application used by the target 102, the network address of the target 102, user preferences of the end user using the target 102, and so forth. Using the information in database 124-1, the distribution engine 122-1 constructs a package 126-1, where package 126-1 includes the event 104-1 (or at least information from the event 104-1) and a delivery list 128-1 that identifies a plurality of the targets 102 to which information from the event 104-1 is to be sent as a notification. The package 126-1 is then placed in queue 130-1.
The distribution partition 120-1 may include multiple delivery engines. The delivery engines dequeue individual packages from the queue 130-1 and deliver notifications to the targets 102. For example, the delivery engine 108-1 may retrieve the package 126-1 from the queue 130-1 and send the event 104-1 information to the targets 102 identified in the delivery list 128-1. Thus, notifications 134 including event 104-1 information can be sent from the various distribution partitions to targets 102 in a variety of different formats applicable to different targets 102 and specific to individual targets 102. This allows individualized notifications 134 to be created for individual targets 102 from a common event 104-1 at the edge of the delivery system, rather than shipping a large number of individualized notifications through the delivery system.
The following discussion now refers to various methods and method acts that may be performed. Although the method acts may be discussed in, or illustrated by a flowchart as occurring in, a particular order, no particular ordering is required unless specifically stated, or unless required because an act is dependent on another act being completed before the act is performed.
Referring now to FIG. 3, a method 300 is shown. The method may be practiced in a computing environment. The method includes acts for distributing events to a large number of event consumers in a manner that can minimize message duplication and message latency. The method includes determining that an event should be sent to a particular group of consumers (act 302). For example, as shown in FIG. 2, an event 104 needs to be sent to one or more of the targets 102.
The method also includes replicating the event and providing separate replicas to the plurality of distribution partitions (act 304). For example, as shown in FIG. 2, the event is replicated at the distribution topic to multiple distribution partitions, such as distribution partition 120-1 and other distribution partitions as shown.
The method also includes, at each of the distribution partitions, packaging the copy of the event with a plurality of delivery lists to create a plurality of delivery packages (act 306). Each delivery list may describe a number of individual consumers that are intended to receive the event. An example of such a delivery package is shown at 126-1 in FIG. 2.
The method also includes using the delivery packages to distribute the event to the individual consumers specified in the delivery lists (act 308). For example, as shown in FIG. 2, the delivery engine 108-1 can deliver the event 104-1 to the targets 102 using the delivery list 128-1.
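Acts 302 through 308 can be sketched end to end. This is a simplified Python illustration in which a fixed chunk size stands in for the size-based packing described earlier, and 'delivery' merely records what would be sent:

```python
def distribute(event, partitions, chunk_size=3):
    """Replicate the event to every distribution partition (act 304),
    package it with delivery lists in each partition (act 306), and
    deliver to the listed targets (act 308)."""
    deliveries = []
    for partition in partitions:                      # one event copy per partition
        targets = partition["targets"]
        for i in range(0, len(targets), chunk_size):  # pack targets into lists
            bundle = {"event": event,
                      "delivery_list": targets[i:i + chunk_size]}
            for target in bundle["delivery_list"]:    # deliver per bundle
                deliveries.append((target, bundle["event"]))
    return deliveries
```

In the real system the per-partition loops run concurrently on separate infrastructure, which is the source of the parallelism discussed above; the sequential loop here is only for illustration.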
Some embodiments of method 300 may be implemented in which partitions are determined based on partition capability. For example, the number of targets to which a distribution partition distributes events may depend on the partition's capability, as determined by factors such as system hardware, network connectivity, current load, and so forth.
Some embodiments of method 300 may be implemented where partitions are determined based on location. For example, a partition (e.g., partition 120-1) may be assigned targets that are geographically or logically adjacent to the partition.
Some embodiments of method 300 may be implemented where the delivery list defines rules and constraints for how events are delivered to individual consumers. For example, the delivery list may include a filter specific to a consumer. In one example, a consumer (i.e., a target user) may define preferences regarding what types of events to receive or not receive. This information may be included in the delivery list so that decisions about whether to deliver the event can be made by the delivery engine at the edge of the delivery system.
In an alternative or supplemental embodiment, the delivery list may define network location rules. For example, the delivery list may include network paths to particular destinations.
In an alternative or supplemental embodiment, the delivery list may include security credential information. For example, security credentials may be required for an event to be delivered. In particular, an application on a device may require certain security protocol information when communicating with a server that provides event data. This security protocol information can be used by the delivery engine 108-1 to ensure that the event is delivered correctly.
In an alternative or supplemental embodiment, the delivery list may include rules that map the raw event data to a format desired by the consumer. For example, the event may be generic in form, but the delivery list may define the target's platform. This allows the delivery engine 108-1 to format the event 104 in a particular format appropriate for the defined platform before delivering the event to the target.
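Such edge-side format mapping might be sketched as follows. The platform shapes here are simplified stand-ins for illustration, not the real MPNS/APN/C2DM wire formats:

```python
import json

def format_notification(event, target):
    """Render a generic event into the platform-specific shape named
    in the target's delivery-list entry (invented formats)."""
    platform = target["platform"]
    if platform == "json":
        # e.g. a browser consuming JSON notifications over HTTP
        return json.dumps({"to": target["address"], "data": event["data"]})
    if platform == "text":
        # e.g. a plain-text channel
        return f'{target["address"]}: {event["data"]}'
    raise ValueError(f"no adapter for platform {platform!r}")
```

Keeping this mapping in the delivery engine means a single generic event 104 flows through the system, and per-platform differences are resolved only at the edge.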
The methods may be implemented by a computer system including one or more processors and a computer-readable medium, such as computer memory. In particular, the computer memory may store computer-executable instructions that, when executed by the one or more processors, cause various functions to be performed, such as the various acts described in the embodiments.
Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. The computer-readable medium storing the computer-executable instructions is a physical storage medium. Computer-readable media bearing computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can include at least two significantly different computer-readable media: physical computer-readable storage media and transmission computer-readable media.
Physical computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage (e.g., CD, DVD, etc.), magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A "network" is defined as one or more data links that allow electronic data to be transferred between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above are also included within the scope of computer-readable media.
Furthermore, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission computer readable media to physical computer readable storage media (or vice versa) upon reaching various computer system components. For example, computer-executable instructions or data structures received over a network or a data link may be cached in RAM within a network interface module (e.g., a "NIC") and then ultimately transferred to computer system RAM and/or to less volatile computer-readable physical storage media at a computer system. Thus, a computer-readable physical storage medium may be included in a computer system component that also (or even primarily) utilizes transmission media.
Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the features and acts described above are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (7)

1. In a computing environment, a method of distributing events to a large number of event consumers in a manner that minimizes message duplication and message latency, the method comprising:
determining that an event (104) should be sent to a particular group of consumers (302);
copying the event (104-1) and providing individual copies to a plurality of distribution partitions (120) (304);
at each of the distribution partitions (120), packaging copies (104-1) of the event with a plurality of delivery lists (128-1) to create a plurality of delivery packages (126-1) (306), the delivery lists (128-1) describing a plurality of individual consumers intended to receive the event (104-1); and
distributing the event (104-1) to the individual consumers specified in the delivery lists (128-1) using the delivery packages (126-1).
2. The method of claim 1, wherein the distribution partition is determined based on a distribution partition capability.
3. The method of claim 1, wherein the distribution partition is determined based on location.
4. The method of claim 1, wherein the delivery list defines rules and constraints for how the event is delivered to individual consumers.
5. The method of claim 4, wherein the constraints define user preferences, and wherein distributing the event to individual consumers specified in the delivery list using the delivery package comprises determining whether to deliver the event based on the user preferences in the delivery list.
6. The method of claim 4, wherein the constraints define rules that map raw event data to a platform-specific format for individual consumer devices.
7. The method of claim 1, wherein the delivery list includes security credential information.
HK13111009.1A 2011-09-12 2013-09-26 Distributing events to large numbers of devices HK1183753A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61/533,657 2011-09-12
US13/278,401 2011-10-21

Publications (1)

Publication Number Publication Date
HK1183753A true HK1183753A (en) 2014-01-03

