
HK40083650A - Techniques to provide streaming data resiliency utilizing a distributed message queue system - Google Patents


Info

Publication number
HK40083650A
HK40083650A (application HK62023072350.9A)
Authority
HK
Hong Kong
Prior art keywords
streaming data
data platform
message queue
message
service
Application number
HK62023072350.9A
Other languages
Chinese (zh)
Other versions
HK40083650B (en)
Inventor
拉万加纳·戈维尔
维贾亚苏里亚·拉维
Original Assignee
第一资本服务有限责任公司
Application filed by 第一资本服务有限责任公司
Publication of HK40083650A
Publication of HK40083650B

Description

Techniques for providing streaming data resiliency using a distributed message queue system
RELATED APPLICATIONS
This application claims priority to U.S. Non-Provisional Application No. 16/881,469, filed on 22/2020, and U.S. Provisional Application No. 62/961,059, filed on 14/1/2020, both entitled "TECHNIQUES TO PROVIDE STREAMING DATA RESILIENCY UTILIZING A DISTRIBUTED MESSAGE QUEUE SYSTEM". The contents of the above-mentioned applications are incorporated herein by reference in their entirety.
Background
A real-time data processing environment may include various types of data systems and devices that provide and process data. In an example, real-time data may be time-critical and/or mission-critical data that needs to be processed in a timely manner and provided to other systems. For example, banking systems may need to process large amounts of real-time data to perform various banking functions, including processing transactions and providing fraud detection. When a failure occurs, it is important that data is not lost and that the system be restored to an operational state as quickly as possible. The embodiments discussed herein are directed to solving these and other problems.
Disclosure of Invention
Embodiments discussed herein are directed to providing streaming data in a resilient manner. For example, embodiments may include devices, systems, components, and techniques, including computer-implemented operations, for receiving data from a data service provider, transmitting one or more messages including the data to a streaming data platform, and detecting a delivery failure of a message of the one or more messages to the streaming data platform. Embodiments also include transmitting the message to a distributed message queue service, wherein the message is transmitted to the distributed message queue service based on the detection of the delivery failure and stored in a queue of the distributed message queue system. Embodiments further include publishing the message to the streaming data platform, determining whether the publication of the message to the streaming data platform was successful or unsuccessful, and retrying the publication of the message to the streaming data platform when the publication of the message is unsuccessful.
Drawings
Fig. 1 shows an example configuration of a system for processing data and providing streaming data resiliency.
Fig. 2A illustrates an example process flow for processing data via a streaming data platform.
Fig. 2B illustrates an example process flow for providing streaming data resiliency.
Fig. 3A illustrates a second example process flow for processing data via a streaming data platform.
Fig. 3B illustrates a second example process flow for providing streaming data resiliency.
FIG. 4 illustrates an example logic flow.
Fig. 5 illustrates a second example logic flow.
Fig. 6 shows an example of a system architecture.
Detailed Description
Embodiments are generally directed to providing resiliency in capturing and processing streaming data from one or more data sources. More specifically, embodiments include utilizing a distributed cloud-based message queue system to process and provide messages that include data from real-time data streams. In one configuration, the streaming system may include an application that receives and processes streaming data. The application may provide the processed data to a streaming data platform, which may be used by other consumer applications to further process and/or store the data. For example, a streaming data platform may be used to provide intelligent real-time decisions for customers using the system. In some cases, data from an application to a streaming data platform may not be delivered. In these cases, undelivered data may be provided to the distributed message queue system for storage in a queue and provided to the streaming data platform once any problems are resolved.
In another configuration, a flow may include an application that may receive data, process the data, and send all of the processed data to a streaming data platform using a distributed message queue system. In this configuration, the application may retry sending undelivered messages to the streaming data platform when a delivery failure occurs. These and other details will become apparent from the description below.
Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing them. The intention is to cover all modifications, equivalents, and alternatives falling within the scope of the claims.
Fig. 1 illustrates a system 100 for real-time or near real-time processing of streaming data while providing resilient backup capabilities using a distributed message queue system 105 and a service 102. The data may be provided by one or more systems based on any number of event occurrences. Typically, the data is received by the streaming system 120 for processing and provision to other internal systems, such as the consumer system 109. These systems may perform additional processing on the data to provide insight. In one example, the system 100 may be part of a banking platform, and the streaming data may be related to banking functions, such as performing transactions, processing credit or loan applications, and other general banking functions. In embodiments, data may be needed to perform critical functions, such as fraud detection, transaction authentication, loan/credit approval, and the like. The utilization of distributed message queue system 105 ensures that system 100 can continue to provide critical functions with fault tolerance by providing backup data paths and logging capabilities.
In an embodiment, the system 100 may include a streaming system 120 that can receive and process streaming data from one or more internal or external data sources, such as the data provider system 101. Data provider system 101 can be any type of computing system that can provide data in real-time or near real-time and in a streaming manner. In one example, data provider system 101 may be a payment processing system, such as Synovus Financial Corp., Fiserv, First Data Corp., etc., and may transfer data to streaming system 120 when performing a transaction. The data may include transaction data such as merchant identification information, transaction price, transaction time/date, etc. The data may also include authentication data for the customer, such as a user or card identifier, an account number or token, a card security code or card verification value, an expiration date, and the like. The data provider system 101 may provide data when a transaction occurs, and the data may be used by one or more consumer systems 109 to provide insights and make decisions, e.g., authenticate users, detect fraud, provide incentives, update models, and so forth. In embodiments, system 100 may include more than one data provider system 101, and is not limited in this manner. For example, additional data provider systems 101 may include a customer service system that provides customer service information, a news service system that provides news, a reward system that provides reward data, and a voice response system that provides voice response data.
In embodiments, streaming system 120 may receive data through one or more Application Programming Interface (API) portals coupled with one or more data provider systems 101. In an embodiment, the data may be received by one or more service applications 103 of the streaming system 120, the streaming system 120 being configured to transmit streaming data to the streaming data platform 107. The service application 103 may be a software engine or component coupled with one or more APIs to receive data from the data provider system 101, process the data, and provide the data to the consumer system 109. Streaming data platform 107 may process formatted data and may utilize a binary Transmission Control Protocol (TCP)-based protocol. Further, the service application 103 may utilize a producer API to publish streams of records in a Java format. In other cases, streaming data platform 107 may utilize a consumer API to subscribe to streams of records in a Java format.
In one example configuration, as shown in fig. 2A/2B, the service application 103 may provide the processed data in one or more messages directly to the streaming data platform 107 for publication to the consumer system 109. In this configuration, the distributed message queue system 105 can only be used as a backup during a data loss event to ensure that undelivered messages are ultimately delivered to the streaming data platform 107. In another configuration, as shown in fig. 3A/3B, service application 103 may utilize distributed message queue system 105 to provide processed data to streaming data platform 107. In this configuration, the service application 103 may process undelivered data. More specifically, the distributed message queue system 105 can notify the service application 103 of the undelivered data, and the service application 103 can send the undelivered processed data directly to the streaming data platform 107 for publication to the consumer system 109.
In embodiments, the streaming system 120 may include a plurality of different service applications 103 to perform various operations to process streaming data. In one example, the service application 103 may be an authentication engine for receiving and processing data for authentication, e.g., processing authentication request data to make authentication decisions. In another example, service application 103 may be a fraud engine for processing data for fraud detection, such as transaction information, transaction location, transaction amount, e.g., for real-time fraud determination. In a third example, service application 103 may be a modeling engine for receiving and processing data for model scoring, e.g., determining model features and scores, detecting model errors and warnings, and utilizing transaction data, analysis data, and other data for data validation.
In an embodiment, streaming data platform 107 may receive processed data directly from service application 103 or from distributed message queue system 105, as previously described. The streaming data platform 107 may publish data for one or more consumer systems 109 for further processing and/or storage of the data. For example, the consumer system 109 can include the modeling system 108, which can receive data including model scores to update models, generate new models, perform model verification, and store in a data store. In another example, the consumer system 109 may include an authentication system 110 to further process the data for authentication and store the authentication decision in a data store. In a third example, the consumer system 109 can include a fraud system 112 to further process the data to perform fraud defense actions, update account status, apply account limits, create fraud cases, and trigger alerts.
In an embodiment, the streaming system 120 may be part of a critical system that provides banking services to customers and providers, such as transaction processing and fraud detection. Thus, any data loss is detrimental to both the streaming system 120 and the consumer systems 109 that utilize the data. Previous solutions used systems and servers to record data, and in such cases were sometimes affected by the same interruptions as the host system processing the data. In one example, previous systems utilized configurations including APACHE log4j to record data during a failure event and provide the data when the primary system is brought back online. However, in some cases, a fault event will also shut down (take down) the logging system itself. In other cases, when APACHE log4j records data during a failure, it comes back online very slowly, e.g., in more than fifteen minutes.
Embodiments discussed herein are directed to providing a store-and-forward solution to provide resiliency and eliminate failure event downtime. More specifically, embodiments include utilizing a distributed message queue system 105 with a distributed message queue service 102, a queue 104, and a distributed message queue client 106 to store and forward data during a failure event. In one example, distributed message queue system 105 may be a cloud-based system, such as Amazon Web Services (AWS), the distributed message queue service 102 can be the AWS Simple Queue Service (SQS), and the distributed message queue client 106 can be an AWS client.
In an embodiment, as previously described, the distributed message queue system 105 may be utilized only when a failure is detected. More specifically, service application 103 may receive streaming data, process the streaming data, and generate one or more messages to send the data to streaming data platform 107. Service application 103 may transmit streaming data directly to streaming data platform 107 (e.g., without utilizing distributed message queue system 105). In one example, a redirection function of the SQS may be set to true, e.g., via a "redirect" configuration property.
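The direct-delivery path with redirect-on-failure described above can be sketched in memory as follows; the StreamingPlatform interface, class names, and return values are illustrative assumptions, not the actual streaming platform or SQS APIs:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: direct delivery with redirect-on-failure.
class FailoverSender {
    // Stand-in for the streaming data platform; returns true on success.
    interface StreamingPlatform {
        boolean deliver(String message);
    }

    private final StreamingPlatform platform;
    // Stand-in for queue 104, which stores redirected messages.
    private final Deque<String> backupQueue = new ArrayDeque<>();

    FailoverSender(StreamingPlatform platform) {
        this.platform = platform;
    }

    // Returns "platform" if delivered directly, "queue" if redirected.
    String send(String message) {
        if (platform.deliver(message)) {
            return "platform";
        }
        backupQueue.addLast(message); // redirect the failed message
        return "queue";
    }

    int queued() {
        return backupQueue.size();
    }
}
```

In this sketch the queue is only touched on a failed delivery, mirroring the backup-only configuration.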
In some cases, service application 103 may detect a failure in delivery of one or more messages to streaming data platform 107. For example, service application 103 may receive or determine a delivery timeout (e.g., 100 milliseconds (ms)) or determine a connection failure. In these cases, the service application 103 may be configured to redirect the failed message to the distributed message queue system 105 to ensure that the streaming data corresponding to the failed message is not lost and is ultimately delivered to the streaming data platform 107 and the consumer system 109. In some embodiments, the distributed message queue system 105 may be configured to trigger an alert when a failure message is being delivered. An alarm may be triggered when a specified number of "ERROR" messages are received and processed. For example, a CloudWatch alarm using a metric filter for "ERROR" monitoring may trigger when more than 10 errors occur in 5 minutes. A QueueDepth CloudWatch alarm can also be set to trigger when the queue depth from the distributed message queue system exceeds 1000 three times in 1 minute.
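The alarm thresholding described above (e.g., more than 10 "ERROR" messages within a 5-minute window) can be illustrated with a minimal in-memory counter; a real deployment would rely on CloudWatch metric filters, and the class below is only a hypothetical sketch of the sliding-window logic:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: fire when more errors than a threshold fall
// inside a sliding time window.
class ErrorAlarm {
    private final Deque<Long> errorTimes = new ArrayDeque<>();
    private final long windowMs;
    private final int threshold;

    ErrorAlarm(long windowMs, int threshold) {
        this.windowMs = windowMs;
        this.threshold = threshold;
    }

    // Record an ERROR at the given timestamp; returns true if the alarm fires.
    boolean recordError(long nowMs) {
        errorTimes.addLast(nowMs);
        while (!errorTimes.isEmpty() && nowMs - errorTimes.peekFirst() > windowMs) {
            errorTimes.removeFirst(); // drop errors outside the window
        }
        return errorTimes.size() > threshold;
    }
}
```

For the example rule in the text, the window would be 5 minutes (300,000 ms) and the threshold 10.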
In an embodiment, service application 103 may transmit one or more failed messages to distributed message queue system 105 via an API. More specifically, the service application 103 may invoke or utilize the distributed message queue service 102 and send one or more messages for storage in the queue 104. In embodiments, the queue 104 may be an encrypted queue, where one or more topics are configured for notifications for each application in each region. In one example, the service application 103 may call a "SendMessageRequest" instance of the SQS and include the name of the queue (sqs.queuename=), the region of the queue (sqs.region=), and the body of the message. The request is then passed to the distributed message queue service 102 send-message method, which may return a send-message response object.
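The request/response exchange described above can be sketched with in-memory stand-ins; the class and field names mirror the description ("SendMessageRequest," queue name, region, message body) but are hypothetical, not the real AWS SDK types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Illustrative sketch of the send-message exchange with the queue service.
class QueueService {
    static class SendMessageRequest {
        final String queueName, region, body;
        SendMessageRequest(String queueName, String region, String body) {
            this.queueName = queueName;
            this.region = region;
            this.body = body;
        }
    }

    static class SendMessageResponse {
        final String messageId;
        SendMessageResponse(String messageId) {
            this.messageId = messageId;
        }
    }

    // Stand-in for the encrypted queue 104.
    final List<String> queue = new ArrayList<>();

    // Store the message body and return a response object, as in the text.
    SendMessageResponse sendMessage(SendMessageRequest req) {
        queue.add(req.body);
        return new SendMessageResponse(UUID.randomUUID().toString());
    }
}
```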
In embodiments, the distributed message queue system 105 may publish or send messages to the streaming data platform 107 and/or hold them in the queue 104 until they can be delivered. More specifically, distributed message queue client 106 may publish one or more messages to streaming data platform 107. In one example, distributed message queue client 106 may poll queue 104 to retrieve and send messages to streaming data platform 107. The queue 104 may be configured as an event source for the client 106 on the distributed message queue system 105. When a message or a record of a message is placed in the queue 104, an event may be triggered and detected by the client 106. The client 106 may retrieve one or more messages from the queue 104 and send the one or more messages as a single message or in batches (e.g., five messages).
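The polling-and-batching behavior described above can be sketched as follows, assuming a batch size of five as in the example; the class and method names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch: drain up to five messages per poll, as in the
// batching example above.
class QueuePoller {
    static final int BATCH_SIZE = 5;

    // Remove and return up to BATCH_SIZE messages from the head of the queue.
    static List<String> pollBatch(Deque<String> queue) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < BATCH_SIZE && !queue.isEmpty()) {
            batch.add(queue.removeFirst());
        }
        return batch;
    }
}
```

Each returned batch would then be published to the streaming data platform; an empty batch means the queue has been drained.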
In some embodiments, each service application 103 may be configured with an associated queue 104, and the client 106 may process and send data to the streaming data platform 107 based on which queue 104 has data. In other cases, a single queue 104 may be used for all service applications 103, and the client 106 may automatically discover the correct consumer of the streaming data platform 107 based on the message envelope.
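The single-queue variant described above, in which the client discovers the correct consumer from the message envelope, can be sketched as follows; the "app:payload" envelope format and the topic names are assumptions for illustration only:

```java
import java.util.Map;

// Illustrative sketch: route a message to its consumer using the envelope.
class EnvelopeRouter {
    // Envelope format "app:payload" is an assumption for illustration.
    static String routeFor(String envelopedMessage, Map<String, String> appToTopic) {
        int sep = envelopedMessage.indexOf(':');
        if (sep < 0) {
            return null; // no envelope; consumer cannot be discovered
        }
        return appToTopic.get(envelopedMessage.substring(0, sep));
    }
}
```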
The distributed message queue client 106 may determine whether a message published to the streaming data platform 107 was successfully delivered or unsuccessfully delivered. In some cases, streaming data platform 107 may be unable to process and/or receive one or more messages from distributed message queue client 106. For example, the streaming data platform 107 may throttle data, an error may be returned to the client 106, the platform 107 may not respond, and so on. When publication of the one or more messages is unsuccessful, distributed message queue client 106 may retry publication of the one or more messages to streaming data platform 107 until the one or more messages are successfully delivered or a period of time expires. In some cases, distributed message queue client 106 may send one or more messages that were not successfully delivered back to queue 104 until some later point in time before retrying publication. For example, the client 106 may send one or more messages back to the queue 104, wait 1000 ms, and then retry publishing the one or more messages to the streaming data platform 107. Note that embodiments are not limited in this manner and the time is configurable.
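The retry behavior described above (return the message to the queue, wait, retry until success or a deadline) can be sketched as follows; the wait action is injected so the logic can be exercised without sleeping, and all names are illustrative:

```java
// Illustrative sketch: retry publication until success or a bounded number
// of attempts; the wait between retries is injected (e.g., 1000 ms in the
// description) so the logic can be exercised without sleeping.
class RetryingPublisher {
    // Stand-in for the streaming data platform.
    interface Platform {
        boolean publish(String message);
    }

    // Returns the attempt number on success, or -1 if the retry budget
    // (standing in for the expiring time period) is exhausted.
    static int publishWithRetry(Platform platform, String message,
                                int maxAttempts, Runnable waitBetweenRetries) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (platform.publish(message)) {
                return attempt;
            }
            waitBetweenRetries.run(); // e.g., return to queue and wait 1000 ms
        }
        return -1;
    }
}
```

A production caller would pass a wait action that re-enqueues the message and sleeps for the configured delay.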
In some embodiments, as previously described, service application 103 may send all messages to distributed message queue system 105 for transmission to streaming data platform 107. For example, the service application 103 may set a redirect function to true, e.g., "redirect.sdp.messages.to.sqs=true," to send all messages to the distributed message queue system 105 to be processed by the distributed message queue service 102. For example, the service application 103 may communicate with the distributed message queue service 102 via API messages that include streaming data and processed streaming data. The distributed message queue service 102 may store each message in a queue 104 for publication to the streaming data platform 107. In one example, the service application 103 may call a "SendMessageRequest" instance of the SQS and include the name of the queue 104 and the body of the message. The request is then passed to the distributed message queue service 102 send-message method, which may return a send-message response object to the service application 103.
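The redirect-all configuration described above can be sketched as follows; the property string is taken from the description, while the in-memory types are illustrative stand-ins for SQS and the streaming data platform:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: when the redirect property is true, every message
// is routed through the queue; otherwise it is sent directly.
class RedirectAllSender {
    // Property named in the description; the value is read at construction.
    static final String REDIRECT_PROPERTY = "redirect.sdp.messages.to.sqs";

    // Stand-in for the streaming data platform.
    interface Platform {
        boolean deliver(String message);
    }

    private final boolean redirectAll;
    private final Platform platform;
    // Stand-in for queue 104.
    final Deque<String> queue = new ArrayDeque<>();

    RedirectAllSender(boolean redirectAll, Platform platform) {
        this.redirectAll = redirectAll;
        this.platform = platform;
    }

    // Returns "queue" when redirected, "platform" on direct delivery,
    // "failed" when a direct delivery is unsuccessful.
    String send(String message) {
        if (redirectAll) {
            queue.addLast(message); // stored for publication by the client
            return "queue";
        }
        return platform.deliver(message) ? "platform" : "failed";
    }
}
```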
The distributed message queue system 105 may publish each of the one or more messages stored in the queue 104 to the streaming data platform 107 using the distributed message queue client 106. For example, distributed message queue client 106 may poll queue 104 to retrieve one or more messages and send the one or more messages to streaming data platform 107. As described above, the queue 104 may be configured as an event source for the client 106 on the distributed message queue system 105. When a message or a record of a message is placed in the queue 104, an event can be triggered and detected by the client 106. The client 106 may retrieve one or more messages from the queue 104 and send the one or more messages as a single message or in batches (e.g., five messages).
In some cases, one or more messages may not be delivered to streaming data platform 107. The distributed message queue client 106 may determine whether a message published to the streaming data platform 107 was successfully delivered or unsuccessfully delivered. In these cases, the distributed message queue client 106 may store undelivered messages back in the queue 104, and the distributed message queue service 102 may return the object for which the delivery failure occurred to the service application 103. In an embodiment, the service application 103 may determine that a delivery failure occurred based on information received from the distributed message queue system 105. In these cases, service application 103 may transmit undelivered messages directly to streaming data platform 107.
In an embodiment, service application 103 may retry sending undelivered messages to streaming data platform 107. For example, service application 103 may determine whether a message directly delivered to streaming data platform 107 was successful or unsuccessful and retry delivering the message to streaming data platform 107 when the message is unsuccessful. Service application 103 may continue to retry to transmit messages until delivery to streaming data platform 107 is successful or a defined time period has expired.
Fig. 2A illustrates an example process flow 200 for system 100, where service application 103 sends a message directly to streaming data platform 107 unless a failure occurs. In the illustrated process flow 200, messages are delivered to the streaming data platform 107 without failure.
At line 201 of process flow 200, service application 103 may receive streaming data from one or more data provider systems 101. As described above, the streaming data may include data that performs banking functions. Service application 103 may receive the data, process the data, and generate one or more messages of the streaming data for transmission to streaming data platform 107.
At line 203, service application 103 may send one or more messages to streaming data platform 107. At line 205, the streaming data platform 107 may send a message to the consumer system 109. In some cases, the streaming data platform 107 may publish data, and one or more of the consumer systems 109 may subscribe to and receive particular data. For example, modeling system 108 and authentication system 110 may subscribe to and receive authentication data. In another example, fraud system 112 may subscribe to and receive fraud data. In a third example, modeling system 108 can subscribe to and receive model data.
FIG. 2B illustrates an example process flow 220 of the same configuration as FIG. 2A; however, in this flow a failure event is handled.
At line 221 of process flow 220, service application 103 may receive streaming data from one or more data provider systems 101. Service application 103 may receive the data, process the data, and generate one or more messages of the streaming data for transmission to streaming data platform 107.
At line 223, service application 103 may send one or more messages to streaming data platform 107. In some cases, service application 103 may detect a failure in delivery of one or more messages to streaming data platform 107, as indicated by dashed line 225. In these cases, the service application 103 may be configured to redirect the failure message to a distributed message queue system.
At line 227, the service application 103 may transmit one or more failure messages to the distributed message queue system 105 via the API. For example, the service application 103 may invoke or utilize the distributed message queue service 102 and send one or more messages for storage in the queue 104.
In an embodiment, at line 229, the distributed message queue system 105 including the distributed message queue client 106 may publish or send messages to the streaming data platform 107. In some cases, streaming data platform 107 may not receive the message and/or the message may not be delivered to streaming data platform 107. The distributed message queue client 106 may determine whether a message published to the streaming data platform 107 was successfully delivered or unsuccessfully delivered.
At dashed line 231, distributed message queue system 105 may determine that one or more messages were not received by streaming data platform 107. For example, the streaming data platform 107 may throttle data, errors may be returned to the client 106, the platform 107 may not respond, etc. Further, at line 231, distributed message queue client 106 may retry publishing the one or more messages to streaming data platform 107 when the publishing of the one or more messages is unsuccessful, until the one or more messages are successfully delivered or a period of time expires.
At line 233, the streaming data platform 107 may send a message to the consumer system 109. The streaming data platform 107 may publish data and one or more of the consumer systems 109 may subscribe to and receive particular data.
Fig. 3A shows an example process flow 300 for system 100, where service application 103 sends all messages to distributed message queue system 105 for further sending to streaming data platform 107. In the illustrated process flow 300, messages are delivered to the streaming data platform 107 without a failure.
At line 301 of process flow 300, service application 103 may receive streaming data from one or more data provider systems 101. Service application 103 may receive the data, process the data, and generate one or more messages of the streaming data for transmission to streaming data platform 107.
At line 303, the service application 103 may send one or more messages to the distributed message queue system 105. For example, the service application 103 may communicate with the distributed message queue service 102 via API messages that include streaming data and processed streaming data. The distributed message queue service 102 may store each message in a queue 104 for publication to the streaming data platform 107.
At line 305, the distributed message queue system 105 may publish each of the one or more messages stored in the queue 104 to the streaming data platform 107 using the distributed message queue client 106. The client 106 may retrieve one or more messages from the queue 104 and send the one or more messages as a single message or in batches, as previously described. Further, and at line 307, the streaming data platform 107 may send a message to the consumer system 109.
FIG. 3B illustrates an example process flow 320 similar to FIG. 3A. However, in this example process flow 320, distributed message queue system 105 may not be able to deliver one or more messages to streaming data platform 107. At lines 321, 323, and 325, the streaming system 120 performs the same operations as the corresponding lines 301, 303, and 305, respectively.
However, as mentioned, one or more messages may not be delivered to streaming data platform 107. At line 327, distributed message queue client 106 may determine whether a message published to streaming data platform 107 was successfully delivered or unsuccessfully delivered. In these cases, the distributed message queue client 106 may store undelivered messages back in the queue 104, and the distributed message queue service 102 may return an object indicating the delivery failure to the service application 103 at line 329. In an embodiment, the service application 103 may determine that a delivery failure occurred based on information received from the distributed message queue system 105.
In an embodiment, at line 331, service application 103 may transmit undelivered messages directly to streaming data platform 107. If a message transmitted by service application 103 to streaming data platform 107 is not delivered, service application 103 may retry sending the undelivered message to streaming data platform 107 at line 333. For example, service application 103 may determine whether a message directly delivered to streaming data platform 107 was successful or unsuccessful and retry delivery of the message to streaming data platform 107 when the message is unsuccessful. Service application 103 may continue to retry transmitting the message until delivery to streaming data platform 107 is successful or a defined time period has expired. At line 335, the streaming data platform 107 may transmit the received message to the consumer system 109, as previously described.
Fig. 4 illustrates an example of a logic flow 400, which may be representative of one or more operations executed by the system 100 to provide resilient streaming capabilities.
At block 410, the logic flow 400 comprises receiving streaming data from a data service provider. In one example, the consumer system 109 may utilize the streaming data to provide critical functions, such as those found in bank computing systems. In an embodiment, a service application 103 of the streaming system 120 may receive the streaming data based on the service the application provides. The service application 103 may perform operations and processing on the streaming data to provide services for the system 100.
At block 420, the logic flow 400 comprises transmitting one or more messages comprising the streaming data to the streaming data platform. The messages may be generated by the service application 103 for the streaming data platform 107. For example, the streaming data platform 107 may process formatted data and may utilize a binary Transmission Control Protocol (TCP)-based protocol. Further, the service application 103 may publish streams of records in a Java format using the producer API. In other cases, the streaming data platform may utilize a consumer API to subscribe to streams of records in a Java format.
At block 430, logic flow 400 includes detecting a delivery failure of a message to a streaming data platform. For example, service application 103 may receive an indication that a message cannot be delivered or that delivery times out.
At block 440, based on the failure detection, the logic flow 400 includes transmitting the message to a distributed message queue service. For example, the service application 103 may utilize an API provided by the service 102 of the distributed message queue system 105.
At block 450, the logic flow 400 includes publishing the message to the streaming data platform. More specifically, distributed message queue client 106 may retrieve the failed message from queue 104 and publish it to streaming data platform 107.
At block 460, the logic flow 400 includes determining whether the message published to the streaming data platform was successful or unsuccessful. For example, the distributed message queue client 106 may determine whether a message is delivered. In response to determining that the message is delivered to the streaming data platform, the client 106 may notify the service application 103 via the service 102 and a return object.
At block 470, the logic flow 400 includes retrying to publish the message to the streaming data platform when the message publication is unsuccessful. In some cases, the client 106 may retry sending the message immediately, e.g., once the client 106 knows that the delivery failed. In other cases, the client 106 may return the message to the queue 104 and may retry at a later point in time. For example, the queue 104 may be a first-in-first-out queue and the client 106 may retry once the failure message passes through the queue.
Fig. 5 illustrates an example of a logic flow 500, which may be representative of one or more operations executed by flow system 120 to provide resilient flow capability.
At block 510, the logic flow 500 includes receiving streaming data from a data service provider, as similarly discussed with respect to block 410 of flow 400.
At block 520, the logic flow 500 includes transmitting one or more messages including data to a distributed message queue service on a distributed message queue service system. The distributed message queue service system may store each of the one or more messages in a queue for publication to the streaming data platform.
At block 530, the logic flow 500 includes publishing each of the one or more messages stored in the queue to the streaming data platform. More specifically, a distributed message queue service client of the distributed message queue system may publish messages stored in a queue for receipt by the streaming data platform 107.
At block 540, the logic flow 500 includes determining that delivery of a message of the one or more messages published by the distributed message queue service client to the streaming data platform failed. More specifically, the service application 103 may receive, for example via a return object, an indication that delivery of one or more messages from the distributed message queue system 105 failed. At block 550, the logic flow 500 includes transmitting the message directly to the streaming data platform through the service application.
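Logic flow 500 reverses the roles of flow 400: the distributed message queue is the primary path, and direct transmission by the service application is the fallback. The following hypothetical Python sketch, not an implementation from the patent, assumes an in-process `queue.Queue` standing in for the distributed message queue and a caller-supplied `publish` callable standing in for the streaming data platform; all names are illustrative.

```python
import queue

class DeliveryFailure(Exception):
    """Raised by the publish callable when the platform rejects a message."""

class QueueFirstProducer:
    """Sketch of logic flow 500: publish via the queue; fall back to direct send."""

    def __init__(self, publish):
        self.publish = publish      # delivery to the streaming data platform
        self.fifo = queue.Queue()   # stands in for the distributed message queue

    def send(self, message):
        """Block 520: every message is first handed to the queue service."""
        self.fifo.put(message)

    def publish_all(self):
        """Blocks 530-550: the queue client publishes each queued message;
        when that fails, the service application transmits it directly."""
        results = []
        while not self.fifo.empty():
            message = self.fifo.get()
            try:
                self.publish(message)                  # block 530: queue client path
                results.append((message, "via queue"))
            except DeliveryFailure:
                self.publish(message)                  # block 550: direct transmission
                results.append((message, "direct"))
        return results
```

For brevity the direct fallback is attempted only once here; per the retry behavior described above, a fuller model would keep retrying the direct transmission until success or until a defined period of time expired.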
As shown in FIG. 6, the computing architecture 600 may include a computer 602 having a processing unit 604, a system memory 606, and a system bus 608. The processing unit 604 can be any of various commercially available processors or can be a specially designed processor.
The system bus 608 provides an interface for system components including, but not limited to, the system memory 606 and the processing unit 604. The system bus 608 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures.
The system memory 606 may include any type of computer-readable storage media, including any type of volatile and non-volatile memory. The computer 602 may include any type of computer-readable storage medium, including an internal (or external) Hard Disk Drive (HDD) 614. In various embodiments, the computer 602 may include any other type of disk drive, such as a magnetic floppy disk and/or an optical disk drive, for example. The HDD 614 may be connected to the system bus 608 by a HDD interface 624.
In various embodiments, any number of program modules can be stored in the drives and system memories 606 and/or 614, such as an operating system 630, one or more applications 632, other program modules 634, and program data 636, for example.
A user can enter commands and information into the computer 602 through one or more wired/wireless input devices, e.g., such as a keyboard 638 and a pointing device, such as a mouse 640. These and other input devices can be connected to the processing unit 604 through an input device interface 642 that is coupled to the system bus 608. A monitor 644 or other type of display device is also connected to the system bus 608 via an interface, such as a video adapter 646. The monitor 644 may be internal or external to the computer 602.
The computer 602 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer 648. The remote computer 648 can be a workstation, a server computer, a router, a personal computer, a portable computer, a microprocessor-based entertainment appliance, a smart phone, a tablet computer, a peer device, or other common network node, and typically includes many or all of the elements described relative to the computer 602. The logical connections depicted include wired and/or wireless connections to a network 652, such as a Local Area Network (LAN) and/or larger networks, e.g., a Wide Area Network (WAN). The network 652 may provide a connection to a global communication network, such as the Internet. The network adapter 656 may facilitate wired and/or wireless communication to the network 652. The computer 602 may be operable to communicate in accordance with any known computer networking technology, standard, or protocol, via any known wired or wireless communication technology, standard, or protocol.
It should be noted that the methods described herein need not be performed in the order described, or in any particular order. Further, various activities described with respect to the methods identified herein can be executed in serial or parallel fashion.
Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. Accordingly, the scope of the various embodiments includes any other applications in which the above compositions, structures, and methods are used.
It is emphasized that the abstract of the present disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that allows the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Furthermore, in the foregoing detailed description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, novel subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Furthermore, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A computer-implemented method, comprising:
receiving, by a service application, data from a data service provider;
transmitting, by the service application, one or more messages comprising the data to a streaming data platform;
detecting, by the service application, a delivery failure of a message of the one or more messages to the streaming data platform;
transmitting, by the service application and via an Application Programming Interface (API), the message to a distributed message queue service, wherein the message is transmitted to the distributed message queue service and stored in a queue of a distributed message queue system based on the delivery failure;
publishing, by a distributed message queue service client of the distributed message queue system, the message to the streaming data platform;
determining, by the distributed message queue service client, whether the message published to the streaming data platform was successful or unsuccessful; and
retrying, by the distributed message queue service client, publication of the message to the streaming data platform when publication of the message is unsuccessful.
2. The computer-implemented method of claim 1, wherein detecting the delivery failure comprises detecting at least one of a delivery timeout, a connection failure, or a combination thereof.
3. The computer-implemented method of claim 1, wherein retrying publication occurs until delivery to the streaming data platform is successful or a defined period of time has expired.
4. The computer-implemented method of claim 1, comprising storing, by the distributed message queue service, the message in a queue of the distributed message queue system until the message is posted to the streaming data platform or a defined period of time has expired.
5. The computer-implemented method of claim 4, comprising transmitting, by the distributed message queue service client, the message to the distributed message queue system for storage in the queue when publication of the message is unsuccessful.
6. The computer-implemented method of claim 1, wherein the data comprises at least one of authentication data, fraud data, model data, or a combination thereof, and the service application comprises one of an authentication engine, fraud engine, or modeling engine.
7. The computer-implemented method of claim 6, comprising publishing, by the streaming data platform, the authentication data to at least one of an authentication system and a modeling system.
8. The computer-implemented method of claim 6, comprising publishing, by the streaming data platform, the fraud data to a fraud detection system.
9. The computer-implemented method of claim 6, comprising publishing, by the streaming data platform, the model data to a modeling system.
10. A system, comprising:
a memory; and
one or more processors coupled with the memory, the one or more processors configured to:
receiving, by a service application, data from a data service provider;
transmitting, by the service application and via an Application Programming Interface (API), one or more messages including the data to a distributed message queue service on a distributed message queue service system, and each of the one or more messages is stored in a queue for publication to a streaming data platform;
publishing, by a distributed message queue service client of the distributed message queue system, each of the one or more messages stored in the queue to the streaming data platform;
determining, by the service application, that delivery of a message of the one or more messages published by the distributed message queue service client to the streaming data platform failed; and
transmitting, by the service application, the message directly to the streaming data platform, wherein the message is transmitted directly to the streaming data platform based on the delivery failure.
11. The system of claim 10, the one or more processors configured to detect the delivery failure comprises detecting at least one of a delivery timeout, a connection failure, or a combination thereof.
12. The system of claim 10, the one or more processors to:
determining, by the service application, whether a message directly transmitted to the streaming data platform is successful or unsuccessful;
when the message transmission is unsuccessful, retrying, by the service application, transmission of the message to the streaming data platform and continuing to retry transmission of the message until delivery to the streaming data platform is successful or a defined period of time has expired.
13. The system of claim 10, the one or more processors store the message in a queue of the distributed message queue system through the distributed message queue service until the message is posted to the streaming data platform or a defined period of time has expired.
14. The system of claim 10, the one or more processors to transmit the message to the distributed message queue system through the distributed message queue service client for storage in the queue and to notify the service application when publication of the message is unsuccessful.
15. The system of claim 10, wherein the data comprises at least one of authentication data, fraud data, model data, or a combination thereof, and the service application comprises one of an authentication engine, fraud engine, or modeling engine.
16. The system of claim 15, the one or more processors to publish the authentication data to at least one of an authentication system and a modeling system through the streaming data platform.
17. The system of claim 15, the one or more processors to publish the fraud data to a fraud detection system through the streaming data platform.
18. The system of claim 15, the one or more processors to publish the model data to a modeling system through the streaming data platform.
19. A system, comprising:
a memory; and
one or more processors coupled with the memory, the one or more processors configured to:
receiving, by a service application, data from a data service provider;
transmitting, by the service application, a plurality of messages including the data to a streaming data platform;
detecting, by the service application, a delivery failure of the plurality of messages to the streaming data platform;
transmitting, by the service application, the plurality of messages to a distributed message queue service of a distributed message queue system, wherein the plurality of messages are transmitted to the distributed message queue service based on the delivery failure and stored in a queue of the distributed message queue system for publication to the streaming data platform;
publishing, by a distributed message queue service client of the distributed message queue system, each of the plurality of messages to the streaming data platform;
determining, by the distributed message queue service client, whether each of the plurality of messages published to the streaming data platform was successful or unsuccessful; and
retrying, by the distributed message queue service client, publication of each unsuccessfully published message of the plurality of messages to the streaming data platform.
20. The system of claim 19, wherein the one or more processors are to retry publication until delivery to the streaming data platform is successful or a defined period of time has expired.
HK62023072350.9A 2020-01-14 2021-01-14 Techniques to provide streaming data resiliency utilizing a distributed message queue system HK40083650B (en)

Applications Claiming Priority (2)

Application Number    Priority Date
US62/961,059          2020-01-14
US16/881,469          2020-05-22

Publications (2)

Publication Number    Publication Date
HK40083650A           2023-06-30
HK40083650B           2024-12-20
