EP4004732A1 - Normalizing messaging flows, optimizing messaging flows, and virtual programming in a microservice architecture - Google Patents
- Publication number
- EP4004732A1 (application number EP20761035.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- messaging
- stacks
- blocking
- programmable
- message
- Prior art date
- 2019-08-09
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/547—Remote procedure calls [RPC]; Web services
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/133—Protocols for remote procedure calls [RPC]
Definitions
- the present disclosure generally relates to computing. More particularly, the present disclosure relates to systems and methods for normalizing and optimizing messaging flows and virtual programming in a microservice architecture.
- a Service-Oriented Architecture (SOA) is an approach in software design in which application components provide services to other components via a communications protocol.
- the principles of service-orientation are independent of any vendor, product, or technology.
- a service is a self-contained unit of functionality, and services can be combined to provide the functionality of a large software application.
- a processing device can run any number of services, and each service is built in a way that ensures that the service can exchange information with any other service.
- Microservices are a variant of SOA used to build distributed software systems. Similar to SOA, services in a Microservice Architecture (MSA) are processes that communicate with each other over the network in order to fulfill an objective, and these services use technology-agnostic protocols.
- a distributed software system that uses services is a network element in a telecommunications network, e.g., an optical network element, router, switch, etc.
- a non-transitory computer-readable medium includes instructions that, when executed, cause a processor to perform the steps of, in a distributed system with a microservice architecture having a framework supporting a messaging layer between client applications and server-side handlers, receiving a message by a server-side handler in the framework with the message one of blocking and non-blocking from a client application; handling the message by the server-side handler as one of blocking and non-blocking selected independent of a designation by the client application since the framework abstracts the messaging layer from the client application; and providing a response by the server-side handler to the client application.
- when the client application selects blocking for the message and the server-side handler also selects blocking for the message, no abstraction is required by the framework.
- the handling can include providing an initial response to the client application and forking a new process by the framework that calls a non-blocking handler; and correlating the message with the new process with an identifier, wherein the response is provided with the identifier.
- the handling can include utilizing a timer and calling a non-blocking handler; and waiting on a resource from the non-blocking handler until expiration of the timer, wherein the response is provided based on one of the resource and the expiration of the timer.
- the handling can include providing an initial response to the client application and forking a new process by the framework that calls a blocking handler, wherein the response is provided with an identifier based on a resource from the blocking handler.
- the messaging layer only supports blocking such that the server-side handler selects blocking regardless of a designation by the client application.
- the messaging layer only supports non-blocking such that the server-side handler selects non-blocking regardless of a designation by the client application.
- in another embodiment, an apparatus includes a processor and memory including instructions that, when executed, cause the processor to execute a server-side handler in a framework supporting a messaging layer between client applications and server-side handlers in a distributed system with a microservice architecture, wherein the server-side handler is configured to receive a message by a server-side handler in the framework with the message one of blocking and non-blocking from a client application, handle the message by the server-side handler as one of blocking and non-blocking selected independent of a designation by the client application since the framework abstracts the messaging layer from the client application, and provide a response by the server-side handler to the client application.
- a computer-implemented method includes, in a distributed system with a microservice architecture having a framework supporting a messaging layer between client applications and server-side handlers, receiving a message by a server-side handler in the framework with the message one of blocking and non-blocking from a client application; handling the message by the server-side handler as one of blocking and non-blocking selected independent of a designation by the client application since the framework abstracts the messaging layer from the client application; and providing a response by the server-side handler to the client application.
- a non-transitory computer-readable medium includes instructions that, when executed, cause a processor to perform the steps of, in a distributed system with a microservice architecture having a plurality of services and a messaging layer for communication therebetween, receiving messages from a first service to a second service in the messaging layer; queuing responses from the messages; and utilizing one or more bulk messaging techniques to send the responses back to the first service from the second service.
- the instructions that, when executed, can further cause the processor to perform the steps of maintaining statistics related to the one or more bulk messaging techniques; and automatically determining which of the one or more bulk messaging techniques to use based on the statistics, to minimize latency of the messaging layer.
- the one or more bulk messaging techniques can include any of time window-based bulking, counter-based bulking, size-based bulking, and transaction-based bulking.
- the one or more bulk messaging techniques can include multiple bulk messaging techniques, selected to minimize latency of the messaging layer.
- the one or more bulk messaging techniques can include time window-based bulking where the queuing is over a predetermined time window.
- the one or more bulk messaging techniques can include counter-based bulking where the queuing is based on a counter.
- the one or more bulk messaging techniques can include size-based bulking where the queuing is based on a size of each response.
- the one or more bulk messaging techniques can include transaction-based bulking where the queuing is based on a transaction tag.
- the first service can be configured to provide information in one or more of the messages related to the one or more bulk messaging techniques.
- an apparatus includes a processor and memory including instructions that, when executed, cause the processor to execute a messaging layer for communication between a plurality of services in a distributed system with a microservice architecture, wherein the messaging layer is configured to receive messages from a first service to a second service in the messaging layer, queue responses from the messages, and utilize one or more bulk messaging techniques to send the responses back to the first service from the second service.
- a computer-implemented method includes, in a distributed system with a microservice architecture having a plurality of services and a messaging layer for communication therebetween, receiving messages from a first service to a second service in the messaging layer; queuing responses from the messages; and utilizing one or more bulk messaging techniques to send the responses back to the first service from the second service.
- a non-transitory computer-readable medium includes instructions that, when executed, cause a processor to perform the steps of, in a distributed system with a microservice architecture having a plurality of services and messaging therebetween, creating programmable stacks of sessions, wherein each session stack is thread-specific; creating programmable stacks of descriptors, wherein each descriptor stack is specific to a session; and passing the programmable stacks of sessions and the programmable stacks of descriptors to one or more services, including across messaging and processor boundaries.
- the programmable stacks of sessions and the programmable stacks of descriptors can be utilized for any of Transactional data, Return Codes, Asynchronous messaging, and streaming.
- the programmable stacks of sessions can be virtual tasks that are created at runtime.
- the programmable stacks of descriptors can be virtual stacks that are created at runtime.
- the programmable stacks of sessions and the programmable stacks of descriptors can be schema driven.
- the programmable stacks of sessions can be automatically created and cleaned up.
- an apparatus includes a processor and memory including instructions that, when executed, cause the processor to execute a distributed system with a microservice architecture having a plurality of services and messaging therebetween, wherein the distributed system is configured to create programmable stacks of sessions, wherein each session stack is thread-specific, create programmable stacks of descriptors, wherein each descriptor stack is specific to a session, and pass the programmable stacks of sessions and the programmable stacks of descriptors to one or more services, including across messaging and processor boundaries.
- a computer-implemented method includes, in a distributed system with a microservice architecture having a plurality of services and messaging therebetween, creating programmable stacks of sessions, wherein each session stack is thread-specific; creating programmable stacks of descriptors, wherein each descriptor stack is specific to a session; and passing the programmable stacks of sessions and the programmable stacks of descriptors to one or more services, including across messaging and processor boundaries.
- FIG. 1 is a block diagram of message flow abstraction between the server-side handlers and the client application via a framework;
- FIG. 2 is a block diagram of message flow abstraction between the server-side handler and the client application via the framework for client blocking and message blocking;
- FIG. 3 is a block diagram of message flow abstraction between the server-side handler and the client application via the framework for client non-blocking and messaging non-blocking;
- FIG. 4 is a block diagram of message flow abstraction between the server-side handler and the client application via the framework for client blocking and messaging non-blocking;
- FIG. 5 is a block diagram of message flow abstraction between the server-side handler and the client application via the framework for client non-blocking and messaging blocking;
- FIG. 6 is a flowchart of a process for normalizing message flows in a Microservice Architecture
- FIG. 7 is a block diagram of a transport layer for bulk messaging
- FIG. 8 is a block diagram of a framework that can exist at a layer between the transport layer and applications;
- FIGS. 9, 10, and 11 are graphs of performance of bulk messaging with different message latency values
- FIG. 12 is a flowchart of a process for bulk messaging in a Microservice Architecture
- FIG. 13 is a block diagram of a distributed system having messaging across microservice boundaries
- FIG. 14 is a diagram illustrating programming overhead and the cost of recursion with stack-oriented programming
- FIG. 15 is a block diagram of a runtime diagram of virtual tasks and virtual stacks
- FIG. 16 is a block diagram of distributed architecture flows utilizing virtual tasks and virtual stacks
- FIG. 17 is a diagram of an example session Application Programming Interface (API) for the virtual tasks
- FIG. 18 is a diagram of an example descriptor API for the virtual stacks
- FIG. 19 is a diagram of example code utilizing the virtual tasks and virtual stacks;
- FIG. 20 is a diagram of example recursive cluster domains for a use case of virtual stacks and virtual tasks;
- FIG. 21 is a flowchart of a process for virtual tasks and virtual stacks.
- FIG. 22 is a block diagram of processing hardware.
- the present disclosure relates to systems and methods for normalizing and optimizing messaging flows and virtual programming in a microservice architecture.
- the present disclosure provides frameworks to be constructed in which messaging layers are completely abstracted from client applications and server-side handlers. Blocking and non-blocking behaviors normally drive significant design activity at the application layer. When the messaging layer only supports one messaging flow, this can drive unwanted impacts on application design. For example, if a messaging layer only supports blocking calls, all management of non-blocking behavior and parallelism must be pushed to every application that desires it. If a messaging layer only supports non-blocking calls, all simplification and correlation of messaging are pushed to every application that desires a simpler blocking model. Seamlessly moving between blocking and non-blocking behavior would otherwise impose a tax that is not justifiable to application designers. Moving this abstraction into the framework allows for full flexibility and design evolvability without changing any application-level coding or messaging layer constructs as the system evolves.
- the present disclosure provides the ability to bulk and coalesce messages in a framework, independent of service or transport protocol. This allows for more efficient mechanisms for transport. This opens the possibility of machine learning or tunable settings on a per-application-layer or per-transport-layer basis, without needing to change applications or messaging protocols. This allows microservices to participate in a disaggregated system without exposing details of the messaging layers to the applications, and still obtain the benefits of bulk messaging to reduce chattiness and latency in messaging between services. This also reduces the development cost to application designers and allows tweaking and enhancements in a base layer to automatically be extended to all services that use the framework.
- virtual tasks and virtual task-stacks along with virtual stacks provide ideal runtime polymorphism without programming overhead.
- this paradigm can span across messaging/processor boundaries.
- microservices, or simply services, are software executed on a processing device. Services are fine-grained, and the protocols are lightweight. As services are fine-grained, each service is a small decomposition of a larger, distributed system.
- a framework is an abstraction in which software providing functionality can be selectively modified by additional code to provide application-specific software (e.g., a client application or “app”).
- a framework includes software code that is executed on processing hardware specifically for interaction between client applications and services.
- a distributed system can include a network element which has multiple services that operate together.
- the distributed system can be any type of system with multiple services.
- a distributed system may be simply referred to as a system.
- the system includes processing hardware for executing software code.
- a client application is software code executed on processing hardware.
- the client application can be a service sending a message to another service.
- the client application can also be a separate application interacting with a distributed system, including various services.
- a server-side handler is software code executed on processing hardware.
- the server-side handler enables communication between the client application and a server.
- When the framework is responsible for selecting the protocol and messaging layer used between services, some characteristics of the messaging layer can be easily negotiated and handled by the framework. These include the blocking and non-blocking messaging flows described below.
- FIGS. 1 - 5 are block diagrams of the functionality of a framework 10 for interaction between server-side handlers 12 and client applications 14.
- the framework 10 includes a messaging layer for communication between the services and the client applications 14.
- the framework 10 not only hides the underlying nature of the messaging layer from the server-side handlers 12 but also allows the server-side handlers 12 which require a certain behavior to have this requirement met by the framework 10 even if the selected messaging layer does not inherently behave this way. This leads to a wider range of protocols that can be supported, a wider range of service designs that can be accommodated, and a more natural progression of designs from simple to complex that do not require rewriting application level software as messaging flow patterns change.
- the main types of messaging flows of interest in the framework 10 are blocking and non-blocking.
- the client (or caller) application 14 will send a message and wait for the result of the message to be returned from the server before proceeding. Error cases can occur in which the message cannot be queued or cannot be sent to the remote end, and these errors can qualify as a type of response to the client application 14, but the client application 14 will not proceed in its flow until the server has responded with either a failure or a response to the message itself.
- Hypertext Transfer Protocol (HTTP) uses this exclusively as a messaging flow. Parallelism with blocking messages is handled by spawning multiple threads and having each thread handle a request and a response. This requires specific programming in the client application 14 to manage the threads and aggregate responses.
- Blocking messaging does not allow the client application 14 to do additional work while the response is pending, which raises scalability concerns. However, blocking messaging guarantees ordered processing of messages since another message cannot be sent in the same thread until the response from the previous message has been processed.
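- To make the threading cost concrete, the following is a minimal C++ sketch (not from the patent) of the thread-per-request parallelism just described; sendBlocking() is a hypothetical stub standing in for a real blocking call such as an HTTP request:

```cpp
#include <future>
#include <string>
#include <vector>

// Stand-in for a real blocking messaging call: the calling thread does
// not proceed until the server responds (or a failure is reported).
std::string sendBlocking(const std::string& request) {
    return "response-to-" + request;  // stub response
}

// With a blocking-only messaging layer, parallelism is the client's
// problem: one thread per outstanding request plus response aggregation.
std::vector<std::string> fetchAll(const std::vector<std::string>& requests) {
    std::vector<std::future<std::string>> pending;
    pending.reserve(requests.size());
    for (const auto& req : requests)
        pending.push_back(std::async(std::launch::async, sendBlocking, req));

    std::vector<std::string> responses;
    responses.reserve(pending.size());
    for (auto& f : pending)
        responses.push_back(f.get());  // blocks until that response arrives
    return responses;
}

int main() { auto r = fetchAll({"a", "b", "c"}); }
```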
- the client application 14 will send a message and (may) wait for basic acknowledgment from the sending Application Programming Interface (API) that the request has been queued or handled.
- This response can come from the local messaging layer ("message queued for send"), or from the server ("message received"), but the processing and actual response to the message is not sent immediately. Instead, the response (or responses) will be sent asynchronously from the server-side handler 12 as it is processed.
- correlation tag(s) are a unique tag attached by the messaging layer that can be used to correlate response(s) to the original sender. This can be added by the client application 14 (client tag) if the client application 14 has a threading model in which a common thread can handle responses for many senders.
- the messaging layer may also add a tag (messaging tag) to simply correlate a response to the appropriate message and to find a callback or function to invoke to handle the processing of the response.
- the messaging layer needs to invoke a receiver function to handle the response.
- the receiver data can be embedded in the message itself, but this is unlikely since it is data the server does not need to know about. Normally, the receiver data (callback function, signal, event, queue id, etc.) is registered in advance with the messaging system or is provided at the time the message is sent.
- the timeout information may also need to be provided in case a response is not processed by a certain timeout.
- the messaging layer will then call the receiver function with an error code that indicates the failure to receive a response. Any incoming response for this message after this timeout has occurred will be discarded.
- the criticality can be high or low priority, and, for retries, in case of a failure, the client application 14 can choose to retry the message a certain number of times before reporting a failure. Normally, a client application 14 must know in advance what type of messaging will be invoked when a request is made since the data provided in either case is very different.
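- The bookkeeping described above (messaging tags, registered receiver functions, timeouts, and discarding late responses) might look like the following illustrative C++ sketch; the NonBlockingSender class, its method names, and the error codes are assumptions, not the patent's actual interface:

```cpp
#include <chrono>
#include <cstdint>
#include <functional>
#include <mutex>
#include <string>
#include <unordered_map>

// Receiver data registered when a non-blocking message is sent: the
// callback to invoke and a deadline after which the message times out.
struct PendingRequest {
    std::function<void(int rc, const std::string& body)> onResponse;
    std::chrono::steady_clock::time_point deadline;
};

class NonBlockingSender {
public:
    // Returns the messaging tag used to correlate the response.
    uint64_t send(const std::string& msg,
                  std::function<void(int, const std::string&)> cb,
                  std::chrono::milliseconds timeout) {
        std::lock_guard<std::mutex> lk(mtx_);
        uint64_t tag = nextTag_++;
        pending_[tag] = {std::move(cb),
                         std::chrono::steady_clock::now() + timeout};
        // enqueue(msg, tag);  // hand off to the transport here
        return tag;
    }

    // Called by the messaging layer when a response with `tag` arrives.
    void onResponse(uint64_t tag, const std::string& body) {
        std::function<void(int, const std::string&)> cb;
        {
            std::lock_guard<std::mutex> lk(mtx_);
            auto it = pending_.find(tag);
            if (it == pending_.end()) return;  // late response: discard
            cb = std::move(it->second.onResponse);
            pending_.erase(it);
        }
        cb(0, body);
    }

    // Called periodically; expires requests whose deadline has passed.
    // (A production version would invoke callbacks outside the lock.)
    void sweepTimeouts() {
        auto now = std::chrono::steady_clock::now();
        std::lock_guard<std::mutex> lk(mtx_);
        for (auto it = pending_.begin(); it != pending_.end();) {
            if (it->second.deadline <= now) {
                it->second.onResponse(-1, "timeout");  // error code to receiver
                it = pending_.erase(it);
            } else {
                ++it;
            }
        }
    }

private:
    std::mutex mtx_;
    uint64_t nextTag_ = 1;
    std::unordered_map<uint64_t, PendingRequest> pending_;
};

int main() {
    NonBlockingSender s;
    uint64_t tag = s.send("get:/objects/7",
                          [](int, const std::string&) { /* receiver */ },
                          std::chrono::milliseconds(500));
    s.onResponse(tag, "payload");  // messaging layer delivers the response
}
```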
- FIG. 1 is a block diagram of message flow abstraction between the server-side handlers 12 and the client application 14 via the framework 10.
- the framework 10 abstracts away the details of a messaging layer from the client applications 14, supports both blocking and non-blocking messaging flows at the messaging layer, and serves client applications 14 that can request both blocking and non-blocking messaging.
- the framework 10 includes the messaging layer.
- the framework 10 may utilize a Data-Driven Framework (DDF).
- in FIG. 1, two example client applications 14 are illustrated, one for a blocking message request - getObject() and one for a non-blocking message request - getObject(refId, clientCallback).
- the client applications 14 can specify in attributes whether getObject can block or not. If not, a callback and refId must be provided.
- the server-side handlers 12 can specify binding in handlers (DDFHandler) whether they are blocking or not, i.e., bind(&blockingDDFHandler, BLOCK) or bind(&nonBlockingDDFHandler, NON_BLOCK).
- DDF YANG (Yet Another Next Generation) can use this flag to determine how to invoke.
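- A minimal C++ sketch of what the FIG. 1 declarations could look like, assuming a simple request/response handler signature; the Mode enum, the DDFHandler typedef, and the registry are illustrative, with only the handler names and BLOCK/NON_BLOCK flags taken from the text above:

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

enum class Mode { BLOCK, NON_BLOCK };

// Server-side handler type; the real DDFHandler signature is not given
// in this text, so a simple request->response function is assumed.
using DDFHandler = std::function<std::string(const std::string&)>;

struct Binding { DDFHandler handler; Mode mode; };
static std::vector<Binding> g_bindings;

// Handlers declare at bind time whether they behave as blocking or
// non-blocking; the framework uses this flag to decide how to invoke them.
void bind(DDFHandler h, Mode m) { g_bindings.push_back({std::move(h), m}); }

std::string blockingDDFHandler(const std::string& req) { return "obj:" + req; }
std::string nonBlockingDDFHandler(const std::string& req) { return "obj:" + req; }

int main() {
    bind(blockingDDFHandler, Mode::BLOCK);         // cf. bind(&blockingDDFHandler, BLOCK)
    bind(nonBlockingDDFHandler, Mode::NON_BLOCK);  // cf. bind(&nonBlockingDDFHandler, NON_BLOCK)
    for (const auto& b : g_bindings)
        std::cout << b.handler("id-1")
                  << (b.mode == Mode::BLOCK ? " [BLOCK]\n" : " [NON_BLOCK]\n");
}
```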
- FIG. 2 is a block diagram of message flow abstraction between the server-side handler 12 and the client application 14 via the framework 10 for client blocking and message blocking.
- when the client application 14 requires a blocking message, and this flow aligns with the messaging layer, there is no abstraction needed, i.e., direct handler invocation in the client thread context.
- a blocking call from the client application 14 will be sent directly to the messaging layer where it will block, and the response will traverse the entire path to the client application 14 when it arrives.
- FIG. 3 is a block diagram of message flow abstraction between the server-side handler 12 and the client application 14 via the framework 10 for client non-blocking and messaging non-blocking.
- when the client application 14 requires a non-blocking message, and this is what the messaging layer provides, some level of correlation between the client application 14 and the messaging layer is needed.
- a non-blocking call from the client application 14 will be sent directly to the messaging layer, and the initial response will be sent back to the sender.
- the receiver information from the client application 14 will need to be stored internally and correlated to the asynchronous message sent at the messaging layer.
- this correlation is used to find the receiver and invoke it.
- Different timeout and error handling requirements between the client application 14 and messaging layer may also need to be managed.
- the message flow in FIG. 3 includes the client application 14 requesting a non-blocking message (step 20-1); the framework 10 forks a new process (step 20-2); the client thread returns (step 20-3); the forked process calls a non-blocking handler (step 20-4); the forked process waits on the resource (step 20-5); the resource is unlocked (e.g., by ddfCallback) (step 20-6); and a client callback is invoked (step 20-7).
- FIG. 4 is a block diagram of message flow abstraction between the server-side handler 12 and the client application 14 via the framework 10 for client blocking and messaging non-blocking.
- the goal of this abstraction is to make an internal non-blocking call look like a blocking call to the client application 14. From a threading model, the client application 14 must not return until the response has arrived.
- the message flow in FIG. 4 includes the client application requesting a blocking message (step 22-1); the framework 10 starts a timeout timer, caches client context, and calls non-blocking handler in a client thread (step 22-2); the framework 10 waits on the resource (step 22-3); the response thread calls ddfCallback which unblocks the caller or if the timer expires, the client context is cleaned up, and the caller is unblocked (step 22-4); and the client thread returns (step 22-5).
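- The FIG. 4 flow (client blocking over messaging non-blocking) can be sketched as follows; the waitable-resource pattern with a condition variable and timeout is an illustrative assumption about how steps 22-2 through 22-5 could be realized, not the patent's actual implementation:

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <thread>

// Waitable "resource" the client thread blocks on (FIG. 4, steps 22-2/22-3).
struct WaitContext {
    std::mutex m;
    std::condition_variable cv;
    std::optional<std::string> result;
    bool done = false;
};

// Makes a non-blocking handler look blocking to the caller: the handler is
// invoked in the client thread with a callback (the ddfCallback role) that
// unblocks the waiter; a timeout unblocks the caller with no result.
std::optional<std::string> callBlockingOverNonBlocking(
    const std::function<void(std::function<void(std::string)>)>& nonBlockingHandler,
    std::chrono::milliseconds timeout) {
    auto ctx = std::make_shared<WaitContext>();
    nonBlockingHandler([ctx](std::string response) {
        std::lock_guard<std::mutex> lk(ctx->m);
        ctx->result = std::move(response);
        ctx->done = true;
        ctx->cv.notify_one();
    });
    std::unique_lock<std::mutex> lk(ctx->m);
    if (!ctx->cv.wait_for(lk, timeout, [&] { return ctx->done; }))
        return std::nullopt;  // timer expired; shared ctx stays valid for a late response
    return ctx->result;
}

int main() {
    // Toy non-blocking handler: responds from another thread after 10 ms.
    auto handler = [](std::function<void(std::string)> cb) {
        std::thread([cb = std::move(cb)] {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            cb("payload");
        }).detach();
    };
    auto r = callBlockingOverNonBlocking(handler, std::chrono::milliseconds(100));
}
```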
- FIG. 5 is a block diagram of message flow abstraction between the server-side handler 12 and the client application 14 via the framework 10 for client non-blocking and messaging blocking.
- when the client application 14 requests a non-blocking call, and the messaging layer only supports a blocking call, internal threading is needed to invoke the request.
- a client request is received with the non-blocking metadata, and a local thread is used with this data to handle the message request and wait for the response from the server. If a timeout occurs before the blocking call can return, an error is sent to the client application 14, and the thread may be destroyed or returned to a pool.
- the client receiver function is invoked from the internal thread with the data from the response and the non-blocking metadata provided by the client application 14.
- the message flow in FIG. 5 includes the client application 14 sending a non-blocking message (step 24-1); the framework 10 forks a new process (step 24-2); the client thread returns (step 24-3); the forked process calls a blocking handler (step 24-4); and a client callback is invoked (step 24-5).
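- The FIG. 5 flow (client non-blocking over messaging blocking) reduces to forking an internal thread for the blocking call; the following C++ sketch is illustrative and not the patent's actual implementation:

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <thread>

// FIG. 5 abstraction: satisfy a non-blocking client request over a
// blocking-only messaging layer by forking an internal thread that makes
// the blocking call and then invokes the client's receiver function.
void callNonBlockingOverBlocking(std::function<std::string()> blockingHandler,
                                 std::function<void(std::string)> clientCallback) {
    std::thread([h = std::move(blockingHandler), cb = std::move(clientCallback)] {
        cb(h());  // the blocking call happens here, off the client thread
    }).detach();
    // The client thread returns immediately (step 24-3).
}

int main() {
    callNonBlockingOverBlocking(
        [] { return std::string("response"); },            // stands in for a blocking call
        [](std::string r) { /* client callback (step 24-5) */ });
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // let the callback run
}
```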
- FIG. 6 is a flowchart of a process 30 for normalizing message flows in a Microservice Architecture.
- the process 30 is computer-implemented and includes, in a distributed system with a microservice architecture having a framework supporting a messaging layer between client applications and server-side handlers, receiving a message by a server-side handler in the framework with the message one of blocking and non-blocking from a client application (step 32); handling the message by the server-side handler as one of blocking and non-blocking selected independent of a designation by the client application since the framework abstracts the messaging layer from the client application (step 34); and providing a response by the server-side handler to the client application (step 36).
- the handling can include providing an initial response to the client application and forking a new process by the framework that calls a non-blocking handler; and correlating the message with the new process with an identifier, wherein the response is provided with the identifier.
- the handling can include utilizing a timer and calling a non-blocking handler; and waiting on a resource from the non-blocking handler until expiration of the timer, wherein the response is provided based on one of the resource and the expiration of the timer.
- the handling can include providing an initial response to the client application and forking a new process by the framework that calls a blocking handler, wherein the response is provided with an identifier based on a resource from the blocking handler.
- the messaging layer can one of i) only support blocking such that the server-side handler selects blocking regardless of a designation by the client application, and ii) only support non-blocking such that the server-side handler selects non-blocking regardless of a designation by the client application.
- § 3.0 Reducing and optimizing message flows in a Microservice Architecture. Again, in a distributed microservice architecture, many services run and are decoupled from one another. Data ownership is distributed, and the data that one service needs to function may exist in many other services. This may require frequent messaging to determine the current operational state and/or configuration of the other relevant services in the deployment. Even within a service, many resources may exist, and the service may have independent controllers for each resource, each making their own queries to many other services.
- the cost of messaging can be threefold: first, an encoding cost (how much processing does it take to encode and decode a message); second, a bandwidth cost (how much data needs to be sent); and third, a latency cost (what is the delay experienced with the transport of the message itself).
- bundling or bulking of messages can greatly reduce this cost, especially if the messaging protocol is blocking and messages are sent serially (the next cannot be sent until the previous message is processed).
- the present disclosure describes a framework that can automatically bulk messages between two endpoints together to save on the latency cost of the messaging layer.
- Control applications may be requesting granular data from another service. Many control applications running at once may be requesting the same data from another service, and if the architecture can detect similar types of flows and perform bulking, the system efficiency may improve.
- for time window-based bulking, if a service has many requests being sent to another service, sending the data can be held off to allow for more requests to be made, bulking the requests into a larger message to send.
- a time window can be specified that places an upper bound on the delay incurred by the time window and when that time-period expires, all messages that have been bulked up to that point can be sent in the same request.
- sending the data can be held off based on a message counter.
- a message counter can be provided that places an upper bound on the number of messages to be bundled together, and when that counter level is met, all messages that have been bulked up to that point can be sent in the same request.
- transport layers may have a message size that is most efficient since messages below a certain size may more easily fit into a transport window or avoid the need for segmentation and reassembly.
- a message size limit can be provided and tracked for a given transport, holding off sending the bulked message as long as its size is below that limit.
- an application may have a higher-level view of the set of messages associated together in one transaction.
- a higher-level controller may have knowledge of a control loop iteration, even if the lower levels do not understand the context that the messages are being sent under. If there is a tag of some sort that is associated with messages that are related in one group, then messages related to that tag can be bulked and sent explicitly when the complete message has been assembled, and the higher-level application knows that all requests have been performed.
- the aforementioned bulk messaging techniques may be implemented individually or may be implemented in a way that allows the techniques to be combined.
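- The following C++ sketch combines the first three techniques (time window, counter, and size) in one bulker, as suggested above; the class name, thresholds, and enqueue/flush interface are assumptions, not the patent's actual design:

```cpp
#include <chrono>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Combined bulking sketch: a batch is flushed when a time window expires,
// a message count is reached, or a byte budget is hit, whichever comes first.
class MessageBulker {
public:
    MessageBulker(std::chrono::milliseconds window, std::size_t maxCount,
                  std::size_t maxBytes)
        : window_(window), maxCount_(maxCount), maxBytes_(maxBytes) {}

    // Returns a non-empty batch when a threshold trips. A production
    // version also needs a timer so the window can expire with no new
    // messages arriving; this sketch only checks on enqueue.
    std::vector<std::string> enqueue(std::string msg) {
        if (pending_.empty())
            windowStart_ = std::chrono::steady_clock::now();
        pendingBytes_ += msg.size();
        pending_.push_back(std::move(msg));

        bool expired = std::chrono::steady_clock::now() - windowStart_ >= window_;
        if (expired || pending_.size() >= maxCount_ || pendingBytes_ >= maxBytes_)
            return flush();
        return {};
    }

    std::vector<std::string> flush() {
        pendingBytes_ = 0;
        return std::exchange(pending_, {});
    }

private:
    std::chrono::milliseconds window_;
    std::size_t maxCount_, maxBytes_;
    std::vector<std::string> pending_;
    std::size_t pendingBytes_ = 0;
    std::chrono::steady_clock::time_point windowStart_{};
};

int main() {
    MessageBulker bulker(std::chrono::milliseconds(15), 300, 64 * 1024);
    auto batch = bulker.enqueue("get:/interfaces/1");  // empty until a threshold trips
}
```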
- the thresholds and limits in these techniques may also benefit from machine learning or tuning to allow for the system to dynamically respond.
- the system can “learn” to automatically determine which of the bulk messaging techniques to use given various circumstances.
- the system can keep statistics related to savings (in latency, encoding, and bandwidth costs), enabling the system to train itself on where to use each of the techniques.
- Limits can also be application-specific. Some applications may tolerate higher delays, and others may need each message to be as fast as possible.
- the client application 14 can include information on bulking options. This information may specify to send now (no bulking), wait up to X milliseconds for bulking, always bulk with others of the same session/tag, etc.
- the aspect of bulk messaging with others of the same session/tag is similar to a transaction model for sets.
- the client application 14 can have a session/transaction ID/tag that is inserted into all requests.
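- Per-message bulking options like those described above might be carried in a small structure such as the following illustrative sketch (all field names are assumptions, not the patent's schema):

```cpp
#include <chrono>
#include <optional>
#include <string>

// Illustrative per-message bulking options a client might attach.
struct BulkingOptions {
    bool sendNow = false;                                // bypass bulking entirely
    std::optional<std::chrono::milliseconds> maxDelay;   // wait up to X ms for bulking
    std::optional<std::string> sessionTag;               // always bulk with same session/tag
};

int main() {
    BulkingOptions opts;
    opts.maxDelay = std::chrono::milliseconds(15);  // willing to wait 15 ms
    opts.sessionTag = "txn-42";                     // bulk with the same transaction
}
```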
- FIG. 7 is a block diagram of a transport layer 40 for bulk messaging.
- FIG. 8 is a block diagram of a framework 50 that can exist at a layer between the transport layer 40 and applications 14.
- because the messaging layer is part of the framework 50 and can exist at a layer between the client applications 14 and the transport layer 40, much more value can be extracted from bundling at this layer.
- the value of this middleware of the framework 50 is that it can understand the services involved in the messages and can understand latency requirements and typical message flows per service. Further, the framework 50 can understand the specific content of the messages, to group all messages of one type into a bulked message ("get" messages) and allow others to flow as soon as possible ("RPC" or "notify" messages).
- the framework 50 can support bulking independent of the transport protocol; since the bulking is done in a layer above the transport layer 40, it can be implemented once and used by all transport layers 40. Finally, the framework 50 can support "coalescing" of messages. Here, frequent messages can be throttled and summarized to the latest state periodically, and multiple "set" or "get" actions can be combined into one action, not just grouped into the same message.
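- Coalescing differs from bulking in that actions are merged, not just grouped; the following sketch summarizes repeated "set" actions on the same key down to the latest state (the Coalescer class and key/value types are illustrative assumptions):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Coalescing sketch: frequent "set" actions against the same target are
// summarized so only the latest value per key goes into the bulk message.
class Coalescer {
public:
    void set(const std::string& key, std::string value) {
        latest_[key] = std::move(value);  // a newer set overwrites an older one
    }

    // Drain the summarized state as one bulk message payload.
    std::vector<std::pair<std::string, std::string>> drain() {
        std::vector<std::pair<std::string, std::string>> out(latest_.begin(),
                                                             latest_.end());
        latest_.clear();
        return out;
    }

private:
    std::map<std::string, std::string> latest_;
};

int main() {
    Coalescer c;
    c.set("port-1/admin-state", "down");
    c.set("port-1/admin-state", "up");   // supersedes the previous set
    auto bulk = c.drain();               // one entry: latest state only
}
```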
- FIGS. 9, 10, and 11 are graphs of performance of bulk messaging with different message latency values.
- a bulker will wait 15 ms for messages to accumulate before the bulked message is sent. This assumes that the application 14 can enqueue a message every 50 µs, so, on average, 300 messages are enqueued into one message. It also assumes a latency overhead that grows with the size of the message. As is seen in FIG. 9, at this latency it is generally faster for bulking not to be used for all messages.
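- The 300-message average is simple arithmetic from the stated parameters:

$$\frac{15\,\text{ms window}}{50\,\mu\text{s per message}} = \frac{15000\,\mu\text{s}}{50\,\mu\text{s}} = 300\ \text{messages per bulked send}$$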
- FIG. 12 is a flowchart of a process 100 for bulk messaging in a Microservice Architecture.
- the process 100 includes, in a distributed system with a microservice architecture having a plurality of services and a messaging layer for communication therebetween, receiving messages from a first service to a second service in the messaging layer (step 102); queuing responses from the messages (step 104); and utilizing one or more bulk messaging techniques to send the responses back to the first service from the second service (step 106).
- the process 100 can also include maintaining statistics related to the one or more bulk messaging techniques; and automatically determining which of the one or more bulk messaging techniques to use based on the statistics, to minimize the latency of the messaging layer.
- the one or more bulk messaging techniques can include any of time window-based bulking, counter-based bulking, size-based bulking, and transaction-based bulking.
- the one or more bulk messaging techniques can include multiple bulk messaging techniques, selected to minimize the latency of the messaging layer.
- the one or more bulk messaging techniques can include time window-based bulking where the queuing is over a predetermined time window.
- the one or more bulk messaging techniques can include counter-based bulking where the queuing is based on a counter.
- the one or more bulk messaging techniques can include size-based bulking where the queuing is based on the size of each response.
- the one or more bulk messaging techniques can include transaction-based bulking where the queuing is based on a transaction tag.
- the first service can be configured to provide information in one or more of the messages related to the one or more bulk messaging techniques.
- the present disclosure includes a programming mechanism with virtual tasks and virtual stacks, where the system can not only track but also modify, add, remove, and process both data and metadata at runtime without the overhead of changing code interfaces. This can be performed for tasks (the execution flow) and the stack (the data associated with that flow) and can span tasks and processes in a distributed architecture. Also, the use of a virtual stack at runtime means that the true language-oriented APIs (function calls) do not need to change when APIs change and allows prototype and invocation extensions without modifying the core code.
- the present disclosure includes virtual tasks and virtual task-stacks along with virtual stacks to provide ideal runtime polymorphism without programming overhead.
- this approach can span across messaging/processor boundaries.
- FIG. 13 is a block diagram of a distributed system 200 having messaging across microservice boundaries.
- the distributed system 200 requires function interface changes for passed/returned arguments across call stacks in running thread contexts, changes in stack/global data structures which introduce synchronization overheads for re-entrant programming, and added complexity in applications for serialization and deserialization data handlers.
- the potential overhead of programming/interface re-design includes managing return codes/data, stack frame collapse mandates data to be passed up/down the chain, function/structure declarations change with additional data, and module interfaces change if function declarations change.
- programming overhead can be defined as the cost associated with tracking data versus logical flow or interface definitions.
- the programming cost can be defined as the overhead of program maintenance due to the recursive nature in programming for a sub-task/session at compile time.
- FIG. 14 is a diagram illustrating programming overhead and the cost of recursion with stack-oriented programming.
- the questions from FIG. 14 include - what if the functions only implement run-time logic O(N), and what if the functions scope a session/sub-task of run-time logic without worrying about passing data across.
- the programming cost is the overhead of program maintenance due to the recursive nature of programming for a sub-task/session at compile time, i.e., function_N, function_N-1, function_N-2, ...
- Virtual tasks and virtual stacks. The present disclosure utilizes virtual tasks (also referred to as sessions/session stacks) and virtual stacks (also referred to as attribute/descriptor stacks). The following provides definitions used herein:
- FIG. 15 is a block diagram of a runtime diagram of virtual tasks and virtual stacks.
- the data contained in the stacks can be integrated into the native stack of the thread, or logged or discarded, or packaged as opaque data that is passed through to another service that knows how to decode it.
- FIG. 16 is a block diagram of distributed architecture flows utilizing virtual tasks 202 and virtual stacks 204.
- the distributed architecture creates programmable stacks of sessions, wherein each session stack is thread-specific.
- the sessions signify a subtask and add to only run time logic.
- the user interface is simple: push/pop sessions on the fly. All session data persists throughout the recursive flow of a thread context. No locks are needed in the system. All session data can be serialized/deserialized (serdes) without worrying about whether a subtask is supported or not (a data-driven advantage). It does not matter if other services support new sessions.
- the distributed architecture creates programmable stacks of descriptors, wherein each descriptor stack is session-specific.
- the descriptor stack signifies aliased values (pass by reference and values).
- a single value on the descriptor stack can be modified anywhere in thread flow (pass by pointer).
- the user interface is simple: push/pop descriptors on the fly. The entire descriptor stack persists throughout the recursive flow of a thread context. No locks are needed in the system.
- FIG. 17 is a diagram of an example session API for the virtual tasks 202.
- FIG. 18 is a diagram of an example descriptor API for the virtual stacks 204. These programmable stacks can be used in current mechanisms for functions such as transactional data, Return Codes, asynchronous messaging, streaming, etc. Virtual tasks and stacks can be implemented in any high-level language.
- FIG. 19 is a diagram of example code utilizing the virtual tasks and virtual stacks. The greatest flexibility is to just write runtime logic, treating sub-tasks as sessions and descriptor stacks as the workbench. There is no need to modify structures, synchronization, or cleanup (heap/stack) - a session pop cleans up data at runtime (no leaks).
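- The session/descriptor APIs of FIGS. 17-19 are not reproduced in this text, so the following is an illustrative C++ sketch of the pattern: a thread-local session stack whose sessions each carry a descriptor map, so deep call chains can read and modify values without threading them through every function signature. All API names (pushSession, setDescriptor, etc.) are assumptions:

```cpp
#include <any>
#include <cassert>
#include <map>
#include <string>
#include <vector>

struct Session {
    std::string name;
    std::map<std::string, std::any> descriptors;  // aliased values for this session
};

thread_local std::vector<Session> g_sessions;  // one session stack per thread

void pushSession(std::string name) { g_sessions.push_back({std::move(name), {}}); }
void popSession() { g_sessions.pop_back(); }  // cleans up all session data: no leaks

// Descriptors can be read or modified anywhere in the thread's flow
// without passing them through every function signature.
void setDescriptor(const std::string& key, std::any value) {
    g_sessions.back().descriptors[key] = std::move(value);
}
std::any* getDescriptor(const std::string& key) {
    for (auto it = g_sessions.rbegin(); it != g_sessions.rend(); ++it) {
        auto d = it->descriptors.find(key);
        if (d != it->descriptors.end()) return &d->second;
    }
    return nullptr;
}

// Deeply nested logic can read the transaction id without it appearing
// in any intermediate function's parameters.
void leafFunction() {
    if (auto* txn = getDescriptor("txn-id"))
        assert(std::any_cast<int>(*txn) == 42);
}

int main() {
    pushSession("transaction");
    setDescriptor("txn-id", 42);
    leafFunction();
    popSession();  // session data cleaned up automatically
}
```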
- FIG. 20 is a diagram of example recursive cluster domains for a use case of virtual stacks and virtual tasks.
- the left half <CLUSTER DOMAIN ENVELOPE> in FIG. 20 indicates the functional diagram for a transaction which could take advantage of multi-threaded programming. However, writing code this way would be harder than what is shown on the right half.
- the right half in FIG. 20 shows the repetitive logical flow as part of transactions in a distributed architecture. It is very straightforward to see that the logical flow is simply two calls, <MAP ADD> and <MAP LOOKUP>.
- FIG. 21 is a flowchart of a process 250 for virtual tasks and virtual stacks.
- the process 250 includes, in a distributed system with a microservice architecture having a plurality of services and messaging therebetween, creating programmable stacks of sessions, wherein each session stack is thread-specific (step 252); creating programmable stacks of descriptors, wherein each descriptor stack is specific to a session (step 254); and passing the programmable stacks of sessions and the programmable stacks of descriptors to one or more services, including across messaging and processor boundaries (step 256).
- the programmable stacks of sessions and the programmable stacks of descriptors can be utilized for any of Transactional data, Return Codes, Asynchronous messaging, and streaming.
- the programmable stacks of sessions can be virtual tasks that are created at runtime.
- the programmable stacks of descriptors can be virtual stacks that are created at runtime.
- the programmable stacks of sessions and the programmable stacks of descriptors can be schema driven.
- the programmable stacks of sessions can be automatically created and cleaned up.
- FIG. 22 is a block diagram of processing hardware 300.
- the processing hardware 300 can be part of a distributed system, executing a microservices architecture.
- the processing hardware 300 can be used to execute services in a distributed system.
- the processing hardware 300 can include a processor 302, which is a hardware device for executing software instructions.
- the processor 302 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the processing hardware 300, a semiconductor-based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions.
- When the processing hardware 300 is in operation, the processor 302 is configured to execute software stored within the memory 308, to communicate data to and from the memory 308, and to generally control operations of the processing hardware 300 pursuant to the software instructions.
- the processing hardware 300 can also include a network interface 304, a data store 306, memory 308, an I/O interface 310, and the like, all of which are communicatively coupled to one another and to the processor 302.
- the network interface 304 can be used to enable the processing hardware 300 to communicate on a network.
- the network interface 304 can include, for example, an Ethernet card or a wireless local area network (WLAN) card.
- the network interface 304 can include address, control, and/or data connections to enable appropriate communications on the network.
- the data store 306 can be used to store data, such as control plane information, provisioning data, OAM&P data, etc.
- the data store 306 can include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, flash drive, CDROM, and the like), and combinations thereof.
- the data store 306 can incorporate electronic, magnetic, optical, and/or other types of storage media.
- the memory 308 can include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, flash drive, CDROM, etc.), and combinations thereof.
- the memory 308 may incorporate electronic, magnetic, optical, and/or other types of storage media.
- the memory 308 can have a distributed architecture, where various components are situated remotely from one another but may be accessed by the processor 302.
- the I/O interface 310 includes components for the processing hardware 300 to communicate with other devices, such as other processing hardware 300, e.g., via a bus, backplane, midplane, etc.
- some embodiments may include one or more generic or specialized processors such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein.
- such hardware may be referred to as circuitry configured or adapted to, or logic configured or adapted to, perform the described functions.
- some embodiments may include a non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc., each of which may include a processor to perform functions as described and claimed herein.
- Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, and the like.
- software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer And Data Communications (AREA)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/536,458 US11055155B2 (en) | 2019-08-09 | 2019-08-09 | Virtual programming in a microservice architecture |
| US16/536,416 US11169862B2 (en) | 2019-08-09 | 2019-08-09 | Normalizing messaging flows in a microservice architecture |
| US16/536,443 US10891176B1 (en) | 2019-08-09 | 2019-08-09 | Optimizing messaging flows in a microservice architecture |
| PCT/US2020/045332 WO2021030170A1 (en) | 2019-08-09 | 2020-08-07 | Normalizing messaging flows, optimizing messaging flows, and virtual programming in a microservice architecture |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4004732A1 (en) | 2022-06-01 |
Family
ID=72193619
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP20761035.3A Pending EP4004732A1 (en) | 2019-08-09 | 2020-08-07 | Normalizing messaging flows, optimizing messaging flows, and virtual programming in a microservice architecture |
Country Status (2)
| Country | Link |
|---|---|
| EP (1) | EP4004732A1 (en) |
| WO (1) | WO2021030170A1 (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11909574B2 (en) | 2021-06-01 | 2024-02-20 | Ciena Corporation | Grouping configuration-modifying transactions from collaborating agents |
| US11483369B1 (en) | 2021-06-07 | 2022-10-25 | Ciena Corporation | Managing confirmation criteria for requested operations in distributed microservice networks |
| US11561790B2 (en) | 2021-06-21 | 2023-01-24 | Ciena Corporation | Orchestrating multi-level tools for the deployment of a network product |
| US12498994B2 (en) | 2021-07-22 | 2025-12-16 | Ciena Corporation | Coalescing publication events based on subscriber tolerances |
| US12231412B2 (en) | 2023-01-02 | 2025-02-18 | Ciena Corporation | Automatically encrypting sensitive data in a distributed microservice framework |
| US12206601B2 (en) | 2023-04-13 | 2025-01-21 | Ciena Corporation | Backpressure notifications to peers for BGP updates |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA2594082A1 (en) * | 2005-01-06 | 2006-07-13 | Tervela, Inc. | A caching engine in a messaging system |
| US10440128B2 (en) * | 2016-09-20 | 2019-10-08 | Ciena Corporation | Systems and methods for selecting efficient messaging between services |
| US10503568B2 (en) * | 2017-09-27 | 2019-12-10 | Oracle International Corporation | Asynchronous handling of service requests |
- 2020-08-07: PCT/US2020/045332 filed, published as WO2021030170A1 (not active, ceased)
- 2020-08-07: EP20761035.3A filed, published as EP4004732A1 (active, pending)
Non-Patent Citations (1)
| Title |
|---|
| "Microservice Patterns - MEAP v5", 25 October 2017, MANNING PUBLICATIONS, ISBN: 978-1-61729-454-9, article CHRIS RICHARDSON: "Microservice Patterns - MEAP v5", pages: ToC,Ch01 - Ch06, XP055576334 * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021030170A1 (en) | 2021-02-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11169862B2 (en) | Normalizing messaging flows in a microservice architecture | |
| US11055155B2 (en) | Virtual programming in a microservice architecture | |
| US10891176B1 (en) | Optimizing messaging flows in a microservice architecture | |
| WO2021030170A1 (en) | Normalizing messaging flows, optimizing messaging flows, and virtual programming in a microservice architecture | |
| US9553944B2 (en) | Application server platform for telecom-based applications using an actor container | |
| US9787561B2 (en) | System and method for supporting a selection service in a server environment | |
| US7159211B2 (en) | Method for executing a sequential program in parallel with automatic fault tolerance | |
| US6038604A (en) | Method and apparatus for efficient communications using active messages | |
| US20120042327A1 (en) | Method and System for Event-Based Remote Procedure Call Implementation in a Distributed Computing System | |
| US20030182464A1 (en) | Management of message queues | |
| US20070156729A1 (en) | Data structure describing logical data spaces | |
| US6832266B1 (en) | Simplified microkernel application programming interface | |
| US9507637B1 (en) | Computer platform where tasks can optionally share per task resources | |
| US10726047B2 (en) | Early thread return with secondary event writes | |
| CN114168626A (en) | Database operation processing method, device, equipment and medium | |
| CN111949687B (en) | Distributed database architecture based on shared memory and multiple processes and implementation method thereof | |
| US20060075393A1 (en) | Stack marshaler | |
| US6865579B1 (en) | Simplified thread control block design | |
| Blewett et al. | Pro Asynchronous Programming with. NET | |
| CN116010121A (en) | Multi-threaded message data access method and device based on circular linked list | |
| Snyder et al. | Using logical operators as an extended coordination mechanism in Linda | |
| Ertel et al. | A framework for the dynamic evolution of highly-available dataflow programs | |
| CN101295269B (en) | A Method of Component Interaction Synchronization Based on Transaction | |
| WO2014110701A1 (en) | Independent active member and functional active member assembly module and member disassembly method | |
| Andreoli et al. | Inducing Huge Tail Latency on a MongoDB deployment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20220222 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | P01 | Opt-out of the competence of the unified patent court (upc) registered | Effective date: 20230515 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| | 17Q | First examination report despatched | Effective date: 20230622 |