WO1998059519A2 - Asynchronous message processing system and method - Google Patents
- Publication number
- WO1998059519A2 WO1998059519A2 PCT/CA1998/000537 CA9800537W WO9859519A2 WO 1998059519 A2 WO1998059519 A2 WO 1998059519A2 CA 9800537 W CA9800537 W CA 9800537W WO 9859519 A2 WO9859519 A2 WO 9859519A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- application code
- state
- messages
- pid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q11/00—Selecting arrangements for multiplex systems
- H04Q11/04—Selecting arrangements for multiplex systems for time-division multiplexing
- H04Q11/0428—Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
- H04Q11/0478—Provisions for broadband connections
Definitions
- the invention relates to a distributed asynchronous message processing system and method.
- deadlocks can be analysed for only the simplest of systems (a fixed number of finite state machines with very few states and rigid communication patterns) since it depends on the global state of the system.
- Stringent organization of the communication as a client/server hierarchy may prevent deadlocks but forces an often clumsy organization in which active objects (spontaneous message sources) can never communicate directly with each other.
- active objects spontaneous message sources
- the handling of peer protocols and the handling of exception and failure conditions run into problems, as they involve message flow in the opposite direction to the normal pattern.
- the ratio of the message transmission time between distant nodes to the local message processing time is referred to as a "time constant ratio".
- the time constant ratio for the time periods outlined above is roughly 10³ to 10⁴, and is already quite high.
- the invention provides in a network node having a processor, and a memory, an asynchronous message processing method comprising the steps of: maintaining a plurality of processes in the memory, each process consisting of a process state and associated application code, each process being identifiable by a pid (process identifier) ; the node receiving incoming messages; for each incoming message received, the processor determining from a destination pid forming part of the message to which of said plurality of processes the message is addressed, and running the application code associated with the destination pid to handle the message as a function of the destination process's process state and making changes to the destination process's process state if necessary; the application task, while running, generating outgoing messages if necessary; the node transmitting any outgoing messages.
- the invention provides a network node for connection to a digital network comprising: an interface to the digital network for receiving incoming messages and storing them in an input message queue, and for reading outgoing messages from an output message queue and transmitting them; a processor for maintaining a plurality of processes in a memory, each process consisting of a process state and associated application code, each process being identifiable by a pid (process identifier) , and for reading messages one at a time from the input message queue in the order they were received, determining from a destination pid forming part of the message to which of said plurality of processes the message is addressed, and running the application code associated with the destination pid to handle the message as a function of the destination process's process state and making changes to the destination process's process state if necessary and writing an outgoing message to a next available buffer location in the output message queue if necessary.
- the invention provides a digital network comprising a plurality of network nodes, each network node comprising: an interface to the digital network for receiving incoming messages and storing them in an input message queue, and for reading outgoing messages from an output message queue and transmitting them; a processor for maintaining a plurality of processes in a memory, each process consisting of a process state and associated application code, each process being identifiable by a pid (process identifier) , and for reading messages one at a time from the input message queue in the order they were received, determining from a destination pid forming part of the message to which of said plurality of processes the message is addressed, and running the application code associated with the destination pid to handle the message as a function of the destination process's process state and making changes to the destination process's process state if necessary and writing an outgoing message to a next available buffer location in the output message queue if necessary.
- Figure 1 is a block diagram of a simple network
- Figure 2 is a block diagram of one of the nodes in the network of Figure 1 ;
- Figure 3 is an illustration of the overall flow of messages within a node, according to an embodiment of the invention.
- Figure 4 is an illustration of a shared input buffer ring
- Figure 5 is a chart of a representative message syntax
- Figure 6 is a summary of kernel data structures and environment registers
- Figure 7 depicts an undo log (trail) mechanism
- Figure 8 is a flow chart of the kernel loop.
- a network context for implementing the system and methods according to an embodiment of the invention is comprised of a number of processing nodes 10 interconnected by a digital communications network 20.
- the communications network 20 may represent a local area network, a wide area network, or even a global network, for example.
- the network 20 must be capable of sending digital messages, up to some modest maximum size, from any processing node 10 to any other processing node.
- the network 20 may be either a connection oriented or connectionless medium, and should provide an acceptably low rate of message loss, corruption or out of order deliveries.
- Many communication technologies could be used or adapted for use by means of appropriate lower level protocols, but as a representative concrete example we will assume the use of single cell ATM (asynchronous transfer mode).
- the individual nodes 10 minimally consist of a high speed local bus 30, a processor or cpu 32, some random access shared memory 34, and an intelligent communication adaptor 36 which provides access to the communications network 20 (see Figure 1) .
- the communication adaptor 36 and bus 30 are such as to allow direct memory access (DMA) of data between the communication adaptor 36 and message buffers in the shared memory 34, and such memory buffers are visible to the processor 32.
- normal communications between the processor 32 and the communication adaptor 36 are handled through the shared memory 34 only and no interrupts are generated by the communication adaptor 36 except possibly under exception conditions.
- Each processor executes a run-time environment according to an embodiment of the invention and some application code directly, without any standard OS (operating system) intervening. It is noted that a run-time environment may run in a conventional OS context, but this complicates the description considerably.
Overview of Run-time Environment
- processes are invoked by the kernel. Each process has a lifetime from the time it is created by another process until it voluntarily terminates.
- a process here is not a process in the conventional sense of a continuous stream of application code which is run concurrently with other processes by time slicing the processor and saving a stack between the time slices allocated to the particular process.
- a “process” is defined by 1) a process state consisting of a set of state variables stored in shared memory 34 owned by the process which exist from when the process is created until it terminates and 2) a process type defined by the code which defines the behaviour of the process and which operates on the state variables to cause state transitions. To reiterate, the only thing which exists continuously during the life of a process is the process state and the process type definition.
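This transition-driven process model is not spelled out as code in the patent, but it can be sketched roughly as follows; the names `process_t`, `deliver`, and the demo counter handler are illustrative assumptions only:

```c
#include <assert.h>

/* Hypothetical sketch: a "process" is nothing but a block of state
 * variables plus a process type, i.e. a pointer to the handler code
 * that is run to completion for each incoming message. No stack or
 * program counter survives between transitions. */
typedef struct process {
    void *state;                                   /* state variables owned by the process */
    void (*handler)(void *state, const void *msg); /* process type: transition code */
} process_t;

/* One state transition: run the type's handler on the state, to completion. */
static void deliver(process_t *p, const void *msg) {
    p->handler(p->state, msg);
}

/* Tiny demo process type: the state is an int counter, every message bumps it. */
static void count_handler(void *state, const void *msg) {
    (void)msg;
    (*(int *)state)++;
}
static int demo_counter = 0;
static process_t demo = { &demo_counter, count_handler };
```

Between calls to `deliver` the process is simply Inactive: only the state block and the handler pointer persist, so no context switch is ever needed.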
- the kernel is continuous in the sense that it is started and never terminated.
- the kernel controls when application processes are run.
- the kernel calls the application processes directly and thus in this sense is always running.
- the kernel takes messages received by the particular node one at a time, determines for which destination process the message is intended, runs to completion the code identified by the process type of the destination process, possibly resulting in one or more changes to state variables belonging to the process state, and possibly outputting one or more messages to be sent to other processes, and finally returns to the kernel.
- This sequence of steps taken to handle each message is considered to constitute one "state transition", or simply one "transition".
- the kernel does not process the next message received by the particular node until the state transition associated with the previous message is complete. In fact, the kernel does nothing until a return from the application code run for that transition occurs.
- the process is in an "Active" mode.
- the process is in a "Inactive” mode.
- While Inactive, the process state and type continue to exist, but the processor is not running any application code associated with the process and no changes to any state variables belonging to that process occur. Because transition application code runs to completion with no time slicing or intervention by other processes, there is never any need to perform context switches between processes.
- state transitions must be atomic.
- by atomic it is meant that either all of the operations associated with a state transition occur, or none of them occur. Thus if, for some reason, not all of the operations associated with a state transition can be completed, then the effects upon the state of any operations which have been completed must be rolled back.
- the behaviour of each process is uniquely determined by its process state and process type.
- the states of different processes are non-overlapping.
- the process state of a process can only be changed by its receiving and processing a message.
- the state transitions of a process are serialized and form a strict sequence.
- the process exclusively "owns" (and is responsible for) its state data and storage.
- the application code associated with the process type specifies completely the state changes and messages sent for each sort of message received. Returning successfully from this code completes the state transition. Between transitions each process is in Inactive mode waiting for its next message.
- the state storage allocated to the terminating process is returned to the appropriate memory pool, the process type is returned to a default type which causes all messages to be discarded, and the incarnation number is incremented so that any outstanding messages to the terminated process are discarded.
- Both process creation and termination are part of the atomic transitions; in particular, creation of P is part of some atomic transition of its creator A. Direct creation can only be done locally (in the same shared memory) so that to get a remote process created, a process must work indirectly through a preexisting agent which may impose whatever security restrictions may be appropriate.
- Each process once created has a destination pid (process identifier) which is used by other processes to address messages to the process. Destination pids are discussed in detail below.
- An initial connection protocol is needed to make destination pids globally available. Initially the only processes that know the pid of a process P are P itself and its local creator. The normal way to get a pid of a remote process is to send an initial request to some well-known "name server" process, the reply to which will contain the desired pid. This could be for either a pre-existing or a newly created process.
- the process names of well-known name server processes can be handled in two ways which are preferably used in combination: The first is to give distinguished ubiquitous servers each a fixed index (a fixed or no incarnation number) on all nodes. The second is by a generic distributed name service based on having the well-known processes register themselves (the name server itself using the first strategy) .
- Each node has a rotating input buffer ring 50 and a rotating output buffer ring 52, both forming part of shared memory 34 (see Figure 2) .
- the processor 32 runs the kernel 54 which performs a dispatcher function among other functions, and which reads from the input buffer
- the application processes 56 write to the output buffer 52.
- Ordered input streams 58 from all other nodes are merged into a single serial input stream 60; in the case of ATM this merging occurs somewhere in the ATM network itself, but could in other circumstances occur in the adaptor 36 as illustrated.
- Incoming messages are inserted into the next free slot of the rotating input buffer ring 50.
- the kernel 54 asynchronously pulls messages from the input buffer ring 50 and handles them in order of arrival by activating the appropriate application process 56 without the requirement for any scheduling of processes. The activation of the appropriate application process 56 is described in further detail below.
- Outgoing messages go into consecutive slots in the rotating output buffer ring 52, from where they are pulled in order by the adaptor 36 and placed in a single serial output stream 62 which will contain ordered streams 64 to all other nodes. It is possible that the adaptor may reorder messages between various streams, but not within a single stream.
- the rotating input buffer ring 50 and the rotating output buffer ring 52 use a rotating ring mechanism to provide automatic recycling of free buffers without requiring any explicit intervention by the kernel 54.
- the size of the rings allows for asynchronism between input process, code execution, and output process, allowing for fluctuations in transition execution rate.
- the load on the system can be simply characterized by the occupancy of the buffer rings 50,52.
- the rotating input buffer ring 50 is a data structure in shared memory 34 for handling incoming messages from the communications adaptor 36.
- the rotating input buffer ring 50 is preferably a statically configured (e.g. at node initialization time) ring (i.e., a circular chained list) of fixed (maximum) size message buffers.
- the use of a static ring structure avoids the need for dynamic list manipulations during processing, as well as mutual exclusion semaphores, and explicit buffer management recycling.
- buffers may be added to or deleted from the rings dynamically by the processor to adjust for slow changes in occupancy statistics.
- Each individual message buffer contains the address of the next message buffer in the chain, these addresses collectively defining where the ring is stored in memory.
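The statically chained ring can be sketched as follows; the sizes and field names are assumptions, and a real buffer would also carry the control fields of Figure 4:

```c
#include <assert.h>

#define RING_SIZE 4
#define BODY_SIZE 48

/* One fixed (maximum) size message buffer in the ring. */
typedef struct msg_buf {
    struct msg_buf *next;  /* address of the next buffer: the chain defines the ring */
    volatile int status;   /* status word 70: 0 = free, nonzero = valid message */
    char body[BODY_SIZE];  /* message body 72 */
} msg_buf_t;

static msg_buf_t ring[RING_SIZE];

/* Link the buffers into a circular chained list once, at node initialization
 * time; no dynamic list manipulation is needed afterwards. */
static void ring_init(void) {
    for (int i = 0; i < RING_SIZE; i++) {
        ring[i].next = &ring[(i + 1) % RING_SIZE];
        ring[i].status = 0;
    }
}
```

Because the chain is fixed, recycling a buffer is just clearing its status word; no free-list updates or semaphores are involved.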
- each message buffer (excluding the address of the next buffer) is shown in Figure 4.
- Each message buffer has a status word 70 used for synchronization as described below, possibly some additional control information (such as virtual channel identifiers in the case of ATM) , and each message buffer has space to store the body 72 (or content) of a message.
- the specific order and content of the fields in each buffer may be set up to suit any particular messaging protocol/buffering scheme.
- the message bodies and some or all of the dynamic control/status fields may be in a different portion of memory and a pointer to the body only is in the buffer ring.
- This variant is useful in systems in which virtual memory addresses used by the processor are different from the physical memory addresses used by the adaptor, as it permits each to have its own static ring structure pointing (in different memory address spaces) at the same (unlinked) message buffers.
- the body 72 of a message includes a destination pid field 74, a source pid field 76, and a data field 78.
- Messages are addressed to individual processes by having the corresponding pid as their destination pid field 74.
- the specific size and structure of the destination and source pid fields 74,76 is determined by the addressing needs of each specific instance of the network.
- the general structure of the pid field includes a node identifier 80 to be used to identify the node with which the process is associated, a local pid 82 to identify the process on each node, and an incarnation index 84 to be used as an incrementing counter which is incremented each time a process having a particular local pid 82 is created.
- the incarnation index 84 is provided to limit the rate of pid reuse and to provide sparseness.
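The described pid layout can be illustrated with hypothetical field widths (the patent leaves the exact sizes to each specific network instance); here a 32-bit pid is assumed, with a 10-bit node identifier, 12-bit local pid, and 10-bit incarnation index:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed packing: [ node:10 | local:12 | incarnation:10 ] in 32 bits. */
typedef uint32_t pid32;

static pid32 make_pid(uint32_t node, uint32_t local, uint32_t incarnation) {
    return (node << 22) | ((local & 0xFFFu) << 10) | (incarnation & 0x3FFu);
}

static uint32_t pid_node(pid32 p)        { return p >> 22; }         /* node identifier 80 */
static uint32_t pid_local(pid32 p)       { return (p >> 10) & 0xFFFu; } /* local pid 82 */
static uint32_t pid_incarnation(pid32 p) { return p & 0x3FFu; }      /* incarnation index 84 */
```

Incrementing the incarnation field on each creation with a given local pid is what limits the rate of pid reuse.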
- a particular incarnation of a process type is a process instance.
- a process instance table holds a mapping from the local pid 82 of the destination pid to the information needed to execute application code associated with the indicated process.
- For each local pid 82 there is a record 83 in the process instance table.
- the record consists mainly of a code pointer 86 to the location in shared memory storing the particular application code which is to handle a message for the process instance, and a state pointer 88 to the state information of the process instance.
- the incarnation index of the pid is also kept in a field 90 of this record.
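A minimal sketch of such a record and its lookup by local pid; the table size and names are assumptions:

```c
#include <assert.h>

#define MAX_LOCAL_PIDS 64

/* Hypothetical process instance record (record 83): code pointer 86,
 * state pointer 88, and incarnation index field 90. */
typedef struct instance {
    void (*code)(void *state, const char *msg);
    void *state;
    unsigned incarnation;
} instance_t;

/* Process instance table, indexed directly by the local pid. */
static instance_t instance_table[MAX_LOCAL_PIDS];

/* Map a local pid to its instance record; NULL if out of range. */
static instance_t *lookup(unsigned local_pid) {
    return local_pid < MAX_LOCAL_PIDS ? &instance_table[local_pid] : (instance_t *)0;
}
```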
- the output buffer ring is analogous to the input buffer ring.
Input Buffer Operation by Adaptor
- A function of the communications adaptor 36 is to transfer (via DMA) each valid (i.e. not an "empty" cell) received message into the buffer in the input buffer ring 50 pointed to by a state variable, input_write_head for example.
- the state variable input_write_head is used to point to the next empty buffer in the input buffer ring 50.
- This state variable may be private to the adaptor 36 or alternatively may be kept in shared memory 34, but is (apart from initialization) only read and updated by the adaptor 36.
- After each successful message transfer, the adaptor 36 sets a flag in the status field of the current buffer to indicate that the buffer is occupied with a valid message, and advances input_write_head to the next buffer in the chain.
- Another state variable input_read_head is maintained by the kernel 54 to control reading messages from the input buffer ring, as discussed in further detail below.
- a state variable, for example output_read_head, is used to point to the message to be read from the output buffer ring 52.
- This state variable may be private to the adaptor 36 or alternatively may be kept in shared memory 34, but is (apart from initialization) only read and updated by the adaptor 36.
- Another state variable, output_write_head, is maintained by the kernel 54 to control writing messages to the output buffer ring 52, as discussed in further detail below.
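The lock-free handshake between adaptor and kernel through the per-buffer status word might look like this; it is a single-producer/single-consumer sketch with assumed names, and the copies stand in for what would really be DMA transfers:

```c
#include <assert.h>
#include <string.h>

#define RING_N 4
#define BODY_N 32

static volatile int buf_status[RING_N];   /* status word per buffer: 0 free, 1 valid */
static char buf_body[RING_N][BODY_N];
static int write_head = 0;                /* advanced only by the adaptor */
static int read_head  = 0;                /* advanced only by the kernel  */

/* Adaptor side: fill the next buffer, then publish it by setting the flag. */
static int adaptor_put(const char *msg) {
    if (buf_status[write_head]) return 0;            /* ring full */
    strncpy(buf_body[write_head], msg, BODY_N - 1);
    buf_body[write_head][BODY_N - 1] = '\0';
    buf_status[write_head] = 1;                      /* publish */
    write_head = (write_head + 1) % RING_N;
    return 1;
}

/* Kernel side: consume in arrival order; clearing the flag recycles the buffer. */
static int kernel_get(char *out) {
    if (!buf_status[read_head]) return 0;            /* ring empty */
    strcpy(out, buf_body[read_head]);
    buf_status[read_head] = 0;                       /* recycle */
    read_head = (read_head + 1) % RING_N;
    return 1;
}
```

Since each head pointer is written by exactly one party and the status word carries the handoff, no semaphores or interrupts are needed in the normal path.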
- a representative message syntax is shown in Figure 5. As indicated above, it includes the destination 74 and source pids 76 at the front of each message.
- the source pid 76 is stamped automatically on each message by the kernel 54 of the run-time system, and the destination pid 74 is inserted, also by the kernel, as part of the message sending primitive operation.
- the application code 56 does not have direct access to either of these fields of a message, but can ask for the message source pid as an explicit call.
- the rest of the message contents (the data field 78) is the concern of a message marshalling/demarshalling apparatus, which ideally may be integrated with the application programming language. The details of this are not covered here.
- it encodes methods 92, format indications 94 (such as the number of arguments 96 and a compressed type signature), a standard machine-independent representation of basic types (including pids), and some sort of checksum 98.
- This format may optionally be encrypted.
- Figure 6 illustrates how the run-time system might be structured in a particular implementation.
- mi - "message in" input_read_head : points to the current input message; mi.dest pid is the destination pid contained in the message
- in - input byte stream : used by demarshalling code to extract message fields from messages in the input buffer ring 50
- no - "next out" output_write_head : next available output buffer, used to set destination and source pids in messages in the output buffer ring 52
- out - output byte stream : used by marshalling code to format outgoing messages in the output buffer ring 52
- pe - process entry : provides access to all process instance and type attributes through the process instance table 85
- sp - stack pointer : top of system stack 100
- te - trail stack end : end of undo log trail stack 102
- pc - program counter : points to the next executable instruction in the executing code.
- a memory pool 104 which is an area of memory 34 (see Figure 2) reserved for process state storage
- a code memory 106 which is an area of shared memory 34 for storing application code.
- An active timer list 112 (managed by the kernel) is also located in the shared memory 34.
- the code pointer 86 is shown pointing from the record in the process instance table 85 pointed to by pe to the location in the code memory 106 where the application code is stored.
- the state pointer 88 is shown pointing from the same record in the process instance table 85 to the location in the state memory 104 of the process state for that process instance.
- the process state may include one or more timer pointers 114 to timers in the active timer list 112.
- For this purpose a single system resource called the trail stack 102 is needed, consisting of a sufficiently large array (or stack) of (address, value) pairs.
- the environment register te (trail end) is set to point to the start 103 of the trail stack 102. Every write to a word of state memory during application task execution must first push the address (addr) and current value (oldval) of that location onto the trail stack 102. (The efficient implementation of this requires support from the language compilers and code generators.)
- te is reset to the top of the trail stack 102, ready for the next message .
- the trail is processed in reverse order (from te backwards), with the old values being restored to the locations indicated in the process state memory 104, for example with code along the lines of: while (te > trail_start) { val = trail(te); te--; p = trail(te); te--; *p = val; }
- the new values (newval) written during the unsuccessful transition are discarded. Trailing applies to all updates of state variables: these consist of those in the memory segment pointed to by the state pointer of a process and everything reachable from there.
- timer area associated with creation, resetting, and deletion of timers being used by the process; timers are discussed in more detail below
- process instance table associated with creation of processes
- memory pools associated with dynamic memory allocation
- Trailing is not needed for updates to the stack or output message buffers, or other volatile variables. Also, as an optimization, it may not be necessary for the initialization (constructor) code for state objects.
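The trail mechanism can be sketched as follows; `trailed_write`, `commit`, and `rollback` are hypothetical names for the push, discard, and reverse-restore steps described above:

```c
#include <assert.h>

#define TRAIL_MAX 128

/* The trail stack 102: an array of (address, old value) pairs. */
static int *trail_addr[TRAIL_MAX];
static int  trail_val[TRAIL_MAX];
static int  te = 0;   /* trail end register: index of next free slot */

/* Every write to a word of state memory first records its old value. */
static void trailed_write(int *addr, int newval) {
    trail_addr[te] = addr;
    trail_val[te]  = *addr;
    te++;
    *addr = newval;
}

/* Successful transition: simply discard the trail. */
static void commit(void) { te = 0; }

/* Failed transition: restore old values in reverse order, as in the
 * while (te > trail_start) loop above; the new values are discarded. */
static void rollback(void) {
    while (te > 0) {
        te--;
        *trail_addr[te] = trail_val[te];
    }
}

/* Demo state variables for a process. */
static int sx = 1, sy = 2;
```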
The Kernel Dispatcher Process
- The processor 32 runs the kernel between runs of application code.
- the kernel has a main loop which is run once per incoming message.
- the kernel loop is shown as a flow chart in Figure 8.
- the kernel loop will be described in the context of the previously introduced kernel data structures and environment registers which were described with reference to Figure 6.
- the kernel has an internal state variable input_read_head (or mi) which points to the next available message in the input buffer 50. It runs an idle task as long as the input buffer is empty, as indicated by the status register mi.status being equal to zero (block 200, yes path, and block 202).
- the function of the kernel is to process incoming messages as fast as possible in their order of arrival .
- the kernel extracts the message's destination pid, mi.dest, extracts the local pid from the destination pid, verifies that the local pid is in the correct range, and uses it to access a record in the process instance table 85.
- a field of the process instance record contains the incarnation index: this must match the destination pid from the incoming message exactly.
- if either check fails, the incoming message is deemed to be invalid (no path, block 204), an error counter is incremented (block 206), and the message is discarded by simply clearing its status field and advancing the input_read_head pointer (block 222).
- the kernel sets up the environment variables for process execution and atomicity support and invokes the application code pointed to by the code pointer in the process instance table.
- One such (notional) environment variable may provide the message source pid, in case the application code needs to access it.
- the state pointer 88 and the code pointer 86 are read from the process instance table 85 and fetched into registers and/or pushed onto the stack, depending on the call conventions.
- the method or command field of the message must be extracted, and the pointer to the next field of the message after the command, namely the input byte stream register in, is also fetched into a register and/or pushed onto the stack.
- a copy (local to the kernel) of the output_write_head is made (block 210) .
- the application code then runs (block 212) by invoking the application code pointed to by the code pointer, passing the input byte stream in and the state pointer as parameters, and expecting a return code or result "res".
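The validation and dispatch steps of the kernel loop (roughly blocks 204 through 212) might be sketched as follows, with assumed record and table names:

```c
#include <assert.h>

#define MAX_LOCAL 16

/* Assumed per-instance record: incarnation field plus code and state pointers. */
typedef struct {
    unsigned incarnation;
    void (*code)(void *state, const char *data);
    void *state;
} rec_t;

static rec_t table[MAX_LOCAL];
static int error_count = 0;

/* Demo handler counting delivered transitions. */
static int hits = 0;
static void bump(void *state, const char *data) { (void)state; (void)data; hits++; }

/* One iteration of the hypothetical kernel loop: verify the local pid range
 * and the incarnation index, then run the transition to completion. */
static void dispatch(unsigned local, unsigned incarnation, const char *data) {
    if (local >= MAX_LOCAL || table[local].incarnation != incarnation) {
        error_count++;          /* invalid message: count it and discard */
        return;
    }
    table[local].code(table[local].state, data);
}
```

Note how a stale pid fails the incarnation comparison, so messages addressed to a terminated process are silently dropped rather than delivered to its successor.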
- if an error occurs during the transition, diagnostic information can be captured (block 224), the system stack is reset to its normal point (i.e. where it would be just before the call to the application code), and control is transferred to the abort processing (block 216) described above.
- the relevant diagnostic information, roughly in order of increasing size and decreasing utility, consists of: (0) the error type and program counter, (1) the current input message, (2) the (complete) current process state, and (3) the complete input buffer ring (which includes both recently processed messages and the near-term future messages).
- a process transition may generate an outgoing message to which a response is expected. Such a process transition runs to completion (as always) and the process goes into "Inactive" mode. When the expected reply message is received, it goes into the input buffer ring and is handled by activating the original process in the usual manner.
- the action may be to check the status of the partner (by sending a message to it) , or to rebind its name, and continue until successful.
- the action may be simply to abort, with any retry generated by external (to those processes) means.
- a timer record may be declared along the lines of: timer : record link : ^timer; owner : process_id; value : integer; end; Timers can be created and destroyed by a process using the following two procedures; the timer pointer(s) are kept in the process state.
- This procedure creates a new timer record and returns its pointer; the owner field gets the id of the currently running process and the timer record is appended to a global timer chain. (If different clock resolutions are needed, then the resolution becomes an input parameter.)
- These timer chains are global variables managed by the kernel .
- Timer setting and cancelling may then be done very cheaply by the following two macros :
- the real time clock interrupt periodically arranges to run a timer scan (between application process executions) that runs through all the chained timers, decrementing any non-negative value fields. Any value field which becomes 0 results in a message being sent to the owner process.
- the timeout message here is assumed to have the command "timeout" so the resulting action is purely state driven.
- the set_timer can include a message code for use on the timeout message.
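The timer chain, the cheap set/cancel macros, and the periodic scan can be sketched along these lines; the names are assumptions, and the timeout message send is modelled as a simple counter:

```c
#include <assert.h>
#include <stddef.h>

/* Timer record, roughly following the declaration above. */
typedef struct timer_rec {
    struct timer_rec *next;  /* link in the global timer chain */
    unsigned owner;          /* id of the owning process */
    int value;               /* ticks remaining; negative = inactive */
} timer_rec;

static timer_rec *timer_chain = NULL;
static int timeouts_sent = 0;    /* stand-in for "send timeout message" */
static unsigned last_owner = 0;

/* create: chain a new timer for the given owner (statically allocated here;
 * the patent would draw it from a memory pool). */
static void timer_create(timer_rec *t, unsigned owner) {
    t->owner = owner;
    t->value = -1;
    t->next = timer_chain;
    timer_chain = t;
}

/* Setting and cancelling are cheap single field writes, as with the macros. */
#define SET_TIMER(t, ticks) ((t)->value = (ticks))
#define CANCEL_TIMER(t)     ((t)->value = -1)

/* Periodic scan: decrement non-negative values; any value which becomes 0
 * results in a timeout "message" to the owner. */
static void timer_scan(void) {
    for (timer_rec *t = timer_chain; t; t = t->next) {
        if (t->value >= 0) {
            t->value--;
            if (t->value == 0) { timeouts_sent++; last_owner = t->owner; }
        }
    }
}

static timer_rec t1;  /* demo timer */
```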
- Process ids may be used to represent capabilities. Having a pid entitles an entity to send whatever messages the associated process will accept and hence represents a capability to do certain things.
- pids are just binary values and can be stored in any convenient data structure and passed freely in messages, etc., and there are no controls or restrictions placed on such capability passing.
- revoking a capability previously granted can only be done by destroying the process and then creating a new one with a different pid to carry on its function if necessary; node reinitialization being an extreme case of this.
- the working assumption is that this is a benign cooperative environment, with all nodes "trusted” to be non-malicious, but possibly misbehaving under failure conditions.
- Incarnation numbers, which fill the otherwise unused bits of the pid, provide that the same process id is not reused within some time interval, sufficient that old memorized pids will likely have been purged. A cheap way to revoke outstanding capabilities is then to increment the incarnation number of a process while maintaining its state.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer And Data Communications (AREA)
- Communication Control (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP50344299A JP2002505050A (en) | 1997-06-24 | 1998-06-02 | Asynchronous message processing system and method |
| CA002292450A CA2292450A1 (en) | 1997-06-24 | 1998-06-02 | Asynchronous message processing system and method |
| EP98923964A EP0992142A2 (en) | 1997-06-24 | 1998-06-02 | Asynchronous message processing system and method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US88045997A | 1997-06-24 | 1997-06-24 | |
| US08/880,459 | 1997-06-24 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO1998059519A2 true WO1998059519A2 (en) | 1998-12-30 |
| WO1998059519A3 WO1998059519A3 (en) | 1999-03-25 |
Family
ID=25376329
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA1998/000537 Ceased WO1998059519A2 (en) | 1997-06-24 | 1998-06-02 | Asynchronous message processing system and method |
Country Status (4)
| Country | Link |
|---|---|
| EP (1) | EP0992142A2 (en) |
| JP (1) | JP2002505050A (en) |
| CA (1) | CA2292450A1 (en) |
| WO (1) | WO1998059519A2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116155842B (en) * | 2022-12-29 | 2025-09-09 | 天翼物联科技有限公司 | Method, system, equipment and medium for processing instruction message queue |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4736364A (en) * | 1986-03-12 | 1988-04-05 | American Telephone And Telegraph Company, At&T Bell Laboratories | Switching system control arrangements |
- 1998
- 1998-06-02 JP JP50344299A patent/JP2002505050A/en active Pending
- 1998-06-02 WO PCT/CA1998/000537 patent/WO1998059519A2/en not_active Ceased
- 1998-06-02 EP EP98923964A patent/EP0992142A2/en not_active Withdrawn
- 1998-06-02 CA CA002292450A patent/CA2292450A1/en not_active Abandoned
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2001033407A3 (en) * | 1999-11-01 | 2002-12-05 | Seebeyond Technology Corp | Systems and methods of message queuing |
| US6970945B1 (en) | 1999-11-01 | 2005-11-29 | Seebeyond Technology Corporation | Systems and methods of message queuing |
| US7363630B2 (en) | 1999-11-01 | 2008-04-22 | Sun Microsystems, Inc. | System and method of intelligent queuing |
| CN115827281A (en) * | 2023-02-02 | 2023-03-21 | 广州钛动科技股份有限公司 | Data processing method and system based on lightweight SQS message processing framework |
Also Published As
| Publication number | Publication date |
|---|---|
| EP0992142A2 (en) | 2000-04-12 |
| CA2292450A1 (en) | 1998-12-30 |
| WO1998059519A3 (en) | 1999-03-25 |
| JP2002505050A (en) | 2002-02-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Liskov et al. | Implementation of argus | |
| Das et al. | GTW: a time warp system for shared memory multiprocessors | |
| EP0850444B1 (en) | Support for application programs in a distributed environment | |
| US5452430A (en) | System and method for storing persistent and non-persistent queued data and for recovering the persistent data responsive to a system restart | |
| US7194569B1 (en) | Method of re-formatting data | |
| US8245207B1 (en) | Technique for dynamically restricting thread concurrency without rewriting thread code | |
| US5794005A (en) | Synchronous parallel emulation and discrete event simulation system with self-contained simulation objects and active event objects | |
| US7526777B2 (en) | Wireless device operating system (OS) application programmer's interface (API) | |
| CA2088429A1 (en) | Communications system | |
| US20060090103A1 (en) | Critical path profiling of threaded programs | |
| US11620215B2 (en) | Multi-threaded pause-less replicating garbage collection | |
| CN111813522A (en) | A Virtual ARINC 653 Simulation and Verification Platform | |
| CN112241398A (en) | A data migration method and system | |
| WO1998059519A2 (en) | Asynchronous message processing system and method | |
| CN104769553A (en) | System and method for supporting work sharing muxing in a cluster | |
| US7320044B1 (en) | System, method, and computer program product for interrupt scheduling in processing communication | |
| Klimiankou | Micro-CLK: returning to the asynchronicity with communication-less microkernel | |
| Ishikawa | Multiple threads template library | |
| Mazzucco et al. | Engineering distributed shared memory middleware for java | |
| EP1016964A2 (en) | System and method for permitting access to a singular network process by multiple user processes | |
| JPH02158840A (en) | Message area controlling system for inter-task communication | |
| CN120371570A (en) | Inter-process communication method and device | |
| Kakulavarapu et al. | Design of the runtime system for the Portable Threaded-C language | |
| Seidel et al. | Global synchronization algorithms for the Intel IPSC/860 | |
| Beineke et al. | DXNet: Scalable Messaging for Multi-Threaded Java-Applications Processing Big Data in Clouds |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): CA JP |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| AK | Designated states |
Kind code of ref document: A3 Designated state(s): CA JP |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| ENP | Entry into the national phase |
Ref document number: 2292450 Country of ref document: CA Kind code of ref document: A
|
| WWE | Wipo information: entry into national phase |
Ref document number: 1998923964 Country of ref document: EP |
|
| WWP | Wipo information: published in national office |
Ref document number: 1998923964 Country of ref document: EP |
|
| WWW | Wipo information: withdrawn in national office |
Ref document number: 1998923964 Country of ref document: EP |