
WO2016122658A1 - Request processing - Google Patents

Request processing

Info

Publication number
WO2016122658A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
request
application programming
programming interface
interface version
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2015/013963
Other languages
French (fr)
Inventor
Thomas W. Hanson
Justin York
Eric Thomas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Enterprise Development LP
Application filed by Hewlett Packard Enterprise Development LP filed Critical Hewlett Packard Enterprise Development LP
Priority to PCT/US2015/013963
Publication of WO2016122658A1
Current legal status: Ceased


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5055: Allocation of resources to service a request, the resource being a machine, considering software capabilities, i.e. software resources associated or available to the machine
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5018: Thread allocation

Definitions

  • Figure 3 is a block diagram illustrating one example of a system 300. System 300 may include at least one computing device and may provide management platform 102 previously described and illustrated with reference to Figures 1 and 2. System 300 includes a processor 302 and a machine-readable storage medium 306. Processor 302 is communicatively coupled to machine-readable storage medium 306 through a communication path 304.
  • Although the following description refers to a single processor and a single machine-readable storage medium, the description may also apply to a system with multiple processors and multiple machine-readable storage mediums. In such examples, the instructions may be distributed (e.g., stored) across multiple machine-readable storage mediums and executed by multiple processors.
  • Processor 302 includes one or more Central Processing Units (CPUs), microprocessors, and/or other suitable hardware devices for retrieval and execution of instructions stored in machine-readable storage medium 306. Processor 302 may fetch, decode, and execute instructions 308 to receive a request, instructions 310 to determine whether a sub-process has been initialized, instructions 312 to initialize a sub-process if a sub-process has not been initialized, and instructions 314 to execute the request. As an alternative or in addition, processor 302 may include one or more electronic circuits comprising a number of electronic components for performing the functionality of one or more of the instructions in machine-readable storage medium 306.
  • With respect to the executable instruction representations (e.g., boxes) described herein, the executable instructions and/or electronic circuits included within one box may, in alternate examples, be included in a different box illustrated in the figures or in a different box not shown.
  • Machine-readable storage medium 306 is a non-transitory storage medium and may be any suitable electronic, magnetic, optical, or other physical storage device that stores executable instructions. Machine-readable storage medium 306 may be, for example, Random Access Memory (RAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disc, and the like.
  • Machine-readable storage medium 306 may be disposed within system 300, as illustrated in Figure 3; in this case, the executable instructions may be installed on system 300. Alternatively, machine-readable storage medium 306 may be a portable, external, or remote storage medium that allows system 300 to download the instructions from the storage medium; in this case, the executable instructions may be part of an installation package.
  • Machine-readable storage medium 306 stores instructions to be executed by a processor (e.g., processor 302), including instructions 308, 310, 312, and 314 to process requests as previously described and illustrated with reference to Figures 1 and 2.
  • Processor 302 may execute instructions 308 to receive a request, the request including an indication of an application programming interface version. Processor 302 may execute instructions 310 to determine whether a sub-process has been initialized for the application programming interface version. Processor 302 may execute instructions 312 to initialize a sub-process, including preloading resources for the sub-process for the application programming interface version. Processor 302 may execute instructions 314 to execute the request using the sub-process for the application programming interface version by creating at least one thread of the sub-process.
  • In one example, the request is executed by the sub-process creating a thread pool to process the received request. The request may be associated with an input/output connection that is forwarded with the request to the sub-process for the application programming interface version.
  • Figure 4 is a flow diagram illustrating one example of a method 400 for request processing. A first request is received, the first request including an indication of a first application programming interface version of a first managed device and an associated input/output connection for the first request.
  • In one example, initializing the first sub-process includes loading a schema and a library for the first application programming interface version. Initializing the first sub-process may also include creating a global set of function pointers that address functions supplied by the library. At 408, the first request and the associated input/output connection for the first request are forwarded to the first sub-process. At 410, the first request is executed using the first sub-process by creating at least one thread of the first sub-process.
  • The method further includes receiving a second request, the second request including an indication of a second application programming interface version of a second managed device and an associated input/output connection for the second request. The method includes determining whether a second sub-process has been initialized for the second application programming interface version.
  • In response to determining that a sub-process has not been initialized for the second application programming interface version, the method includes initializing a second sub-process, including preloading resources for use by the second sub-process for processing requests associated with the second application programming interface version. The method includes maintaining the first sub-process, forwarding the second request and the associated input/output connection for the second request to the second sub-process, and executing the second request using the second sub-process by creating at least one thread of the second sub-process.
  • The method may further include receiving a third request, the third request including an indication of the first application programming interface version and an associated input/output connection for the third request. The method includes forwarding the third request and the associated input/output connection for the third request to the first sub-process and executing the third request using the first sub-process by creating at least one new thread of the first sub-process or by using at least one existing thread of the first sub-process.
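The first/second/third request flow above can be sketched in a few lines. This is a hedged illustration, not the patent's implementation: counters stand in for the expensive schema-and-library load, and all names (`handle_request`, `init_counts`, `sub_processes`) are invented for this sketch. It shows the key invariant: initialization happens once per API version, and later requests for the same version reuse the initialized sub-process.

```python
# Illustrative sketch of the method's control flow (names are assumptions).
init_counts = {}     # version -> number of times initialization ran
sub_processes = {}   # version -> preloaded resources for that version

def handle_request(version: str, payload: str) -> str:
    if version not in sub_processes:
        # First request for this version: pay the initialization cost once.
        init_counts[version] = init_counts.get(version, 0) + 1
        sub_processes[version] = f"schema+library for v{version}"
    # Subsequent requests reuse the preloaded resources.
    resources = sub_processes[version]
    return f"executed {payload} with {resources}"
```

Dispatching a third request for an already-seen version leaves the initialization count unchanged, which is exactly why response times stay low and predictable after the first request.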

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

One example of a system receives a request. The request includes an indication of an application programming interface version. The system determines whether a sub-process has been initialized for the application programming interface version. If needed, the system initializes a sub-process including preloading resources for the sub-process for the application programming interface version. The system executes the request using the sub-process for the application programming interface version by creating at least one thread of the sub-process which utilizes the preloaded resources.

Description

REQUEST PROCESSING
Background
[0001] Datacenter systems support the management of devices interconnected via a computer network. The managed devices include management interfaces that allow the managed devices to be configured by a management platform. To allow a management interface and the management platform to properly exchange data, each implements a common Application Programming Interface (API).
Brief Description of the Drawings
[0002] Figure 1 is a block diagram illustrating one example of a system.
[0003] Figure 2 illustrates one example of request processing of the system of Figure 1 .
[0004] Figure 3 is a block diagram illustrating one example of a processing system for implementing a management platform.
[0005] Figure 4 is a flow diagram illustrating one example of a method for request processing.
Detailed Description
[0006] In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific examples in which the disclosure may be practiced. It is to be understood that other examples may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims. It is to be understood that features of the various examples described herein may be combined, in part or whole, with each other, unless specifically noted otherwise.
[0007] To allow a management interface of a managed device and a software application of a management platform to properly exchange data, each implements a common Application Programming Interface (API). Over time, the API may evolve to new versions as new data and formats are developed and/or old data is deprecated. Management interfaces may or may not be updated to support new API versions and, if they are updated, there may be variable time lags across vendors, model numbers, or particular instances of a device. Therefore, the software application determines which version of the API a particular management interface is using and adapts to communicate using the same API version. For the software application to be able to communicate using the API version of a particular management interface, the management platform contains or has access to an implementation of each version of the API to be supported.
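The determine-then-adapt step can be sketched as follows. This is an illustrative assumption, not the patent's code: the `version=<n>` reply format, `probe_version`, and the `CLIENTS` table are all invented here to show the idea of selecting a per-version implementation at runtime.

```python
# Hypothetical per-version client implementations (illustrative only).
CLIENTS = {
    "1.0": lambda req: f"v1.0 handled {req}",
    "2.0": lambda req: f"v2.0 handled {req}",
}

def probe_version(interface_reply: str) -> str:
    # A real platform would query the device; here we parse a canned reply.
    for token in interface_reply.split():
        if token.startswith("version="):
            return token.split("=", 1)[1]
    raise ValueError("management interface did not report an API version")

def send_request(interface_reply: str, req: str) -> str:
    # Adapt to the interface's API version by picking a matching client.
    version = probe_version(interface_reply)
    if version not in CLIENTS:
        raise ValueError(f"unsupported API version {version}")
    return CLIENTS[version](req)
```

The table of clients plays the role of "an implementation of each version of the API to be supported" that the platform contains or can access.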
[0008] Where the software application supports large numbers of discrete management interfaces, the application may use multiple parallel threads of execution to concurrently support multiple different API versions. The application may also support a very high throughput in terms of transactions per second, which requires very quick access to the code that implements the various API versions.
[0009] Examples of this software application contain all or part of the implementation of a particular version of an API in a dynamically loadable library and may also contain a set of associated schemas. The software application determines the version of the API being used by a management interface with which the application is to communicate, identifies the corresponding dynamic library and schema to load, loads the dynamic library and schema, and then executes some or all of the contained code. The disclosed technique utilizes a tiered architecture of processes and threads, leveraging the specific attributes of each to optimize the access time to the dynamic library code and schema and to optimize the resources consumed by the software application, the dynamic library code, and the schemas.
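A Python analogue of loading a per-version dynamic library on demand is sketched below, with `importlib` standing in for `dlopen`. The `api_v<version>.py` file-naming scheme and the module cache are assumptions made for this sketch; the patent does not specify a layout.

```python
import importlib.util
import pathlib

# version -> loaded module, so the load cost is paid at most once per version.
_loaded = {}

def load_api_module(version: str, search_dir: pathlib.Path):
    """Load the implementation module for an API version, caching it."""
    if version in _loaded:
        return _loaded[version]
    path = search_dir / f"api_v{version}.py"   # assumed naming convention
    spec = importlib.util.spec_from_file_location(f"api_v{version}", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)            # runs the library's top-level code
    _loaded[version] = module
    return module
```

The cache mirrors the delay-mitigation described later: only the first request for a version pays the load cost.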
[0010] Figure 1 is a block diagram illustrating one example of a system 100. System 100 includes a management platform 102, a network 114, and a plurality of managed devices 118(1)-118(X), where "X" is any suitable number of managed devices. Management platform 102 is communicatively coupled to each managed device 118(1)-118(X) through a communication path 112, network 114, and a communication path 116. Network 114 may be an Ethernet network, a Fibre Channel network, the Internet, another suitable network, or a combination thereof.
[0011] Each managed device 118(1)-118(X) includes a management interface 120(1)-120(X), respectively. Each management interface 120(1)-120(X) implements an API version 122(1)-122(Y), respectively, where "Y" is any suitable number of API versions. More than one managed device may implement the same version of the API. Managed devices 118(1)-118(X) may include any system or device that is individually configurable, such as computer systems, storage systems, network devices (e.g., network switches), and/or individual boards or components within a system or device (e.g., a network adapter).
[0012] Management platform 102 includes at least one management software 106 and request processing 104. Request processing 104 is communicatively coupled to management software 106 through a communication path 108.
Management software 106 includes a software application for communicating with and configuring each managed device 118(1)-118(X) via a respective management interface 120(1)-120(X). Management software 106 sends processing requests for a managed device to request processing 104, including an indication of the API version of the managed device. Request processing 104 executes the requests by using a sub-process and threads of the sub-process corresponding to the API version of the managed device, as will be described below with reference to Figure 2. Once processing of a request is complete, request processing 104 returns a response to the request to management software 106.
[0013] Management platform 102 supports the multiple API versions 122(1)-122(Y) of management interfaces 120(1)-120(X) while providing high throughput. Request processing 104 uses separate resources, such as code libraries and data files, for each API version, which are loaded once upon a first request for a particular API version. Keeping the libraries and data files separate from the common core of request processing significantly reduces the size of the request processing executable file and eliminates the need for updates to request processing itself to support new API versions. Updates are still used, however, to provide new libraries and schemas for new API versions.
[0014] In one specific example, for an Adapter Management Interface Library (A-MIL), a management interface is on an interface card such as a network adapter. A-MIL implements a continuously running service that translates human-readable Extensible Markup Language (XML) data into binary data formatted in ASN.1 and vice versa. In A-MIL, there are two sets of resources that are specific to the API version associated with an incoming request: the dynamic library containing the code and the schema files that define the XML interface. Obtaining access to the dynamic library and schema files causes a delay. The impact of the delay, however, is mitigated by leveraging the characteristics of processes and threads as described herein.
[0015] Figure 2 illustrates one example of request processing 104 of system 100 of Figure 1. An executing application runs in a context that contains a set of resources including virtual memory, file descriptors, and Input/Output (I/O) connections. The specific set of resources varies based on the operating system. This context is typically referred to as a "process" or "thread." Generally, a process is an independent context while a thread shares many of the resources of a parent process with other threads under the same parent but executes independently of other threads. The specifics of process versus thread are operating system dependent. For example, Linux considers each to be a task with a particular set of attributes. These attributes may be selected in many different combinations to produce a task that is neither process nor thread as those terms are commonly used.
[0016] For purposes of this disclosure, a process is considered to be completely independent of other processes and a thread is considered to share, at a minimum, virtual memory, I/O connections, and files with other threads in the same process. Any Linux or other operating system task with these attributes is considered to be equivalent to the processes and threads disclosed herein.
[0017] Request processing 104 uses a tiered architecture including a top-level process as indicated at 204, a variable-sized set of sub-processes 206(1)-206(Y), and a variable number of threads within each sub-process, such as 212(1)-212(N). The top-level process at 204 is a dispatcher which receives incoming requests 202, maintains the set of sub-processes 206(1)-206(Y), and directs requests 202 to the appropriate sub-process 206(1)-206(Y). In this example, the set of sub-processes includes sub-processes 206(1)-206(Y), where "Y" is equal to the number of different API versions to be supported. Each sub-process 206(1)-206(Y) is used to execute requests for a particular API version. For example, sub-process 206(1) executes requests for API version 1 and sub-process 206(Y) executes requests for API version Y. Each sub-process 206(1)-206(Y) preloads a schema 208(1)-208(Y) and a library 210(1)-210(Y), respectively, for the API version supported by the sub-process. Each sub-process 206(1)-206(Y) creates threads to execute requests for the API version supported by the sub-process. For example, sub-process 206(1) creates threads 212(1)-212(N), where "N" is any suitable number of threads for concurrently executing requests for sub-process 206(1). Sub-process 206(Y) creates threads 214(1)-214(M), where "M" is any suitable number of threads for concurrently executing requests for sub-process 206(Y).
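The three tiers can be sketched as follows. For portability this sketch models the per-version sub-processes as threads with request queues; a production system would use real OS processes (e.g., fork) so each version's library loads in its own address space. The queue plumbing, handler bodies, and all names are illustrative assumptions.

```python
import queue
import threading

def version_worker(version, requests, responses):
    """Middle tier: one worker per API version (a sub-process in the patent)."""
    preloaded = f"resources-for-v{version}"   # loaded once at initialization

    def handle(req):
        # Bottom tier: one short-lived thread per request.
        responses.put(f"v{version} handled {req} using {preloaded}")

    while True:
        req = requests.get()
        if req is None:                       # shutdown sentinel
            break
        threading.Thread(target=handle, args=(req,)).start()

class Dispatcher:
    """Top tier: routes each request to its API version's worker."""

    def __init__(self):
        self.workers = {}                     # version -> request queue
        self.responses = queue.Queue()

    def dispatch(self, version, req):
        if version not in self.workers:       # first request for this version
            q = queue.Queue()
            threading.Thread(target=version_worker,
                             args=(version, q, self.responses),
                             daemon=True).start()
            self.workers[version] = q
        self.workers[version].put(req)
```

Each tier does only its own job: the dispatcher never loads libraries, the workers never parse requests from the network, and the request threads inherit everything their parent worker initialized.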
[0018] In operation, upon receipt of an incoming request 202, dispatcher 204 examines the request to determine the API version with which the request complies and, if the API version is supported, selects the sub-process that implements that version and forwards the request to that sub-process. If the API version has not yet been encountered, dispatcher 204 creates a new sub-process, adds the new sub-process to the set of available sub-processes, and forwards the request to the new sub-process. Each incoming request is associated with an I/O connection (e.g., a socket file descriptor in Linux) that is used to send the response back to the source of the request. This I/O connection is forwarded to the sub-process along with the request.
[0019] When a sub-process is created by dispatcher 204, the sub-process is informed which API version to support. As part of the initialization process of the sub-process, the sub-process loads the correct library and schemas, configures memory, and may perform other steps to enable processing of requests for that API version. These other steps may include, for example, creating a global set of function pointers that address the functions supplied by the dynamic library. At this point, the cost (i.e., time delay) of the initialization of the sub-process has been incurred and requests can be processed without again incurring this cost. Once initialized, a sub-process waits for requests to be forwarded by dispatcher 204 and processes the requests as they arrive. For each request, the sub-process creates a new thread and provides the thread with the request and the associated I/O connection. Once the thread has been created, the sub-process is free to accept and process another request and may create a parallel thread for each subsequent request.
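One way to picture the one-time initialization is a table of function pointers built when the sub-process starts. The sketch below is hypothetical Python (a real implementation in the described environment would more likely use dlopen/dlsym against a versioned shared library); the class and function names are illustrative:

```python
class VersionedSubProcess:
    """Pays the initialization cost once, then serves many requests."""

    init_count = 0  # counts how many times the expensive load has run

    def __init__(self, api_version):
        self.api_version = api_version
        # Stand-in for loading the dynamic library and schemas: build a
        # per-version table of "function pointers" exactly once.
        VersionedSubProcess.init_count += 1
        self.functions = {
            "get": lambda target: f"v{api_version} get {target}",
            "set": lambda target: f"v{api_version} set {target}",
        }

    def execute(self, op, target):
        # Dispatch through the preloaded table; no per-request loading.
        return self.functions[op](target)
```

Because the table is built in the constructor, every later request dispatches through it without paying the load cost again.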
[0020] Each thread, when created, has available to it all of the resources that were initialized by its parent sub-process. This includes the library, function pointers, and schema files as well as the I/O connection to the requestor. The thread makes use of these resources to process the received request and send the response back to the requestor via the associated I/O connection. After processing of a request is complete, the thread exits (i.e., terminates). In this example, no idle threads are maintained at any time, thereby avoiding the overhead of a thread pool. In another example, a fixed or dynamic thread pool may be maintained by a sub-process rather than the sub-process creating threads for each request. There may be as many, or as few, concurrent threads as needed to handle the processing load at any point in time.
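The thread-per-request lifecycle can be illustrated as in the following hypothetical sketch: the worker inherits its parent's preloaded resources, writes its response to the requestor's connection, and then simply terminates, so no idle threads remain between requests. All names here are illustrative:

```python
import threading


def serve_request(resources, request, connection):
    """Worker body: use the parent's preloaded resources, respond, exit."""
    connection.append(f"{resources['library']} handled {request}")


def spawn_worker(resources, request, connection):
    # A thread is created per request and exits when its work is done;
    # no thread pool is maintained between requests.
    t = threading.Thread(target=serve_request,
                         args=(resources, request, connection))
    t.start()
    return t
```

After `join()`, the thread object reports it is no longer alive, mirroring the "thread exits after processing" behavior described above.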
[0021] By using request processing 104, management platform 102 provides low and predictable response times, uses minimal resources, and is less complex than some alternative solutions. Specifically, in request processing 104, the library and schema files associated with a particular API version are loaded once, when a request associated with that API version is first received. All subsequent requests for the same API version re-use the preloaded resources. Since no new resources are loaded if the API version has already been encountered, there is little variability between requests other than that associated with the complexity of the request. Management platform 102 supports a variable number of API versions, and the number of API versions operational at one time can vary from one or two to ten or more.
[0022] Figure 3 is a block diagram illustrating one example of a system 300. System 300 may include at least one computing device and may provide management platform 102 previously described and illustrated with reference to Figures 1 and 2. System 300 includes a processor 302 and a machine-readable storage medium 306. Processor 302 is communicatively coupled to machine-readable storage medium 306 through a communication path 304. Although the following description refers to a single processor and a single machine-readable storage medium, the description may also apply to a system with multiple processors and multiple machine-readable storage mediums. In such examples, the instructions may be distributed (e.g., stored) across multiple machine-readable storage mediums and the instructions may be distributed (e.g., executed) across multiple processors.
[0023] Processor 302 includes one or more Central Processing Units (CPUs), microprocessors, and/or other suitable hardware devices for retrieval and execution of instructions stored in machine-readable storage medium 306. Processor 302 may fetch, decode, and execute instructions 308 to receive a request, instructions 310 to determine whether a sub-process has been initialized, instructions 312 to initialize a sub-process if a sub-process has not been initialized, and instructions 314 to execute the request. As an alternative or in addition to retrieving and executing instructions, processor 302 may include one or more electronic circuits comprising a number of electronic components for performing the functionality of one or more of the instructions in machine-readable storage medium 306. With respect to the executable instruction representations (e.g., boxes) described and illustrated herein, it should be understood that part or all of the executable instructions and/or electronic circuits included within one box may, in alternate examples, be included in a different box illustrated in the figures or in a different box not shown.
[0024] Machine-readable storage medium 306 is a non-transitory storage medium and may be any suitable electronic, magnetic, optical, or other physical storage device that stores executable instructions. Thus, machine-readable storage medium 306 may be, for example, Random Access Memory (RAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disc, and the like. Machine-readable storage medium 306 may be disposed within system 300, as illustrated in Figure 3. In this case, the executable instructions may be installed on system 300. Alternatively, machine-readable storage medium 306 may be a portable, external, or remote storage medium that allows system 300 to download the instructions from the
portable/external/remote storage medium. In this case, the executable instructions may be part of an installation package.
[0025] Machine-readable storage medium 306 stores instructions to be executed by a processor (e.g., processor 302) including instructions 308, 310, 312, and 314 to process requests as previously described and illustrated with reference to Figures 1 and 2. Processor 302 may execute instructions 308 to receive a request, the request including an indication of an application programming interface version. Processor 302 may execute instructions 310 to determine whether a sub-process has been initialized for the application programming interface version.
[0026] Processor 302 may execute instructions 312 to initialize a sub-process including preloading resources for the sub-process for the application
programming interface version in response to determining that a sub-process has not been initialized for the application programming interface version. In one example, the sub-process is initialized by loading a schema and a library for the application programming interface version. A plurality of sub-processes for a plurality of application programming interface versions may be maintained.

[0027] Processor 302 may execute instructions 314 to execute the request using the sub-process for the application programming interface version by creating at least one thread of the sub-process. In one example, the request is executed by the sub-process creating a thread pool to process the received request. The request may be associated with an input/output connection that is forwarded with the request to the sub-process for the application programming interface version.
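The thread-pool variant mentioned above could be sketched as follows, assuming Python's `concurrent.futures.ThreadPoolExecutor` as a stand-in for a per-sub-process pool; the class name and sizes are illustrative, not from the disclosure:

```python
from concurrent.futures import ThreadPoolExecutor


class PooledSubProcess:
    """Sub-process variant that reuses a fixed pool of worker threads."""

    def __init__(self, api_version, workers=4):
        self.api_version = api_version
        # A fixed pool replaces per-request thread creation.
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def execute(self, request):
        # Submit to the pool instead of spawning a fresh thread.
        return self.pool.submit(lambda: f"v{self.api_version}:{request}")
```

The trade-off is pool bookkeeping overhead in exchange for avoiding per-request thread creation cost, which may matter under sustained load.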
[0028] Figure 4 is a flow diagram illustrating one example of a method 400 for request processing. At 402, a first request is received, the first request including an indication of a first application programming interface version of a first managed device and an associated input/output connection for the first request. At 404, it is determined whether a first sub-process has been initialized for the first application programming interface version. If a new sub-process is needed, at 406, a first sub-process is initialized including preloading resources for use by the first sub-process for processing requests associated with the first application programming interface version. This initialization is in response to determining that a sub-process has not been initialized for the first application programming interface version. In one example, initializing the first sub-process includes loading a schema and a library for the first application programming interface version. Initializing the first sub-process may also include creating a global set of function pointers that address functions supplied by the library. At 408, the first request and the associated input/output connection for the first request are forwarded to the first sub-process. At 410, the first request is executed using the first sub-process by creating at least one thread of the first sub-process.
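Steps 402-410 above can be sketched end to end. This hypothetical Python function lazily initializes per-version resources, forwards the request with its connection, and runs it on a fresh thread; the names and resource stand-ins are illustrative:

```python
import threading

sub_processes = {}  # api_version -> preloaded resources (404/406 state)


def process_request(request, api_version, connection):
    # 404/406: initialize the sub-process for this version only once.
    if api_version not in sub_processes:
        sub_processes[api_version] = {"schema": f"schema-v{api_version}",
                                      "library": f"library-v{api_version}"}
    resources = sub_processes[api_version]
    # 408/410: forward request + connection, execute on a new thread.
    t = threading.Thread(
        target=lambda: connection.append(f"{resources['library']}:{request}"))
    t.start()
    return t
```

A second request for the same API version finds the resources already in place and skips straight to execution.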
[0029] In one example, the method further includes receiving a second request, the second request including an indication of a second application programming interface version of a second managed device and an associated input/output connection for the second request. The method includes determining whether a second sub-process has been initialized for the second application
programming interface version. The method includes initializing a second sub-process including preloading resources for use by the second sub-process for processing requests associated with the second application programming interface version in response to determining that a sub-process has not been initialized for the second application programming interface version. The method includes maintaining the first sub-process, forwarding the second request and the associated input/output connection for the second request to the second sub-process, and executing the second request using the second sub-process by creating at least one thread of the second sub-process.
[0030] The method may further include receiving a third request, the third request including an indication of the first application programming interface version and an associated input/output connection for the third request. The method includes forwarding the third request and the associated input/output connection for the third request to the first sub-process and executing the third request using the first sub-process by creating at least one new thread of the first sub-process or by using at least one existing thread of the first sub-process.
[0031] Although specific examples have been illustrated and described herein, a variety of alternate and/or equivalent implementations may be substituted for the specific examples shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the specific examples discussed herein. Therefore, it is intended that this disclosure be limited only by the claims and the equivalents thereof.

Claims

1. A system comprising:
a management platform; and
a plurality of managed devices communicatively coupled to the management platform, each managed device comprising a management interface having an application programming interface version to communicate with the management platform,
wherein the management platform is to receive a request, the request including an indication of an application programming interface version of a managed device, determine whether a sub-process has been initialized for the application programming interface version of the managed device, initialize a sub-process including preloading resources for the sub-process for the application programming interface version of the managed device in response to determining that a sub-process has not been initialized for the application programming interface version of the managed device, and execute the request using the sub-process for the application programming interface version of the managed device by creating at least one thread of the sub-process.
2. The system of claim 1, wherein the management platform is to initialize the sub-process by loading a schema and a library for the application
programming interface version of the managed device.
3. The system of claim 1, wherein the management platform is to terminate the at least one thread once the request is completed.
4. The system of claim 1, wherein the management platform maintains a thread pool for the sub-process to execute the request.
5. The system of claim 1, wherein the managed device is a network adapter.
6. A machine-readable storage medium encoded with instructions, the instructions executable by a processor of a system to cause the system to:
receive a request, the request including an indication of an application programming interface version;
determine whether a sub-process has been initialized for the application programming interface version;
initialize a sub-process including preloading resources for the sub-process for the application programming interface version in response to determining that a sub-process has not been initialized for the application programming interface version; and
execute the request using the sub-process for the application programming interface version by creating at least one thread of the sub-process.
7. The machine-readable storage medium of claim 6, wherein the sub-process is initialized by loading a schema and a library for the application programming interface version.
8. The machine-readable storage medium of claim 6, wherein the request is executed by the sub-process creating a thread pool to process the received request.
9. The machine-readable storage medium of claim 6, wherein the request is associated with an input/output connection that is forwarded with the request to the sub-process for the application programming interface version.
10. The machine-readable storage medium of claim 6, wherein the instructions are executable by the processor to further cause the system to:
maintain a plurality of sub-processes for a plurality of application programming interface versions.
11. A method comprising:
receiving a first request, the first request including an indication of a first application programming interface version of a first managed device and an associated input/output connection for the first request;
determining whether a first sub-process has been initialized for the first application programming interface version;
initializing a first sub-process including preloading resources for the first sub-process for the first application programming interface version in response to determining that a sub-process has not been initialized for the first application programming interface version;
forwarding the first request and the associated input/output connection for the first request to the first sub-process; and
executing the first request using the first sub-process by creating at least one thread of the first sub-process.
12. The method of claim 11, further comprising:
receiving a second request, the second request including an indication of a second application programming interface version of a second managed device and an associated input/output connection for the second request;
determining whether a second sub-process has been initialized for the second application programming interface version;
initializing a second sub-process including preloading resources for the second sub-process for the second application programming interface version in response to determining that a sub-process has not been initialized for the second application programming interface version;
maintaining the first sub-process;
forwarding the second request and the associated input/output connection for the second request to the second sub-process; and
executing the second request using the second sub-process by creating at least one thread of the second sub-process.
13. The method of claim 11, wherein initializing the first sub-process comprises loading a schema and a library for the first application programming interface version.
14. The method of claim 13, wherein initializing the first sub-process further comprises creating a global set of function pointers that address functions supplied by the library.
15. The method of claim 11, further comprising:
receiving a third request, the third request including an indication of the first application programming interface version and an associated input/output connection for the third request;
forwarding the third request and the associated input/output connection for the third request to the first sub-process; and
executing the third request using the first sub-process by creating at least one new thread of the first sub-process or by using at least one existing thread of the first sub-process.
PCT/US2015/013963 2015-01-30 2015-01-30 Request processing Ceased WO2016122658A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/013963 WO2016122658A1 (en) 2015-01-30 2015-01-30 Request processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/013963 WO2016122658A1 (en) 2015-01-30 2015-01-30 Request processing

Publications (1)

Publication Number Publication Date
WO2016122658A1 true WO2016122658A1 (en) 2016-08-04

Family

ID=56544074

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/013963 Ceased WO2016122658A1 (en) 2015-01-30 2015-01-30 Request processing

Country Status (1)

Country Link
WO (1) WO2016122658A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070101335A1 (en) * 2005-11-03 2007-05-03 Microsoft Corporation Identifying separate threads executing within a single process
US20100131956A1 (en) * 2008-11-24 2010-05-27 Ulrich Drepper Methods and systems for managing program-level parallelism
US20100162271A1 (en) * 2008-12-23 2010-06-24 International Business Machines Corporation Management of Process-to-Process Intra-Cluster Communication Requests
US20110087731A1 (en) * 2009-10-08 2011-04-14 Laura Wong Systems and methods to process a request received at an application program interface
US20110093870A1 (en) * 2009-10-21 2011-04-21 International Business Machines Corporation High Performance and Resource Efficient Communications Between Partitions in a Logically Partitioned System


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15880548

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15880548

Country of ref document: EP

Kind code of ref document: A1