CN109426561A - A kind of task processing method, device and equipment - Google Patents
A kind of task processing method, device and equipment
- Publication number
- CN109426561A (application number CN201710757902.3A)
- Authority
- CN
- China
- Prior art keywords
- thread
- task
- quota
- available
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources to service a request
- G06F9/5027—Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06F9/5061—Partitioning or combining of resources
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
- G06F2209/5018—Thread allocation
- G06F2209/503—Resource availability
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
The application provides a task processing method, apparatus, and device. The method includes: receiving a processing request for a task, and determining the task fixed thread quota corresponding to the task; if no available thread resource exists in the task fixed thread quota, determining whether an available thread resource exists in the usable dynamic thread number, where the task fixed thread quota comprises thread resources that the task may use, and the usable dynamic thread number comprises thread resources shared by multiple tasks; if so, allocating a thread resource to the task according to the usable dynamic thread number; and processing the task through the thread resource allocated to the task. With this technical solution, task processing capability can be improved, task processing time shortened, user waiting time reduced, and user experience improved. Both isolation and sharing of thread resources can be achieved.
Description
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, and a device for task processing.
Background
With the rapid development of the internet, a server needs to serve more and more users, and the types of services grow accordingly, for example user registration, commodity data modification, order handling, and commodity search. Handling each of these services requires the server to process a large number of tasks. Processing these tasks takes a long time, so the user's waiting time is correspondingly long, which affects the user experience to a certain extent. How to process these tasks quickly and efficiently has therefore become a primary problem for servers.
Disclosure of Invention
The application provides a task processing method, comprising:
receiving a processing request for a task, and determining a task fixed thread quota corresponding to the task;
if no available thread resource exists in the task fixed thread quota, determining whether an available thread resource exists in the usable dynamic thread number, where the task fixed thread quota comprises thread resources that the task may use, and the usable dynamic thread number comprises thread resources shared by a plurality of tasks;
if so, allocating a thread resource to the task according to the usable dynamic thread number;
and processing the task through the thread resource allocated to the task.
The present application further provides a task processing apparatus, comprising:
a receiving module, configured to receive a processing request for a task;
a determining module, configured to determine a task fixed thread quota corresponding to the task, and, when no available thread resource exists in the task fixed thread quota, determine whether an available thread resource exists in the usable dynamic thread number, where the task fixed thread quota comprises thread resources that the task may use, and the usable dynamic thread number comprises thread resources shared by a plurality of tasks;
an allocating module, configured to allocate a thread resource to the task according to the usable dynamic thread number when an available thread resource exists in the usable dynamic thread number;
and a processing module, configured to process the task through the thread resource allocated to the task.
The present application further provides a task processing device, comprising:
a processor, configured to receive a processing request for a task and determine a task fixed thread quota corresponding to the task; if no available thread resource exists in the task fixed thread quota, determine whether an available thread resource exists in the usable dynamic thread number, where the task fixed thread quota comprises thread resources that the task may use, and the usable dynamic thread number comprises thread resources shared by a plurality of tasks; if so, allocate a thread resource to the task according to the usable dynamic thread number; and process the task through the thread resource allocated to the task.
Based on the above technical solution, in the embodiments of the present application, a task fixed thread quota can be configured for each task, where the quota comprises thread resources that the task may use, and a usable dynamic thread number can be configured for a plurality of tasks, where the number comprises thread resources shared by those tasks. After a processing request for a task is received, a thread resource can be allocated to the task according to the task fixed thread quota or the usable dynamic thread number, and the task is processed through the allocated thread resource. In batch-task scenarios, task processing capability can thus be improved, task processing time shortened, user waiting time reduced, and user experience improved. Because a fixed thread quota is configured for each task, each task owns exclusive thread resources, so thread resources are isolated; because a usable dynamic thread number is configured for a plurality of tasks, those tasks also hold shared thread resources, so thread resources can be shared.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description cover only some embodiments of the present application, and those skilled in the art can derive other drawings from them.
FIG. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
FIGS. 2A-2D are flow diagrams of a task processing method in one embodiment of the present application;
FIG. 3 is a block diagram of a task processing device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The embodiments of the present application provide a task processing method, which may be applied to a task processing device. The task processing device may be a server, a personal computer (PC), a notebook computer, a data platform, an e-commerce platform, a tablet computer, a terminal device, and the like; the type of the task processing device is not limited, and any device capable of processing tasks falls within the protection scope of the embodiments of the present application.
In order to implement the processing of the task, it is necessary to allocate a thread resource to the task and process the task through the thread resource. And the creation and release of thread resources require system overhead, resulting in longer task processing time. For example, when processing task 1, a thread resource is created for task 1, after the processing of task 1 is completed, the thread resource is released, when processing task 2, a thread resource is created for task 2, and so on. Therefore, when the task processing device needs to process a large number of tasks, thread resources are continuously created and released, a large amount of system overhead is consumed, the task processing time is long, the waiting time of a user is long, and the user experience is influenced.
In view of the above, a thread pool may be used to process tasks quickly and efficiently; the thread pool contains a large number of thread resources. When a task needs to be processed, a thread resource can be taken directly from the thread pool, the task is processed through that thread resource, and after the task is processed the thread resource is returned directly to the thread pool. Because all thread resources live in the thread pool, there is no need to create a thread resource for each task and release it afterwards, so system overhead can be saved, task processing capability improved, task processing time shortened, user waiting time reduced, and user experience improved.
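As an illustrative sketch (not part of the patent text), the reuse idea can be seen with Python's standard `concurrent.futures.ThreadPoolExecutor`: workers are created once, and every submitted task borrows one instead of paying per-task creation and teardown cost.

```python
from concurrent.futures import ThreadPoolExecutor

# Four worker threads are created once; all eight tasks reuse them,
# so no per-task thread creation/release overhead is paid.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda x: x * x, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

`pool.map` returns results in submission order even though the squares are computed concurrently on reused worker threads.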
Based on the thread resources in the thread pool, the task processing device may process the task in the following manner:
in the first mode, a global thread pool is used, that is, all tasks share one global thread pool.
For example, after receiving a processing request of task 1, the task processing device selects a thread resource from the global thread pool, and processes task 1 through the thread resource. After receiving the processing request of task 2, one thread resource is selected from the global thread pool, and task 2 is processed through the thread resource. After receiving the processing request of task 3, one thread resource is selected from the global thread pool, and task 3 is processed through the thread resource. And so on, all tasks may share a global thread pool.
However, this mode has the following problem: because thread resources are limited (e.g., 20 thread resources), after the task processing device receives a large number of processing requests for task 1, task 1 may occupy all thread resources in the global thread pool, leaving no available thread resources for other tasks.
In the second mode, a separate thread pool is used, that is, each task uses its own separate thread pool.
For example, task 1 corresponds to separate thread pool 1, task 2 corresponds to separate thread pool 2, and so on. After receiving the processing request of task 1, the task processing device selects a thread resource from the individual thread pool 1, and processes task 1 through this thread resource. After receiving a processing request of task 2, a thread resource is selected from the individual thread pool 2, and task 2 is processed by this thread resource. And so on, each task uses a separate thread pool, i.e., processes the task using thread resources in the separate thread pool.
However, this mode has the following problem: if the number of processing requests for task 1 is large and the number for task 2 is small, the thread resources in separate thread pool 1 may not meet task 1's processing needs while the thread resources in separate thread pool 2 sit largely idle, causing a waste of resources.
In the third mode, an elastic thread pool is used, which achieves both dynamic allocation and isolation of thread resources (solving the problem of the first mode) and sharing of thread resources (solving the problem of the second mode).
In one example, the elastic thread pool may include, but is not limited to: the task thread pool and the shared thread pool, each task can correspond to one task thread pool independently, and all tasks can correspond to one shared thread pool together. The task thread pool may include thread resources that can be used by the task, and the shared thread pool may include thread resources that can be commonly used by a plurality of tasks (e.g., all tasks).
Referring to fig. 1, taking tasks 1 and 2 to be processed as an example, a task thread pool 1 is set for task 1, a task thread pool 2 is set for task 2, and task 1 and task 2 share a shared thread pool. Each type of service (e.g., a registered user service, a commodity data modification service, an order service, a commodity search service, etc.) may correspond to one task, and the task is used to process all requests for the service. For example, task 1 is used to process all requests corresponding to the services of the registered users, that is, after receiving the request for the services of the registered users, task 1 processes the requests. For another example, the task 2 is used for processing all requests corresponding to the commodity data modification service, that is, after receiving a request for the commodity data modification service, the task 2 processes the request.
In one example, the third mode, which uses the elastic thread pool, involves the following concepts:
1. Maximum thread number: the maximum number of threads the task processing device allows. The maximum thread number is related to the performance of the task processing device and can therefore be set according to that performance; the setting mode is not limited and may be chosen from experience, for example 1000.
2. Task fixed thread quota: the thread resources exclusive to a task, i.e. the number of available thread resources in the task's thread pool. It can be set per task from experience, with no limitation on the setting mode, as long as the sum of all tasks' fixed thread quotas is less than the maximum thread number. For example, a fixed quota of 3 for task 1 means that task 1 exclusively uses 3 thread resources, which task 2 cannot use; a fixed quota of 2 for task 2 means that task 2 exclusively uses 2 thread resources, which task 1 cannot use. To implement this, the fixed quota of 3 may be configured on task thread pool 1 corresponding to task 1, so that task thread pool 1 has 3 thread resources; the fixed quota of 2 may be configured on task thread pool 2 corresponding to task 2, so that task thread pool 2 has 2 thread resources.
3. Usable dynamic thread number: the thread resources that all tasks may share, i.e. the number of available thread resources in the shared thread pool described above. The usable dynamic thread number may be the difference between the maximum thread number and the sum of all task fixed thread quotas. For example, with a maximum thread number of 10, a fixed quota of 3 for task thread pool 1, and a fixed quota of 2 for task thread pool 2, the usable dynamic thread number of the shared thread pool is 5 (10 − 3 − 2), meaning task 1 and task 2 may share the 5 thread resources of the shared thread pool.
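A minimal sketch of that arithmetic (the values are the example's from the text, not mandated by the patent):

```python
# Example configuration from the text: maximum of 10 threads,
# fixed quotas of 3 (task 1) and 2 (task 2).
MAX_THREADS = 10                              # maximum thread number
task_fixed_quota = {"task1": 3, "task2": 2}   # task fixed thread quotas

# Usable dynamic thread number = maximum thread number
# minus the sum of all task fixed thread quotas.
usable_dynamic = MAX_THREADS - sum(task_fixed_quota.values())
print(usable_dynamic)  # 5
```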
In summary, each task thread pool has a task fixed thread quota, that is, each task can use its own thread resource in the task thread pool, so that the thread resources can be isolated by the task thread pool. Since the shared thread pool has available dynamic thread number, that is, thread resources which can be shared by all tasks are included in the shared thread pool, the sharing of the thread resources can be realized through the shared thread pool.
4. Task dynamic thread quota: the number of thread resources in the usable dynamic thread number (i.e. the shared thread pool) that a single task may use — in other words, the number of shared-pool thread resources the task may occupy. It can be set per task from experience, with no limitation on the setting mode. To prevent one task from occupying all thread resources in the shared thread pool so that other tasks cannot use them, a task dynamic thread quota is set for each task, indicating that the task may use at most that many thread resources from the shared thread pool.
For example, the task dynamic thread quota of task 1 is 3, which means that task 1 can only use 3 thread resources in the shared thread pool, that is, task 1 may also use 3 thread resources in the shared thread pool on the basis of using 3 thread resources of the task fixed thread quota. The task dynamic thread quota of the task 2 is 3, which means that the task 2 can only use 3 thread resources in the shared thread pool, that is, the task 2 can also use 3 thread resources in the shared thread pool on the basis of using 2 thread resources of the task fixed thread quota.
When available thread resources exist in the shared thread pool, task 1 may use them only while the number of shared-pool thread resources it is using is less than 3; once that number equals 3, task 1 can no longer use thread resources in the shared thread pool. Likewise, task 2 may use shared-pool thread resources while it is using fewer than 3, and no longer once it is using 3. When no thread resource is available in the shared thread pool, neither task 1 nor task 2 can use the shared thread pool.
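The two conditions above can be expressed as one predicate. This is a hypothetical helper (not from the patent), assuming a per-task counter of shared-pool threads in use is tracked:

```python
def can_borrow_from_shared(shared_free: int,
                           used_by_task: int,
                           task_dynamic_quota: int) -> bool:
    """A task may take a shared-pool thread only while the shared pool
    has a free thread AND the task is still below its own task dynamic
    thread quota."""
    return shared_free > 0 and used_by_task < task_dynamic_quota
```

For instance, with a dynamic quota of 3, a task already holding 3 shared-pool threads is refused even if the pool still has free threads.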
In the application scenario, referring to fig. 2A, a flowchart of a task processing method is shown.
Step 201, receiving a processing request of a task, and determining a task fixed thread quota corresponding to the task.
Step 202, if there is no available thread resource in the task fixed thread quota, it is determined whether there is an available thread resource in the available dynamic thread number. If so, step 203 is performed. If not, step 205 is performed. The task fixed thread quota may include thread resources that can be used by the task, and the usable dynamic thread count may include thread resources shared by a plurality of tasks (i.e., all tasks).
In an example, when the task fixed thread quota corresponding to the task is greater than 0, it indicates that there is an available thread resource in the task fixed thread quota, that is, there is an available thread resource in the task thread pool of the task. When the task fixed thread quota corresponding to the task is equal to 0, it indicates that no available thread resource exists in the task fixed thread quota, that is, no available thread resource exists in the task thread pool of the task.
The initial value of the task fixed thread quota was introduced above; for example, the initial value for task 1 is 3 and for task 2 is 2. Each time a thread resource is allocated to the task according to its fixed thread quota, the quota is decreased by 1; when the quota reaches 0, no available thread resource exists in it, so no more thread resources are allocated to the task from the fixed thread quota.
After a thread resource is allocated to the task according to the task fixed thread quota, the resource can be released once the task has been processed, and the quota is increased by 1. Thus, whenever the task fixed thread quota is greater than 0, a thread resource can be allocated to the task and the quota decreased by 1, and this process repeats.
In one example, when the usable dynamic thread number is greater than 0, available thread resources exist in it, that is, in the shared thread pool, and step 203 is executed; when the usable dynamic thread number equals 0, no available thread resource exists in it, that is, in the shared thread pool, and step 205 is executed.
The initial value of the usable dynamic thread number was introduced above; for example, it may be 5. Each time a thread resource is allocated to a task according to the usable dynamic thread number (i.e. from the shared thread pool), the number is decreased by 1; when it reaches 0, no available thread resource exists in it, so no more thread resources are allocated to tasks from the usable dynamic thread number.
After a thread resource is allocated to a task according to the usable dynamic thread number, the resource is released once the task has been processed, and the number is increased by 1. Thus, whenever the usable dynamic thread number is greater than 0, a thread resource can be allocated to the task and the number decreased by 1, and this process repeats.
In step 203, thread resources are allocated to the task according to the number of available dynamic threads (i.e. one thread resource in the number of available dynamic threads is allocated to the task), and the number of available dynamic threads is decreased by 1.
Step 204, the task is processed through the thread resource allocated to the task.
Step 205, add the task to a waiting queue; once an available thread resource exists in the task fixed thread quota and/or the usable dynamic thread number, the task in the waiting queue is processed.
In an example, the execution sequence is only an example given for convenience of description, and the execution sequence between the steps may also be changed, and is not limited. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and the methods may include more or less steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; while various steps described in this specification may be combined into a single step in other embodiments.
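The flow of steps 201 to 205 can be sketched with simple counters. This is a hypothetical toy model (names such as `wait_queue` are illustrative, locking around the counters is omitted, and the per-task dynamic quota of the later embodiment is not yet included):

```python
from collections import deque

# Hypothetical state mirroring the example values used in the text.
fixed_quota = {"task1": 3, "task2": 2}   # task fixed thread quotas
usable_dynamic = 5                       # usable dynamic thread number
wait_queue = deque()

def allocate(task: str) -> str:
    """Steps 201-205: try the task fixed thread quota first, then the
    usable dynamic thread number, otherwise queue the task."""
    global usable_dynamic
    if fixed_quota[task] > 0:            # fixed quota has a free resource
        fixed_quota[task] -= 1
        return "fixed"
    if usable_dynamic > 0:               # steps 202/203: shared resource
        usable_dynamic -= 1
        return "dynamic"
    wait_queue.append(task)              # step 205: wait for a release
    return "queued"
```

In this sketch, task 1's first three requests land on its fixed quota, the next five draw on the shared number, and a ninth request is queued until a resource is released.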
On the basis of fig. 2A, referring to fig. 2B, it is a flowchart of another task processing method.
Step 211, receiving a processing request of a task, and determining a task fixed thread quota corresponding to the task.
Step 212, determine whether there is available thread resource in the task fixed thread quota.
If not, step 213 may be performed; if so, step 217 may be performed.
In step 213, it is determined whether there are available thread resources in the available dynamic thread count.
If so, step 214 may be performed. If not, step 216 may be performed.
Step 214, allocating thread resources for the task according to the number of available dynamic threads (i.e. allocating one thread resource in the number of available dynamic threads to the task), and subtracting 1 from the number of available dynamic threads.
Step 215, the task is processed through the thread resource allocated for the task.
Step 216, add the task to a waiting queue; once an available thread resource exists in the task fixed thread quota and/or the usable dynamic thread number, the task in the waiting queue is processed.
Step 217, allocating a thread resource to the task according to the task fixed thread quota (i.e. allocating one thread resource in the task fixed thread quota to the task), and subtracting 1 from the task fixed thread quota.
The processing flow of step 211 to step 217 is similar to the processing flow of step 201 to step 205, and is not described herein again. Further, after step 217, step 215 may also be performed.
In an example, the execution sequence is only an example given for convenience of description, and the execution sequence between the steps may also be changed in the application scenario, and is not limited.
Referring to fig. 2C, a flowchart of another task processing method is shown on the basis of fig. 2A/2B.
Step 221, receiving a processing request of a task, and determining a task fixed thread quota corresponding to the task.
Step 222, determining whether the task fixed thread quota has available thread resources.
If not, step 223 may be performed; if so, step 228 may be performed.
Step 223, determining whether there is an available thread resource in the task dynamic thread quota corresponding to the task. If so, step 224 may be performed; if not, step 227 may be performed.
At step 224, it is determined whether there are available thread resources in the usable dynamic thread number.
If so, step 225 may be performed; if not, step 227 may be performed.
Step 225, allocating thread resources for the task according to the number of available dynamic threads (i.e. allocating one thread resource in the number of available dynamic threads to the task), and subtracting 1 from the number of available dynamic threads.
The task is processed through the thread resources allocated for the task, step 226.
Step 227, add the task to a waiting queue; once an available thread resource exists in the task fixed thread quota and/or the usable dynamic thread number, the task in the waiting queue is processed.
Step 228, allocating a thread resource to the task according to the task fixed thread quota (i.e., allocating one thread resource in the task fixed thread quota to the task), and subtracting 1 from the task fixed thread quota.
The processing flow from step 221 to step 228 is similar to the processing flow from step 201 to step 205, and is not described herein again. Further, after step 228, step 226 is performed.
In one example, after a task has been processed by its allocated thread resource: if the resource was allocated from the task fixed thread quota, the task fixed thread quota may be increased by 1; if the resource was allocated according to the available dynamic thread count, the available dynamic thread count and the task dynamic thread quota corresponding to the task may each be increased by 1. Correspondingly, after a thread resource is allocated to the task according to the available dynamic thread count, the task dynamic thread quota may be decreased by 1.
In one example, in step 223, when the task dynamic thread quota is greater than 0, it indicates that there is available thread resource in the task dynamic thread quota; when the task dynamic thread quota is equal to 0, it indicates that no available thread resource exists in the task dynamic thread quota.
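The decision sequence of steps 221 to 228 can be sketched in Python with plain counters. The function and dictionary names are assumptions for illustration; the patent does not prescribe any particular implementation:

```python
def try_allocate(counters, task_id, wait_queue):
    """Allocation order of fig. 2C: fixed quota first, then the task's
    own dynamic quota, then the shared available dynamic thread count."""
    # Steps 222/228: prefer the task's exclusive fixed thread quota.
    if counters["fixed"].get(task_id, 0) > 0:
        counters["fixed"][task_id] -= 1
        return "fixed"
    # Steps 223-225: fall back to the shared pool, but only while this
    # task's dynamic thread quota permits holding another shared thread.
    if counters["task_dyn"].get(task_id, 0) > 0 and counters["shared"] > 0:
        counters["task_dyn"][task_id] -= 1
        counters["shared"] -= 1
        return "dynamic"
    # Step 227: no resource available anywhere, so the task waits.
    wait_queue.append(task_id)
    return None

def release(counters, task_id, source):
    # Bookkeeping described after step 228: return the thread resource
    # to whichever counter it was taken from.
    if source == "fixed":
        counters["fixed"][task_id] += 1
    elif source == "dynamic":
        counters["shared"] += 1
        counters["task_dyn"][task_id] += 1

# One task with 1 fixed thread, a dynamic quota of 1, and a shared pool of 1.
counters = {"fixed": {"taskA": 1}, "task_dyn": {"taskA": 1}, "shared": 1}
queue = []
print(try_allocate(counters, "taskA", queue))  # -> fixed
print(try_allocate(counters, "taskA", queue))  # -> dynamic
print(try_allocate(counters, "taskA", queue))  # -> None (task is queued)
```

The third request finds both the fixed quota and the shared pool exhausted, so the task lands in the wait queue, matching step 227.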
In an example, the execution order above is given only for convenience of description; in a given application scenario the order of the steps may be changed, and is not limited here.
Referring to fig. 2D, a flowchart of another task processing method is shown on the basis of fig. 2A/2B.
Step 231, receiving a processing request of a task, and determining a task fixed thread quota corresponding to the task.
Step 232, determine whether there are available thread resources in the task fixed thread quota.
If not, step 233 may be performed; if so, step 238 may be performed.
At step 233, it is determined whether there are available thread resources in the available dynamic thread count.
If so, step 234 may be performed; if not, step 237 may be performed.
Step 234, determine whether there is an available thread resource in the task dynamic thread quota corresponding to the task. If so, step 235 may be performed; if not, step 237 may be performed.
Step 235, allocating thread resources for the task according to the number of available dynamic threads (i.e. allocating one thread resource in the number of available dynamic threads to the task), and subtracting 1 from the number of available dynamic threads.
The task is processed by the thread resource allocated for the task, step 236.
Step 237, add the task to a wait queue until available thread resources exist in the task fixed thread quota and/or the available dynamic thread count.
Step 238, according to the task fixed thread quota, allocating a thread resource to the task (i.e., allocating one thread resource in the task fixed thread quota to the task), and subtracting 1 from the task fixed thread quota.
The processing flow of steps 231 to 238 is similar to the processing flow of steps 201 to 205, and is not described in detail herein. Further, after step 238, step 236 is performed.
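The flow of fig. 2D can be sketched the same way as fig. 2C, with only the order of the two dynamic checks swapped. As before, the names are illustrative assumptions:

```python
def try_allocate_2d(counters, task_id, wait_queue):
    # Fig. 2D uses the same counters as fig. 2C but examines the shared
    # available dynamic thread count (step 233) before the task's own
    # dynamic thread quota (step 234); only the check order differs.
    if counters["fixed"].get(task_id, 0) > 0:          # steps 232/238
        counters["fixed"][task_id] -= 1
        return "fixed"
    if counters["shared"] > 0:                         # step 233
        if counters["task_dyn"].get(task_id, 0) > 0:   # step 234
            counters["shared"] -= 1                    # step 235
            counters["task_dyn"][task_id] -= 1
            return "dynamic"
    wait_queue.append(task_id)                         # step 237
    return None

# Fixed quota exhausted, but the shared pool and the task's dynamic
# quota both have room, so the request succeeds via the shared pool.
counters = {"fixed": {"taskB": 0}, "task_dyn": {"taskB": 1}, "shared": 2}
print(try_allocate_2d(counters, "taskB", []))  # -> dynamic
```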
In an example, the execution order above is given only for convenience of description; in a given application scenario the order of the steps may be changed, and is not limited here.
Based on the same application concept as the method, an embodiment of the present application further provides a task processing device, as shown in fig. 3, which is a structural diagram of the task processing device, and the task processing device includes:
a receiving module 301, configured to receive a processing request of a task;
a determining module 302, configured to determine a task fixed thread quota corresponding to the task, and, when no available thread resources exist in the task fixed thread quota, to determine whether available thread resources exist in the available dynamic thread count; the task fixed thread quota comprises thread resources usable by the task, and the available dynamic thread count comprises thread resources shared by a plurality of tasks;
an allocating module 303, configured to, when there is an available thread resource in the available dynamic thread number, allocate a thread resource to the task according to the available dynamic thread number;
a processing module 304, configured to process the task through the thread resource allocated to the task.
The allocating module 303 is further configured to, when available thread resources exist in the task fixed thread quota, allocate a thread resource to the task according to the task fixed thread quota and subtract 1 from the task fixed thread quota. The processing module 304 is further configured to add the task to a wait queue when no available thread resources exist in the available dynamic thread count, and to subtract 1 from the available dynamic thread count after a thread resource has been allocated to the task from it. A task fixed thread quota greater than 0 indicates that available thread resources exist in the quota, and a quota equal to 0 indicates that none exist; likewise, an available dynamic thread count greater than 0 indicates that available thread resources exist in the count, and a count equal to 0 indicates that none exist.
The processing module 304 is further configured to, after the task is processed through the thread resource allocated to the task, if the thread resource is allocated to the task according to the task fixed thread quota, release the thread resource allocated to the task after the task is processed, and add 1 to the task fixed thread quota; if the thread resources are allocated to the task according to the number of the usable dynamic threads, releasing the thread resources allocated to the task after the task is processed, and adding 1 to the number of the usable dynamic threads.
The determining module 302 is further configured to determine, before determining whether available thread resources exist in the available dynamic thread count, whether available thread resources exist in a task dynamic thread quota corresponding to the task, and to proceed with that determination only when the task dynamic thread quota has available thread resources; the processing module is further configured to add the task to a wait queue when the task dynamic thread quota has no available thread resources. Alternatively,
the determining module 302 is further configured to determine whether an available thread resource exists in a task dynamic thread quota corresponding to the task before allocating a thread resource to the task according to the available dynamic thread number; the allocating module 303 is further configured to, when the task dynamic thread quota has available thread resources, allocate thread resources to the task according to the number of available dynamic threads; the processing module 304 is further configured to add the task to a waiting queue when the task dynamic thread quota does not have available thread resources.
The processing module 304 is further configured to, after allocating thread resources to the task according to the number of usable dynamic threads, subtract 1 from the task dynamic thread quota; the processing module 304 is further configured to add 1 to the task dynamic thread quota after the task is processed through the thread resource allocated to the task and the task processing is completed; when the task dynamic thread quota is larger than 0, indicating that available thread resources exist in the task dynamic thread quota; when the task dynamic thread quota is equal to 0, it indicates that no available thread resource exists in the task dynamic thread quota.
Based on the same application concept as the method, an embodiment of the present application further provides a task processing device, where the task processing device may include: receiver, processor, memory, transmitter, etc.; wherein,
the processor is used for receiving a processing request of a task and determining a task fixed thread quota corresponding to the task; if the available thread resources do not exist in the task fixed thread quota, judging whether the available thread resources exist in the available dynamic thread number or not; the task fixed thread quota comprises thread resources which can be used by the task, and the usable dynamic thread number comprises thread resources shared by a plurality of tasks; if yes, thread resources are distributed for the tasks according to the usable dynamic thread number; and processing the task through the thread resource allocated to the task.
Based on the same application concept as the method, the embodiment of the present application further provides a machine-readable storage medium, which can be applied to a task processing device, where the machine-readable storage medium stores several computer instructions, and the computer instructions, when executed, perform the following processes:
receiving a processing request of a task, and determining a task fixed thread quota corresponding to the task; if the available thread resources do not exist in the task fixed thread quota, judging whether the available thread resources exist in the available dynamic thread number or not; the task fixed thread quota comprises thread resources which can be used by the task, and the usable dynamic thread number comprises thread resources shared by a plurality of tasks; if yes, thread resources are distributed for the tasks according to the usable dynamic thread number; and processing the task through the thread resource allocated to the task.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (14)
1. A method for processing a task, the method comprising:
receiving a processing request of a task, and determining a task fixed thread quota corresponding to the task;
if no available thread resources exist in the task fixed thread quota, determining whether available thread resources exist in the available dynamic thread count, wherein the task fixed thread quota comprises thread resources usable by the task and the available dynamic thread count comprises thread resources shared by a plurality of tasks;
if so, allocating a thread resource to the task according to the available dynamic thread count;
and processing the task through the thread resource allocated to the task.
2. The method of claim 1,
after determining the task fixed thread quota corresponding to the task, the method further includes:
if the available thread resources exist in the task fixed thread quota, distributing the thread resources for the task according to the task fixed thread quota, and subtracting 1 from the task fixed thread quota;
when the task fixed thread quota is greater than 0, indicating that available thread resources exist in the task fixed thread quota; when the task fixed thread quota is equal to 0, it indicates that no available thread resource exists in the task fixed thread quota.
3. The method of claim 1, wherein after determining whether thread resources are available in the dynamic thread count, the method further comprises:
if no thread resources are available, the task is added to the wait queue.
4. The method of claim 1, wherein after allocating thread resources for the task according to the number of available dynamic threads, the method further comprises:
subtracting 1 from the available dynamic thread count; a count greater than 0 indicates that available thread resources exist in the available dynamic thread count, and a count equal to 0 indicates that no thread resources are available in it.
5. The method according to claim 1 or 2,
after the task is processed through the thread resources allocated to the task, the method further comprises:
if the thread resources are allocated to the task according to the task fixed thread quota, releasing the thread resources allocated to the task after the task processing is finished, and adding 1 to the task fixed thread quota;
if the thread resources are allocated to the task according to the number of the usable dynamic threads, releasing the thread resources allocated to the task after the task is processed, and adding 1 to the number of the usable dynamic threads.
6. The method of claim 1, wherein before determining whether thread resources are available in the dynamic thread count, the method further comprises:
judging whether available thread resources exist in a task dynamic thread quota corresponding to the task;
if so, performing the step of determining whether available thread resources exist in the available dynamic thread count;
if not, the task is added to the wait queue.
7. The method of claim 1, wherein before allocating thread resources for the task based on the number of available dynamic threads, the method further comprises:
judging whether available thread resources exist in a task dynamic thread quota corresponding to the task;
if so, performing the step of allocating a thread resource to the task according to the available dynamic thread count; if not, adding the task to the wait queue.
8. The method according to claim 6 or 7, wherein after allocating thread resources for the task according to the available dynamic thread number, the method further comprises:
subtracting 1 from the task dynamic thread quota;
after the task is processed through the thread resources allocated to the task, the method further comprises:
after the task processing is completed, adding 1 to the task dynamic thread quota;
when the task dynamic thread quota is larger than 0, indicating that available thread resources exist in the task dynamic thread quota; when the task dynamic thread quota is equal to 0, it indicates that no available thread resource exists in the task dynamic thread quota.
9. A task processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a processing request of a task;
the determining module is used for determining a task fixed thread quota corresponding to the task; when the available thread resources do not exist in the task fixed thread quota, judging whether the available thread resources exist in the available dynamic thread number or not; the task fixed thread quota comprises thread resources which can be used by the task, and the usable dynamic thread number comprises thread resources shared by a plurality of tasks;
the allocation module is used for allocating thread resources to the task according to the available dynamic thread number when the available thread resources exist in the available dynamic thread number;
and the processing module is used for processing the task through the thread resource distributed to the task.
10. The apparatus of claim 9,
the allocation module is further configured to, when available thread resources exist in the task fixed thread quota, allocate the thread resources to the task according to the task fixed thread quota, and subtract 1 from the task fixed thread quota;
the processing module is further used for adding the task to a waiting queue when no available thread resource exists in the available dynamic thread number; when available thread resources exist in the available dynamic thread number, after the thread resources are allocated to the task according to the available dynamic thread number, subtracting 1 from the available dynamic thread number;
when the task fixed thread quota is greater than 0, indicating that available thread resources exist in the task fixed thread quota; when the task fixed thread quota is equal to 0, indicating that no available thread resource exists in the task fixed thread quota; when the number of the usable dynamic threads is larger than 0, indicating that available thread resources exist in the number of the usable dynamic threads; when the number of usable dynamic threads is equal to 0, it indicates that there is no thread resource available in the number of usable dynamic threads.
11. The apparatus of claim 9 or 10,
the processing module is further configured to, after the task is processed through the thread resource allocated to the task, if the thread resource is allocated to the task according to the task fixed thread quota, release the thread resource allocated to the task after the task processing is completed, and add 1 to the task fixed thread quota; if the thread resources are allocated to the task according to the number of the usable dynamic threads, releasing the thread resources allocated to the task after the task is processed, and adding 1 to the number of the usable dynamic threads.
12. The apparatus of claim 9,
the determining module is further configured to determine whether an available thread resource exists in a task dynamic thread quota corresponding to the task before determining whether an available thread resource exists in the available dynamic thread number; when the task dynamic thread quota has available thread resources, judging whether the available thread resources exist in the available dynamic thread number or not; the processing module is further configured to add the task to a waiting queue when the task dynamic thread quota does not have available thread resources; or,
the determining module is further configured to determine whether an available thread resource exists in a task dynamic thread quota corresponding to the task before allocating the thread resource to the task according to the number of the available dynamic threads; the allocation module is further configured to allocate thread resources to the task according to the number of the usable dynamic threads when the task dynamic thread quota has available thread resources; the processing module is further configured to add the task to a wait queue when the task dynamic thread quota does not have available thread resources.
13. The apparatus of claim 12,
the processing module is further configured to subtract 1 from the task dynamic thread quota after allocating thread resources to the task according to the number of the usable dynamic threads;
the processing module is further configured to add 1 to the task dynamic thread quota after the task is processed through the thread resource allocated to the task and the task processing is completed;
when the task dynamic thread quota is larger than 0, indicating that available thread resources exist in the task dynamic thread quota; when the task dynamic thread quota is equal to 0, it indicates that no available thread resource exists in the task dynamic thread quota.
14. A task processing apparatus characterized in that the task processing apparatus comprises:
a processor, configured to receive a processing request of a task and determine a task fixed thread quota corresponding to the task; if no available thread resources exist in the task fixed thread quota, determine whether available thread resources exist in the available dynamic thread count, wherein the task fixed thread quota comprises thread resources usable by the task and the available dynamic thread count comprises thread resources shared by a plurality of tasks; if so, allocate a thread resource to the task according to the available dynamic thread count; and process the task through the thread resource allocated to the task.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710757902.3A CN109426561A (en) | 2017-08-29 | 2017-08-29 | A kind of task processing method, device and equipment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109426561A true CN109426561A (en) | 2019-03-05 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101009642A (en) * | 2006-12-31 | 2007-08-01 | 华为技术有限公司 | A resource allocation method and device based on the task packet |
| US20090165007A1 (en) * | 2007-12-19 | 2009-06-25 | Microsoft Corporation | Task-level thread scheduling and resource allocation |
| US20150121390A1 (en) * | 2013-10-24 | 2015-04-30 | International Business Machines Corporation | Conditional serialization to improve work effort |
| US20150339164A1 (en) * | 2009-12-23 | 2015-11-26 | Citrix Systems, Inc. | Systems and methods for managing spillover limits in a multi-core system |
| CN106095590A (en) * | 2016-07-21 | 2016-11-09 | 联动优势科技有限公司 | A kind of method for allocating tasks based on thread pool and device |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109976891A (en) * | 2019-03-28 | 2019-07-05 | 北京网聘咨询有限公司 | The server task processing method of task based access control thread configuration |
| CN109976891B (en) * | 2019-03-28 | 2020-11-03 | 北京网聘咨询有限公司 | Server task processing method based on task thread configuration |
| CN110109760A (en) * | 2019-05-10 | 2019-08-09 | 深圳前海达闼云端智能科技有限公司 | Memory resource control method and device |
| CN110109760B (en) * | 2019-05-10 | 2021-07-02 | 达闼机器人有限公司 | A memory resource control method and device |
| CN110457124A (en) * | 2019-08-06 | 2019-11-15 | 中国工商银行股份有限公司 | For the processing method and its device of business thread, electronic equipment and medium |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190305 |