Detailed Description
The service allocation method and apparatus provided in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Furthermore, the terms "including" and "having," and any variations thereof, as used in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may also include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The term "and/or" as used herein covers either or both of the two associated items.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
Based on the problems existing in the background art, embodiments of the present invention provide a service allocation method and apparatus. A service distribution device obtains computing power demand information of a target service, where the computing power demand information includes a target computing power corresponding to the target service. The service distribution device then determines, from a plurality of computing power nodes whose remaining computing power is greater than or equal to the target computing power, at least two candidate computing power nodes whose connection durations with the service distribution device are less than a connection duration threshold and whose reputations are greater than 0 (i.e., which satisfy a first preset condition). Finally, the service distribution device determines the respective credibilities of the at least two candidate computing power nodes and determines the computing power node with the highest credibility among them as the target computing power node for processing the target service.
In the embodiment of the present invention, based on the respective connection durations of the multiple computing power nodes with the service distribution device and the respective reputations of the multiple computing power nodes, the service distribution device can determine at least two candidate computing power nodes that are in a normal state and have relatively high reputations. Further, combining the respective credibilities of the at least two candidate computing power nodes, the service distribution device selects the candidate with the highest credibility and determines it as the target computing power node, so that the rationality of service distribution can be improved.
Further, the target computing power node determined in the embodiment of the present invention satisfies the first preset condition and has high credibility, so using the target computing power node to process the target service can improve the success rate of processing the target service.
The service allocation method provided in the embodiment of the present invention is applied to the service processing scenario shown in fig. 1, in which a device in a service distribution system 10 (i.e., a service distribution device 101) allocates a suitable computing power node to a target service. Specifically, when a user (or user equipment, UE) needs to process a certain service, the UE sends computing power demand information to the service distribution device 101 in the service distribution system 10; when the service distribution device 101 acquires the computing power demand information of the target service, it may determine a target computing power node for processing the target service from a data processing system 20. The data processing system 20 may include a plurality of computing power nodes, for example, a computing power node 201, a computing power node 202, and a computing power node 203, where a computing power node may be composed of one or more devices. Illustratively, as shown in fig. 1, the computing power node 201 includes a device 2011, a device 2012, and a device 2013; the computing power node 202 includes a device 2021 and a device 2022; and the computing power node 203 includes a device 2031, a device 2032, a device 2033, and a device 2034. In practical applications, the connections between the above-mentioned devices or service functions may be wireless connections; fig. 1 illustrates the connections between the devices with solid lines for the convenience of intuitively representing them.
Specifically, the computing power node 201, the computing power node 202, and the computing power node 203 are all connected to the service distribution device 101. The computing power node 201, the computing power node 202, or the computing power node 203 may be used to process services, i.e., to provide the user (or UE) with the computing power corresponding to a service. In this embodiment of the present invention, the service distribution device 101 may determine the target computing power node for processing the target service based on the connection duration between each computing power node (including the computing power node 201, the computing power node 202, and the computing power node 203) and the service distribution device 101, the reputation of each computing power node, and the credibility of each computing power node.
In an embodiment of the present invention, the computation nodes included in the data processing system 20 may be one or more of a terminal device, a Mobile Edge Computing (MEC) device, or a data center device.
Optionally, the data processing system may include one or more computing power nodes, and one computing power node may also include one or more devices. The embodiment of the present invention does not limit the number of computing power nodes in the data processing system or the number of devices in each computing power node.
An embodiment of the present invention provides a service allocation apparatus, where the service allocation apparatus may be a server. Fig. 2 is a schematic diagram of a hardware structure of the server for executing the service allocation method provided in the embodiment of the present invention. As shown in fig. 2, the server 30 may include a processor 301, a memory 302, a network interface 303, and the like.
The processor 301 is a core component of the server 30, and the processor 301 is configured to run an operating system of the server 30 and application programs (including a system application program and a third-party application program) on the server 30, so as to implement the service distribution method performed by the server 30.
In this embodiment, the processor 301 may be a Central Processing Unit (CPU), a microprocessor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which is capable of implementing or executing various exemplary logic blocks, modules, and circuits described in connection with the disclosure of the embodiment of the present invention; a processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a DSP and a microprocessor, or the like.
Optionally, the processor 301 of the server 30 includes one or more CPUs, which are single-core CPUs (single-CPUs) or multi-core CPUs (multi-CPUs).
The memory 302 includes, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical memory, or the like. The memory 302 holds the code for the operating system.
Optionally, the processor 301 implements the service allocation method in the embodiment of the present invention by reading instructions stored in the memory 302, or the processor 301 implements the service allocation method provided in the embodiment of the present invention by using instructions stored inside the processor. In the case that the processor 301 implements the service allocation method provided in the embodiment of the present invention by reading instructions stored in the memory 302, the memory 302 stores the instructions for implementing the service allocation method provided in the embodiment of the present invention.
The network interface 303 is a wired interface, such as a Fiber Distributed Data Interface (FDDI) interface or a Gigabit Ethernet (GE) interface. Alternatively, the network interface 303 is a wireless interface. The network interface 303 is used for the server 30 to communicate with other devices.
The memory 302 is used for storing the remaining computing power of each computing power node. Optionally, the memory 302 is also used for storing the connection duration between each computing power node and the service distribution device, and the like. The at least one processor 301 further executes the method described in the embodiments of the present invention according to the remaining computing power of each computing power node stored in the memory 302 and the connection duration between each computing power node and the service distribution device. For more details of how the processor 301 implements the above functions, reference is made to the following description of the method embodiments.
Optionally, the service distribution device further includes a bus 304, and the processor 301 and the memory 302 are connected to each other through the bus 304, or are connected to each other in other manners.
Optionally, the service distribution device further includes an input/output interface 305, where the input/output interface 305 is configured to connect with an input device and receive the computing power demand information of the target service input by a user through the input device. Input devices include, but are not limited to, a keyboard, a touch screen, a microphone, and the like. The input/output interface 305 is also configured to connect with an output device that outputs the result of the service allocation by the processor 301 (i.e., the target computing power node determined for processing the target service). Output devices include, but are not limited to, a display, a printer, and the like.
In the embodiment of the present invention, different services have different computing power requirements. Therefore, when a user (or user equipment) needs to process a certain service (the target service), the UE sends computing power demand information to the service distribution device, so that the service distribution device can determine the target computing power node for processing the target service based on the target computing power and other information corresponding to the target service included in the computing power demand information.
As shown in fig. 3, the service allocation method provided by the embodiment of the present invention may include S101-S104.
S101, the service distribution device acquires computing power demand information of a target service.
The computing power demand information of the target service includes the target computing power corresponding to the target service.
It should be understood that the target service includes, but is not limited to, image retrieval, image processing, and the like. The target computing power corresponding to the target service refers to the computing power that the computing power node processing the target service (i.e., the target computing power node) should provide.
S102, the service distribution equipment determines at least two candidate computational force nodes from the plurality of computational force nodes.
The remaining computing power of each of the plurality of computing power nodes is greater than or equal to the target computing power corresponding to the target service. The at least two candidate computing power nodes satisfy a first preset condition, where the first preset condition is that the connection duration between each of the at least two candidate computing power nodes and the service distribution device is less than a connection duration threshold, and the reputation of each of the at least two candidate computing power nodes is greater than 0.
It should be understood that, after acquiring the target computing power corresponding to the target service, the service distribution device may select, from the nodes included in the data processing system, a plurality of computing power nodes whose remaining computing power is greater than or equal to the target computing power (i.e., the computing power requirement of the target service), and further determine at least two candidate computing power nodes from the plurality of computing power nodes.
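As an illustrative sketch of this filtering step (node names and computing power values are hypothetical, not from the source):

```python
# Hypothetical remaining computing power per node, in FLOPS.
remaining_power = {"node1": 200, "node2": 180, "node3": 150}
target_power = 160  # target computing power of the target service

# Keep only nodes whose remaining computing power meets the demand.
eligible = [n for n, rem in remaining_power.items() if rem >= target_power]
```

The candidate set is then chosen from `eligible` according to the first preset condition described below.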
In this embodiment of the present invention, before step S102 (or S101), the service distribution device may perform authentication and certification on each computing power node (including the multiple computing power nodes), so that each computing power node establishes a connection with the service distribution device. The service distribution device may then distribute services to each of the computing power nodes.
It can be understood that, after a computing power node establishes a connection with the service distribution device, the service distribution device stores the computing power node's connection state information, where the connection state information includes the identifier of the computing power node, the time at which the computing power node established the connection with the service distribution device, and the current time. Table 1 below is an example of computing power node connection state information. In Table 1, the network access time of a computing power node is the time at which the computing power node established a connection with the service distribution device.
TABLE 1
In the embodiment of the present invention, the connection state information of a computing power node is used to determine the connection duration between the computing power node and the service distribution device, where the connection duration is the time difference between the time at which the computing power node established a connection with the service distribution device and the current time. After determining the respective connection durations of the plurality of computing power nodes, the service distribution device determines, according to a preset connection duration threshold, the candidate computing power nodes among them whose connection durations are less than the connection duration threshold. For example, in Table 1, the 5 computing power nodes correspond to the same connection duration threshold of 120 hours, and the service distribution device determines that the computing power nodes whose connection durations are less than the connection duration threshold are computing power node 1, computing power node 2, and computing power node 3.
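The connection-duration check described above can be sketched as follows (the node records and times are hypothetical; the 120-hour threshold is taken from the example in the text):

```python
from datetime import datetime

THRESHOLD_HOURS = 120  # example connection duration threshold

# Hypothetical connection state records: node id -> network access time
# (the time at which the node established a connection with the device).
access_times = {
    "node1": datetime(2024, 1, 10, 8, 0),
    "node2": datetime(2024, 1, 12, 8, 0),
    "node4": datetime(2024, 1, 1, 8, 0),
}
current_time = datetime(2024, 1, 13, 8, 0)

def connection_hours(access_time, now):
    # Connection duration = current time - network access time, in hours.
    return (now - access_time).total_seconds() / 3600

# Nodes whose connection duration is below the threshold remain candidates.
within_threshold = [node for node, t in access_times.items()
                    if connection_hours(t, current_time) < THRESHOLD_HOURS]
```

Here "node4" is filtered out because its connection has lasted 288 hours, exceeding the 120-hour threshold.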
In the embodiment of the present invention, after the service distribution device determines the respective connection durations of the plurality of computing power nodes and the corresponding connection duration thresholds, it may determine the candidate computing power nodes whose connection durations are less than the connection duration threshold.
Optionally, different computation force nodes may correspond to the same connection duration threshold, or may correspond to different connection duration thresholds, respectively, and the embodiment of the present invention is not limited specifically.
In the embodiment of the present invention, the reputation of a candidate computing power node satisfies:
C_n = C_an + C_bn - C_cn

where C_n represents the reputation of the candidate computing power node, C_an represents the initial reputation of the candidate computing power node, C_bn represents the gain reputation corresponding to the candidate computing power node successfully processing services, and C_cn represents the reduction reputation corresponding to the candidate computing power node failing to process services.
It should be understood that each computing power node corresponds to a basic reputation, i.e., the initial reputation, when it leaves the factory (i.e., after production is completed); in the embodiment of the present invention, the initial reputation of each computing power node is the same. When a computing power node successfully processes a service, its reputation increases; conversely, when a computing power node fails to process a service, its reputation decreases accordingly.
Specifically, the gain reputation corresponding to one candidate computing power node successfully processing services satisfies:

C_bn = g × Σ_{j=1}^{i} (ω_j / ω_n)

where C_bn represents the gain reputation corresponding to the candidate computing power node successfully processing services, i represents that the candidate computing power node has successfully processed i services within a preset time, ω_j represents the computing power corresponding to the j-th of the i services, ω_n represents the computing power of the candidate computing power node, and g is a gain factor.
Correspondingly, the reduction reputation corresponding to one candidate computing power node failing to process services satisfies:

C_cn = h × Σ_{y=1}^{x} (ω_y / ω_n)

where C_cn represents the reduction reputation corresponding to the candidate computing power node failing to process services, x represents that the candidate computing power node has failed to process x services within the preset time, ω_y represents the computing power corresponding to the y-th of the x services, ω_n represents the computing power of the candidate computing power node, and h is a reduction factor.
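A minimal sketch of the reputation model above, assuming the gain and reduction terms are sums of per-service computing power ratios scaled by g and h (the exact formulas are not fully reproduced in the text, so this reading is inferred from the variable definitions):

```python
def gain_reputation(success_powers, node_power, g):
    # C_bn: assumed to accumulate g * (omega_j / omega_n) over the i
    # services successfully processed within the preset time.
    return g * sum(w / node_power for w in success_powers)

def loss_reputation(failed_powers, node_power, h):
    # C_cn: assumed to accumulate h * (omega_y / omega_n) over the x
    # services that failed within the preset time.
    return h * sum(w / node_power for w in failed_powers)

def reputation(initial, gained, lost):
    # C_n = C_an + C_bn - C_cn
    return initial + gained - lost

# A node with initial reputation 100 that successfully processed one
# 200-FLOPS service on a 200-FLOPS node, with g = 40 and h = 20.
c_n = reputation(100,
                 gain_reputation([200], 200, 40),
                 loss_reputation([], 200, 20))
```

With no failed services, the node's reputation rises from its initial value by the full gain term.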
Table 2 below is an example of the services successfully processed by each computing power node within a preset time and the services it failed to process within the preset time. As shown in Table 2, when i is 0, the computing power node has not successfully processed any service within the preset time, and when x is 0, the computing power node has not failed to process any service within the preset time.
TABLE 2
Based on Table 2, the service distribution device may determine the gain reputation and the reduction reputation corresponding to each of the 5 computing power nodes.
Illustratively, assume the computing power (i.e., ω_n) corresponding to each of the 5 computing power nodes shown in Table 2 is the same, namely 200 FLOPS. Assume also that the initial reputations of the 5 computing power nodes are all 100, g is 40, and h is 20. Combining the above calculation formula for the reputation of a computing power node, the service distribution device can determine the respective reputations of the 5 computing power nodes, as shown in Table 3 below.
TABLE 3

| Computing power node | Reputation of computing power node |
| --- | --- |
| Computing power node 1 | 100 |
| Computing power node 2 | 70 |
| Computing power node 3 | -97.5 |
| Computing power node 4 | 105 |
| Computing power node 5 | -1100 |
So far, the service distribution device determines that computing power node 1, computing power node 2, and computing power node 4 are the computing power nodes whose reputations are greater than 0.
With reference to the examples in Table 1 and Table 3, the service distribution device determines that, among the 5 computing power nodes, computing power node 1 and computing power node 2 satisfy the first preset condition, that is, computing power node 1 and computing power node 2 are the candidate computing power nodes.
Optionally, in another implementation manner, in a case that a first computing power node does not satisfy the first preset condition, the service distribution device performs authentication and certification on the first computing power node, so that the first computing power node and the service distribution device reestablish a connection.
Wherein the first computational power node is one of the plurality of computational power nodes.
With reference to the description of the foregoing embodiments, it should be understood that when the connection duration between the first computing power node and the service distribution device is greater than or equal to the connection duration threshold, the connection has lasted too long, and it may no longer be possible to determine whether the first computing power node is in a normal state (i.e., whether it has security problems such as a DoS attack or password leakage). For example, when the first computing power node is attacked by an attacker, it needs to continuously respond to request packets sent by the attacker, which occupies a large amount of memory, so that it may fail to respond when it receives a real processing request. In this case, the service distribution device re-authenticates the first computing power node so that the first computing power node and the service distribution device reestablish a connection. And/or, when the reputation of the first computing power node is less than or equal to 0, the number of services the first computing power node has failed to process within the preset time exceeds the number it has successfully processed. Because of problems such as low device performance, low device resource utilization, or network delay in the first computing power node, the first computing power node may be more likely to fail to process services, or may be able to process them successfully but not complete them within the preset time; the service distribution device therefore re-authenticates the first computing power node so that the first computing power node and the service distribution device reestablish a connection.
It should be noted that, after the first computing power node reestablishes the connection with the service distribution device, the first computing power node may participate in the next service distribution. The service distribution device updates the stored connection state information of the computing power node, updating the time at which the computing power node established the connection with the service distribution device to the current time (i.e., the time of reestablishing the connection); at this point, the reputation of the first computing power node is equal to its initial reputation.
In the embodiment of the present invention, the service distribution device consumes resources each time it performs authentication and certification on a computing power node. Therefore, authenticating and certifying the first computing power node only when it does not satisfy the first preset condition can ensure the normal state of the first computing power node and improve the success rate of processing services, while avoiding the resource waste caused by frequent authentication and certification.
S103, the service distribution device determines the respective credibilities of the at least two candidate computing power nodes.
It should be understood that the credibility of a candidate computing power node may be determined according to the computing power assignability rate of the candidate computing power node, the service quality of the candidate computing power node, the reputation of the candidate computing power node, and the like.
Taking the candidate computing power node n as an example, in an implementation manner of the embodiment of the present invention, the credibility of a candidate computing power node satisfies:

Q_n = d × r_n + e × p_n + f × (C_n / C_an)

where Q_n represents the credibility of the candidate computing power node, r_n represents the computing power assignability rate of the candidate computing power node, p_n represents the service quality of the candidate computing power node, C_n represents the reputation of the candidate computing power node, C_an represents the initial reputation of the candidate computing power node, d, e and f are constants, and d + e + f = 1.
Specifically, the computing power assignable rate of the candidate computing power node is a ratio of the remaining computing power of the candidate computing power node to the total computing power of the candidate computing power node, and the service quality of the candidate computing power node is a product of a non-failure rate of the device included in the candidate computing power node and a non-failure rate of the link included in the candidate computing power node.
It should be understood that the remaining computing power of a candidate computing power node is the difference between the total computing power of the candidate computing power node and the used computing power of the candidate computing power node. The computing power assignability rate of a candidate computing power node satisfies:

r_n = ω_n,remaining / ω_n

ω_n,remaining = ω_n - ω_n,used

where r_n represents the computing power assignability rate of the candidate computing power node, ω_n represents the total computing power of the candidate computing power node (i.e., the computing power of the candidate computing power node), ω_n,remaining represents the remaining computing power of the candidate computing power node, and ω_n,used represents the used computing power of the candidate computing power node.
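The assignability-rate computation follows directly from the two relations above; this sketch uses values consistent with the worked example later in the text, where node 2 has 180 FLOPS remaining out of a 200 FLOPS total:

```python
def assignable_ratio(total_power, used_power):
    # omega_n,remaining = omega_n - omega_n,used
    remaining = total_power - used_power
    # r_n = omega_n,remaining / omega_n
    return remaining / total_power

r2 = assignable_ratio(200, 20)  # node 2: 180 FLOPS remaining of 200 total
```

A fully idle node (no used computing power) yields an assignability rate of 1.0, i.e., 100%.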
In conjunction with the description of the above embodiments, it should be understood that a computing power node may include one or more devices (or servers), and the computing power of the computing power node is the sum of the computing powers of the one or more devices.
Moreover, the service quality of a computing power node is determined by the non-failure rates of the one or more devices included in the computing power node and the non-failure rates of the links between those devices.
In the following, taking the candidate computational power node n as an example, the computational power of the candidate computational power node n satisfies:
ωn=ωn1+ωn2+...+ωnm
where ω_n represents the computing power of the candidate computing power node n, ω_n1 to ω_nm represent the computing powers of the devices included in the candidate computing power node n, m represents the number of devices included in the candidate computing power node n, and m is an integer greater than or equal to 1.
Moreover, for the candidate computing power node n, any two devices inside the candidate computing power node n have a connection relation, so that m(m-1)/2 links exist inside the candidate computing power node n, and the service quality of the candidate computing power node satisfies:

p_n = (p_n1 × p_n2 × ... × p_nm) × (l_n1 × l_n2 × ... × l_nM)

where p_n represents the service quality of the candidate computing power node n, p_nm represents the non-failure rate of the m-th device in the candidate computing power node n, l_nM represents the non-failure rate of the M-th of the m(m-1)/2 links in the candidate computing power node n, and m is an integer greater than or equal to 1.
Specifically, the non-failure rate of the mth device in the candidate computation force node n satisfies:
P_nm = S_normal / S_work × P_nm,initial

S_work = S_normal + S_fault

where P_nm represents the non-failure rate of the m-th device, S_normal represents the time during which the m-th device operates normally, S_work represents the total operating time of the m-th device, P_nm,initial represents the initial non-failure rate of the m-th device, S_fault represents the fault time of the m-th device (i.e., the time during which the m-th device stops operating normally due to a fault), and m is an integer greater than or equal to 1.
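A direct sketch of the device non-failure-rate relations above (the times are hypothetical; any consistent time unit works, since only the ratio matters):

```python
def non_failure_rate(normal_time, fault_time, initial_rate):
    # S_work = S_normal + S_fault
    total_time = normal_time + fault_time
    # P_nm = S_normal / S_work * P_nm,initial
    return normal_time / total_time * initial_rate

# A device that ran normally for 99 hours out of 100 total operating
# hours, starting from an initial non-failure rate of 1.0.
p = non_failure_rate(99, 1, 1.0)
```

The same computation applies to links, using the primed quantities S'_normal, S'_work, and the initial link non-failure rate.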
Similarly, the method for determining the non-failure rate of a link in the candidate computing power node n is similar to the method for determining the non-failure rate of a device in the candidate computing power node n; one link is taken as an example below.
Specifically, the non-failure rate of the M-th link in the candidate computing power node n satisfies:

l_nM = S'_normal / S'_work × l_nM,initial

S'_work = S'_normal + S'_fault

where l_nM represents the non-failure rate of the M-th link, S'_normal represents the time during which the M-th link operates normally, S'_work represents the total operating time of the M-th link, l_nM,initial represents the initial non-failure rate of the M-th link, S'_fault represents the fault time of the M-th link (i.e., the time during which the M-th link stops operating normally due to a fault), and M is an integer, M = 1, 2, …, m(m-1)/2.
it should be noted that, when a candidate computational power node only includes one device, the service quality of the candidate computational power node is independent of the link, and the service quality of the candidate computational power node is the non-failure rate of the device.
In connection with the above example in S102, it is assumed that the remaining computing power of computing power node 1 is 200 FLOPS and the remaining computing power of computing power node 2 is 180 FLOPS. The service distribution device determines that the computing power assignability rate r_1 of computing power node 1 is 100% and the computing power assignability rate r_2 of computing power node 2 is 90%.
Illustratively, as shown in fig. 4, computing power node 1 includes 2 devices and 1 link, namely device 11, device 12, and link 1; computing power node 2 includes 3 devices and 3 links, namely device 21, device 22, device 23, link 1, link 2, and link 3. Assume that the non-failure rates of the 2 devices included in computing power node 1 are the same as the non-failure rates of the 3 devices included in computing power node 2, all being 99%, and that the non-failure rate of the 1 link included in computing power node 1 is the same as the non-failure rates of the 3 links included in computing power node 2, all being 98%. The service distribution device determines that the service quality p_1 of computing power node 1 is 96.05% and the service quality p_2 of computing power node 2 is 94.12%.
As can be seen from the above example in Table 2, the initial reputation of computing power node 1 is the same as the initial reputation of computing power node 2, both being 100; the reputation of computing power node 1 (i.e., C_1) is 100, and the reputation of computing power node 2 (i.e., C_2) is 70.
Further, assuming that d is 0.4, e is 0.1, and f is 0.5, the service distribution device determines that the credibility of computing power node 1 is 0.996 and the credibility of computing power node 2 is 0.811.
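Under a weighted-sum reading of the credibility formula, Q_n = d × r_n + e × p_n + f × (C_n / C_an) with d + e + f = 1 (an assumption inferred from the variable definitions), the 0.996 figure for computing power node 1 is reproduced:

```python
def credibility(r, p, c, c_initial, d=0.4, e=0.1, f=0.5):
    # Assumed form: Q_n = d*r_n + e*p_n + f*(C_n / C_an), d + e + f = 1.
    return d * r + e * p + f * (c / c_initial)

# Node 1: assignability rate 100%, service quality 96.05%,
# reputation 100 with initial reputation 100.
q1 = credibility(1.0, 0.9605, 100, 100)  # approximately 0.996
```

The constants d, e, and f weight the assignability rate, the service quality, and the normalized reputation respectively.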
S104, the service distribution device determines the computing power node with the highest credibility among the at least two candidate computing power nodes as the target computing power node for processing the target service.
Illustratively, in conjunction with the example in S103 above, the service distribution device determines computing power node 1 as the target computing power node for processing the target service.
The embodiments of the present invention provide a service distribution method and apparatus, in which a service distribution device obtains computing power demand information of a target service, where the computing power demand information includes a target computing power corresponding to the target service. The service distribution device then determines, from multiple computing power nodes whose remaining computing power is greater than or equal to the target computing power, at least two candidate computing power nodes whose connection duration with the service distribution device is less than a connection duration threshold and whose reputation is greater than 0 (that is, nodes that satisfy a first preset condition). The service distribution device determines the respective trustworthiness of the at least two candidate computing power nodes and determines the computing power node with the highest trustworthiness among them as the target computing power node for processing the target service. In the embodiments of the present invention, the service distribution device can, based on the respective connection durations and reputations of the multiple computing power nodes, determine at least two candidate computing power nodes that are in a normal state and have higher reputation; further, combining the respective trustworthiness of the at least two candidate computing power nodes, the service distribution device selects the computing power node with the highest trustworthiness and determines it as the target computing power node, so that the rationality of service distribution can be improved.
Further, the target computing power node determined in the embodiments of the present invention satisfies the first preset condition and has high trustworthiness, and using the target computing power node to process the target service can improve the success rate of processing the target service.
In the embodiments of the present invention, the service distribution device and the like may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical division of functions; other division manners are possible in actual implementation.
In the case of dividing functional modules by corresponding functions, fig. 5 shows a schematic diagram of a possible structure of the service distribution apparatus in the foregoing embodiments. As shown in fig. 5, the service distribution apparatus 40 may include: an obtaining module 401 and a determining module 402.
The obtaining module 401 is configured to obtain computing power requirement information of a target service, where the computing power requirement information includes a target computing power corresponding to the target service.
The determining module 402 is configured to determine at least two candidate computing power nodes from multiple computing power nodes, where the remaining computing power of each of the multiple computing power nodes is greater than or equal to the target computing power corresponding to the target service, and the at least two candidate computing power nodes satisfy a first preset condition, the first preset condition being that the connection duration between the at least two candidate computing power nodes and the service distribution apparatus is less than a connection duration threshold and the reputation of the at least two candidate computing power nodes is greater than 0; to determine the respective trustworthiness of the at least two candidate computing power nodes; and to determine the computing power node with the highest trustworthiness among the at least two candidate computing power nodes as the target computing power node for processing the target service.
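The filtering and selection performed by the determining module 402 can be sketched as follows; the data class, field names and function name are hypothetical, introduced only to illustrate the described conditions:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    remaining_power: float      # remaining computing power of the node
    connection_duration: float  # time connected to the service distribution device
    reputation: float           # reputation Cn of the node
    trustworthiness: float      # trustworthiness Qn of the node

def select_target_node(nodes, target_power, duration_threshold):
    """Keep nodes whose remaining power covers the target and which satisfy
    the first preset condition (duration below threshold, reputation > 0),
    then pick the candidate with the highest trustworthiness."""
    candidates = [
        n for n in nodes
        if n.remaining_power >= target_power
        and n.connection_duration < duration_threshold
        and n.reputation > 0
    ]
    if len(candidates) < 2:
        return None  # the embodiment assumes at least two candidates
    return max(candidates, key=lambda n: n.trustworthiness)

nodes = [
    Node("node1", 200, 5, 100, 0.996),
    Node("node2", 180, 5, 70, 0.811),
    Node("node3", 300, 20, 100, 0.999),  # excluded: duration over threshold
]
best = select_target_node(nodes, target_power=150, duration_threshold=10)
print(best.name)  # node1
```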
Optionally, the reputation of a candidate computing power node satisfies:

Cn = Can + Cbn - Ccn

where Cn represents the reputation of the candidate computing power node, Can represents the initial reputation of the candidate computing power node, Cbn represents the reputation gained when the candidate computing power node successfully processes a service, and Ccn represents the reputation deducted when the candidate computing power node fails to process a service.
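A minimal sketch of this reputation update; the gain and deduction values (20 and 50) are hypothetical, chosen only so the result matches the reputation of 70 attributed to computing power node 2 in the earlier example:

```python
def reputation(initial, gained, lost):
    """Cn = Can + Cbn - Ccn: initial reputation, plus reputation gained on
    successful processing, minus reputation deducted on failures."""
    return initial + gained - lost

# Hypothetical history for node 2: initial 100, gained 20, deducted 50.
print(reputation(100, 20, 50))  # 70
```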
Optionally, the trustworthiness of a candidate computing power node satisfies:

Qn = d × rn + e × pn + f × (Cn / Can)

where Qn represents the trustworthiness of the candidate computing power node, rn represents the computing power assignability ratio of the candidate computing power node, pn represents the quality of service of the candidate computing power node, Cn represents the reputation of the candidate computing power node, Can represents the initial reputation of the candidate computing power node, and d, e and f are constants satisfying d + e + f = 1. The computing power assignability ratio of the candidate computing power node is the ratio of the remaining computing power of the candidate computing power node to the total computing power of the candidate computing power node, and the quality of service of the candidate computing power node is the product of the non-failure rates of the devices included in the candidate computing power node and the non-failure rates of the links included in the candidate computing power node.
Optionally, the service distribution apparatus 40 further includes a storage module 403.
The storage module 403 is configured to store connection state information of a computing power node, where the connection state information includes an identifier of the computing power node, the time at which the connection between the computing power node and the service distribution apparatus was established, and the current time; the connection state information is used to determine the connection duration between the computing power node and the service distribution apparatus.
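A minimal sketch of how the stored connection state information yields the connection duration; the dictionary keys and function name are assumptions, not part of the embodiment:

```python
import time

# Connection state information as the storage module 403 might keep it.
connection_state = {
    "node_id": "node-1",                 # identifier of the computing power node
    "established_at": time.time() - 42,  # time the connection was established
}

def connection_duration(state, now=None):
    """Connection duration = current time - time the connection was established."""
    now = time.time() if now is None else now
    return now - state["established_at"]
```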
Optionally, the service distribution apparatus 40 further includes an authentication module 404.
The authentication module 404 is configured to authenticate and authorize a first computing power node when the first computing power node does not satisfy the first preset condition, so that the first computing power node reestablishes a connection with the service distribution apparatus, where the first computing power node is one of the multiple computing power nodes.
In the case of an integrated unit, fig. 6 shows a schematic diagram of a possible structure of the service distribution apparatus in the above embodiments. As shown in fig. 6, the service distribution apparatus 50 may include: a processing module 501 and a communication module 502. The processing module 501 may be configured to control and manage the actions of the service distribution apparatus 50; for example, the processing module 501 may be configured to support the service distribution apparatus 50 in executing S102, S103 and S104 in the above method embodiments. The communication module 502 may be configured to support the service distribution apparatus 50 in communicating with other entities; for example, the communication module 502 may be configured to support the service distribution apparatus 50 in executing S101 in the above method embodiment. Optionally, as shown in fig. 6, the service distribution apparatus 50 may further include a storage module 503 for storing program codes and data of the service distribution apparatus 50.
The processing module 501 may be a processor or a controller (for example, the processor 301 shown in fig. 2). The communication module 502 may be a transceiver, a transceiver circuit, or a communication interface, etc. (e.g., may be the network interface 303 as shown in fig. 2 described above). The storage module 503 may be a memory (e.g., may be the memory 302 described above with reference to fig. 2).
When the processing module 501 is a processor, the communication module 502 is a transceiver, and the storage module 503 is a memory, the processor, the transceiver, and the memory may be connected by a bus. The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented using a software program, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the invention are wholly or partially effected. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.