CN116033025A - Distribution network automation computing task scheduling method and system based on cloud-edge collaboration
- Publication number: CN116033025A
- Application number: CN202211688277.9A
- Authority: CN (China)
- Prior art keywords: node, cloud, delay, edge, real
- Prior art date: 2022-12-27
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Description
Technical Field
The present invention relates to the technical field of power grid dispatching, and in particular to a method and system for scheduling distribution network automation computing tasks based on cloud-edge collaboration.
Background Art
Distribution network automation services are essential to the safe and economical operation of the distribution network. A cloud-edge collaborative system can provide highly accessible computing resources in large aggregate quantity, effectively supporting the automation services of the new-type power system against a background of explosive growth in connected objects, increasingly complex operating scenarios, and continuously expanding task scale. During execution, the various automation services are mapped to computing services in the cloud-edge collaborative system; to guarantee their real-time requirements, sufficient computing resources must be allocated to these computing services.

Traditional distribution network automation systems do not adopt a cloud-edge architecture: the subsystems are mutually independent and vertically isolated, and there is no mechanism for sharing computing resources. Computing service requests are therefore executed within their own subsystems, and scheduling of computing services is not a concern.

Existing general-purpose cloud-edge collaborative scheduling methods are mostly designed for Internet application scenarios and focus on metrics such as call loss rate and load balancing rate. On the one hand, they fail to meet the high real-time requirements of some distribution network automation services; on the other hand, they lack a distributed scheduling method adapted to the structure of the distribution network.
Summary of the Invention
The present invention provides a method and system for scheduling distribution network automation computing tasks based on cloud-edge collaboration, solving the technical problems that existing cloud-edge collaborative scheduling methods fail to meet the high real-time requirements of some distribution network automation services and lack a distributed scheduling method adapted to the structure of the distribution network.
In view of this, a first aspect of the present invention provides a method for scheduling distribution network automation computing tasks based on cloud-edge collaboration, applied to a cloud-edge collaborative system. The cloud-edge collaborative system comprises a cloud node, a core communication network, an edge layer, and a plurality of power terminal devices, wherein the edge layer comprises a plurality of edge nodes in one-to-one correspondence with the plurality of power terminal devices; the edge nodes are communicatively connected with one another; the edge layer is communicatively connected to the cloud node through the core communication network; and each power terminal device is communicatively connected to its corresponding edge node. The method comprises the following steps:
S1. A power terminal device sends a service instance request message to its corresponding edge node, and that edge node is initialized as the source node;

S2. The source node parses the service instance request message to obtain its real-time requirement level;

S3. Based on a preset scheduling program, the optimal node is obtained as the execution node according to the real-time requirement level, and the service instance request message is executed by the execution node.
Preferably, the service instance request message contains a real-time requirement level, the levels being denoted 0, 1, and 2 from highest to lowest.
Preferably, step S3 specifically comprises:

S301. The source node identifies the real-time requirement level. If the level is 0, the source node serves as the execution node and executes the service instance request message locally; if the level is 1 or 2, step S302 is executed;

S302. The source node calculates its expected execution delay according to a preset expected-delay calculation formula, and sends the service instance request message, the real-time requirement level, and the source node's expected execution delay to the cloud node;

S303. The cloud node calculates its own expected execution delay according to the preset expected-delay calculation formula;

S304. The cloud node identifies the real-time requirement level. If the level is 1, step S305 is executed; if the level is 2, step S306 is executed;
S305. Single-cloud/single-edge collaborative scheduling is started, which specifically comprises: comparing, by means of the optimization problem below, the expected execution delays of the service instance request message at the source node and at the cloud node, taking the node with the smallest expected execution delay as the execution node, and executing the service instance request message on the execution node:

min_i E[l(g,i)]
s.t. i = FN_g or i = Cld

where i is a node, E[l(g,i)] is the expected execution delay, g is the service instance request message, FN_g is the source node, and Cld is the cloud node;
S306. Global cloud-edge collaborative scheduling is started, which specifically comprises: the cloud node sends computation requests to the other edge nodes; each of the other edge nodes calculates its expected execution delay according to the preset expected-delay calculation formula and sends it to the cloud node;

S307. The cloud node compares, according to the above optimization problem, the expected execution delays of the service instance request message at the source node, the cloud node, and the other edge nodes, takes the node with the smallest expected execution delay as the execution node, and executes the service instance request message on the execution node.
Preferably, the preset expected-delay calculation formula is:

E[l(g,i)] = l_sch(g,i) + E[l_que(g,i)] + l_tr(g,i) + l_cpt(g,i)

where l_sch(g,i) is the scheduling delay, l_tr(g,i) is the transmission delay, E[l_que(g,i)] is the expected queuing delay, and l_cpt(g,i) is the computation delay. The scheduling delay is determined by Emer_G(g), the real-time requirement level of the service instance request message, and ε, the delay of a single communication. The transmission delay is determined by D_G(g), the data volume of the service instance request message, and the transmission speed between the source node and node i. The computation delay is determined by w_G(g), the computational load of the service instance request message, and c_i, the computing capacity of node i. The expected queuing delay is determined by τ, the unit time, the offloading coefficient and its expectation, and E[n_i,G], the expected number of requests of service type G received within a unit time τ.
In a second aspect, the present invention provides a distribution network automation computing task scheduling system based on cloud-edge collaboration, applied to a cloud-edge collaborative system. The cloud-edge collaborative system comprises a cloud node, a core communication network, an edge layer, and a plurality of power terminal devices, wherein the edge layer comprises a plurality of edge nodes in one-to-one correspondence with the plurality of power terminal devices; the edge nodes are communicatively connected with one another; the edge layer is communicatively connected to the cloud node through the core communication network; and each power terminal device is communicatively connected to its corresponding edge node. The scheduling system comprises:
a request sending module, configured to send a service instance request message from a power terminal device to its corresponding edge node, and to initialize that edge node as the source node;

a parsing module, configured to parse the service instance request message at the source node to obtain its real-time requirement level;

a scheduling module, configured to obtain, based on a preset scheduling program and according to the real-time requirement level, the optimal node as the execution node, and to execute the service instance request message by means of the execution node.
Preferably, the service instance request message contains a real-time requirement level, the levels being denoted 0, 1, and 2 from highest to lowest.
Preferably, the scheduling module specifically comprises:

a first scheduling module, configured to identify the real-time requirement level at the source node; if the level is 0, the source node serves as the execution node and executes the service instance request message locally; if the level is 1 or 2, the first calculation module is executed;

the first calculation module, configured to calculate the source node's expected execution delay according to a preset expected-delay calculation formula, and to send the service instance request message, the real-time requirement level, and the source node's expected execution delay to the cloud node;

a second calculation module, configured to calculate the cloud node's expected execution delay according to the preset expected-delay calculation formula;

a scheduling judgment module, configured to identify the real-time requirement level at the cloud node; if the level is 1, the second scheduling module is executed; if the level is 2, the third scheduling module is executed;
the second scheduling module, configured to start single-cloud/single-edge collaborative scheduling, which specifically comprises: comparing, by means of the optimization problem below, the expected execution delays of the service instance request message at the source node and at the cloud node, taking the node with the smallest expected execution delay as the execution node, and executing the service instance request message on the execution node:

min_i E[l(g,i)]
s.t. i = FN_g or i = Cld

where i is a node, E[l(g,i)] is the expected execution delay, g is the service instance request message, FN_g is the source node, and Cld is the cloud node;
the third scheduling module, configured to start global cloud-edge collaborative scheduling, which specifically comprises: sending computation requests from the cloud node to the other edge nodes, having each of the other edge nodes calculate its expected execution delay according to the preset expected-delay calculation formula, and sending those expected execution delays to the cloud node;

an execution module, configured to compare, at the cloud node and according to the above optimization problem, the expected execution delays of the service instance request message at the source node, the cloud node, and the other edge nodes, to take the node with the smallest expected execution delay as the execution node, and to execute the service instance request message on the execution node.
Preferably, the preset expected-delay calculation formula is:

E[l(g,i)] = l_sch(g,i) + E[l_que(g,i)] + l_tr(g,i) + l_cpt(g,i)

where l_sch(g,i) is the scheduling delay, l_tr(g,i) is the transmission delay, E[l_que(g,i)] is the expected queuing delay, and l_cpt(g,i) is the computation delay. The scheduling delay is determined by Emer_G(g), the real-time requirement level of the service instance request message, and ε, the delay of a single communication. The transmission delay is determined by D_G(g), the data volume of the service instance request message, and the transmission speed between the source node and node i. The computation delay is determined by w_G(g), the computational load of the service instance request message, and c_i, the computing capacity of node i. The expected queuing delay is determined by τ, the unit time, the offloading coefficient and its expectation, and E[n_i,G], the expected number of requests of service type G received within a unit time τ.
It can be seen from the above technical solutions that the present invention has the following advantages:
By applying a cloud-edge collaborative system, the present invention establishes communication connections among the cloud node, the plurality of edge nodes, and the plurality of power terminal devices. A power terminal device sends a service instance request message to its corresponding edge node, and the message is parsed to obtain its real-time requirement level, so that the differences in the real-time requirements of distribution network computing services are taken into account. Based on a preset scheduling program, the optimal node is obtained as the execution node according to the real-time requirement level, and the service instance request message is executed by the execution node. This provides a distributed structure adapted to the structure of the distribution network, meets the high real-time requirements of some distribution network automation services, avoids a high service timeout rate, and improves the resource utilization of the cloud and edge nodes.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the cloud-edge collaborative system provided by an embodiment of the present invention;

FIG. 2 is a flow chart of a method for scheduling distribution network automation computing tasks based on cloud-edge collaboration provided by an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a system for scheduling distribution network automation computing tasks based on cloud-edge collaboration provided by an embodiment of the present invention.
Detailed Description
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The present invention provides a method for scheduling distribution network automation computing tasks based on cloud-edge collaboration, applied to a cloud-edge collaborative system. As shown in FIG. 1, the cloud-edge collaborative system includes a cloud node, a core communication network, an edge layer, and a plurality of power terminal devices, wherein the edge layer includes a plurality of edge nodes in one-to-one correspondence with the plurality of power terminal devices; the edge nodes are communicatively connected with one another; the edge layer is communicatively connected to the cloud node through the core communication network; and each power terminal device is communicatively connected to its corresponding edge node.
As shown in FIG. 2, the method for scheduling distribution network automation computing tasks based on cloud-edge collaboration includes the following steps:
S1. A power terminal device sends a service instance request message to its corresponding edge node, and that edge node is initialized as the source node.

Here, the source node is the edge node that receives the service instance request.
S2. The source node parses the service instance request message to obtain its real-time requirement level.

The service instance request message contains a real-time requirement level, the levels being denoted 0, 1, and 2 from highest to lowest.
S3. Based on a preset scheduling program, the optimal node is obtained as the execution node according to the real-time requirement level, and the service instance request message is executed by the execution node.

After the service instance request message arrives at the edge node, the scheduling program is started and enters a different branch according to the real-time requirement level. In the scheduling architecture of the present invention, services with higher real-time requirements preempt services with lower real-time requirements, while services with equal real-time requirements are executed on a first-come, first-served basis. Preemption occurs only in the queuing stage, that is, a higher-priority request "jumps the queue". The execution node is the node that executes the service instance request message.
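As a non-limiting illustration, the sketch below shows one way the request message and a node's waiting queue described above could be represented; the class and field names are assumptions introduced for illustration only and are not part of the claimed method.

```python
# Illustrative sketch only (names and structure are assumptions, not the patent's implementation).
from dataclasses import dataclass
from itertools import count
import heapq

@dataclass
class ServiceRequest:
    service_type: str     # service case type G
    level: int            # real-time requirement level: 0 (highest), 1, 2 (lowest)
    data_volume: float    # D_G(g): data to be transmitted for this request
    compute_load: float   # w_G(g): instructions needed to complete this request

class PreemptiveQueue:
    """Waiting queue of a single node: requests with a higher real-time requirement
    (lower level value) jump ahead of lower-priority ones, and requests of equal level
    keep first-come, first-served order. Preemption happens only while waiting, never
    against a request that is already executing."""
    def __init__(self):
        self._heap = []
        self._arrivals = count()  # tie-breaker preserving FCFS order within a level

    def push(self, request: ServiceRequest) -> None:
        heapq.heappush(self._heap, (request.level, next(self._arrivals), request))

    def pop(self) -> ServiceRequest:
        _, _, request = heapq.heappop(self._heap)
        return request
```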
It should be noted that this embodiment provides a method for scheduling distribution network automation computing tasks based on cloud-edge collaboration. By applying a cloud-edge collaborative system, communication connections are established among the cloud node, the plurality of edge nodes, and the plurality of power terminal devices. A power terminal device sends a service instance request message to its corresponding edge node; the message is parsed to obtain its real-time requirement level, so that the differences in the real-time requirements of distribution network computing services are taken into account. Based on a preset scheduling program, the optimal node is obtained as the execution node according to the real-time requirement level, and the service instance request message is executed by the execution node. This provides a distributed structure adapted to the structure of the distribution network, meets the high real-time requirements of some distribution network automation services, avoids a high service timeout rate, and improves the resource utilization of the cloud and edge nodes.
In a specific embodiment, step S3 specifically includes:

S301. The source node identifies the real-time requirement level. If the level is 0, the source node serves as the execution node and executes the service instance request message locally; if the level is 1 or 2, step S302 is executed;

S302. The source node calculates its expected execution delay according to a preset expected-delay calculation formula, and sends the service instance request message, the real-time requirement level, and the source node's expected execution delay to the cloud node;

S303. The cloud node calculates its own expected execution delay according to the preset expected-delay calculation formula;

S304. The cloud node identifies the real-time requirement level. If the level is 1, step S305 is executed; if the level is 2, step S306 is executed;
S305. Single-cloud/single-edge collaborative scheduling is started, which specifically comprises: comparing, by means of the optimization problem below, the expected execution delays of the service instance request message at the source node and at the cloud node, taking the node with the smallest expected execution delay as the execution node, and executing the service instance request message on the execution node:

min_i E[l(g,i)]
s.t. i = FN_g or i = Cld

where i is a node, E[l(g,i)] is the expected execution delay, g is the service instance request message, FN_g is the source node, and Cld is the cloud node.

In single-cloud/single-edge collaborative scheduling, the expected delays of executing the service instance request message at the source node and at the cloud node are compared, and the smaller is selected for scheduling.
S306. Global cloud-edge collaborative scheduling is started, which specifically comprises: the cloud node sends computation requests to the other edge nodes; each of the other edge nodes calculates its expected execution delay according to the preset expected-delay calculation formula and sends it to the cloud node.

In global cloud-edge collaborative scheduling, the expected delays of executing the service instance request message at the source node, the other edge nodes, and the cloud node are compared, and the smallest is selected for scheduling.

S307. The cloud node compares, according to the optimization problem, the expected execution delays of the service instance request message at the source node, the cloud node, and the other edge nodes, takes the node with the smallest expected execution delay as the execution node, and executes the service instance request message on the execution node.
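The branching of steps S301 to S307 can be summarized by the following sketch. It assumes an expected-delay function E[l(g,i)] is available for every candidate node (see the formula discussed next); the function and parameter names are illustrative assumptions rather than the patent's implementation.

```python
# Illustrative sketch only: selecting the execution node by real-time requirement level.
def choose_execution_node(request, source_node, cloud_node, edge_nodes, expected_delay):
    """Return the node that should execute the request.

    request        -- service instance request message g (with a .level attribute)
    source_node    -- FN_g, the edge node that received the request
    cloud_node     -- Cld
    edge_nodes     -- all edge nodes of the edge layer
    expected_delay -- callable (request, node) -> E[l(g, i)], assumed to be provided
    """
    if request.level == 0:
        # Highest real-time requirement: execute locally, no scheduling round trips.
        return source_node
    # Level 1: single-cloud/single-edge scheduling over {FN_g, Cld}.
    candidates = [source_node, cloud_node]
    if request.level == 2:
        # Level 2: global cloud-edge scheduling over the source node, the cloud node,
        # and all other edge nodes.
        candidates += [node for node in edge_nodes if node is not source_node]
    # Pick the node i minimizing the expected execution delay E[l(g, i)].
    return min(candidates, key=lambda node: expected_delay(request, node))
```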
In a specific embodiment, the preset expected-delay calculation formula is:

E[l(g,i)] = l_sch(g,i) + E[l_que(g,i)] + l_tr(g,i) + l_cpt(g,i)

where l_sch(g,i) is the scheduling delay, l_tr(g,i) is the transmission delay, E[l_que(g,i)] is the expected queuing delay, and l_cpt(g,i) is the computation delay.

The scheduling delay is determined by Emer_G(g), the real-time requirement level of the service instance request message, and ε, the delay of a single communication. The scheduling process requires several communications between nodes, which introduces the scheduling delay; since little content is transmitted during the scheduling stage, the delay of a single communication is small and is denoted ε.

The transmission delay is determined by D_G(g), the data volume of the service instance request message, and the transmission speed between the source node and node i. The data volume refers to how much data must be transmitted to complete the service instance request message.

The computation delay is determined by w_G(g), the computational load of the service instance request message, and c_i, the computing capacity of node i. The computational load refers to how many instructions are needed to complete the service instance request message.

The expected queuing delay is determined by τ, the unit time, the offloading coefficient and its expectation, and E[n_i,G], the expected number of requests of service type G received within a unit time τ. The offloading coefficient refers to the proportion of service case request messages with real-time requirement level 1 at the edge node that are offloaded to the cloud node.
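As a worked illustration of this delay model, the sketch below computes E[l(g,i)] from the quantities defined above. The per-term expressions are assumptions reconstructed only from those variable descriptions (the patent's own term formulas are not reproduced here), so the exact forms, in particular of the scheduling and queuing terms, should be treated as placeholders.

```python
# Illustrative sketch only: expected execution delay E[l(g,i)] of request g on node i.
# All term forms are assumptions inferred from the variable descriptions above.
def expected_total_delay(request, node_capacity, *, epsilon, n_messages,
                         link_speed, expected_offload_coeff, expected_requests_per_tau):
    """E[l(g,i)] = l_sch + E[l_que] + l_tr + l_cpt.

    node_capacity             -- c_i, computing capacity of node i
    epsilon                   -- delay of a single scheduling communication
    n_messages                -- number of scheduling communications (grows with Emer_G(g))
    link_speed                -- transmission speed between the source node and node i
    expected_offload_coeff    -- expectation of the offloading coefficient
    expected_requests_per_tau -- E[n_i,G], expected type-G requests received per unit time τ
    """
    l_sch = n_messages * epsilon                     # scheduling delay (assumed form)
    l_tr = request.data_volume / link_speed          # transmission delay (assumed: D_G(g) / speed)
    l_cpt = request.compute_load / node_capacity     # computation delay (assumed: w_G(g) / c_i)
    # Expected queuing delay: expected number of competing offloaded requests per unit
    # time, each assumed to occupy the node for one computation slot (assumed form).
    e_l_que = expected_offload_coeff * expected_requests_per_tau * l_cpt
    return l_sch + e_l_que + l_tr + l_cpt
```

Under these assumptions, a node with a larger capacity c_i or a faster link reduces l_cpt and l_tr, which is what drives the choice of execution node in steps S305 and S307.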
The above is a detailed description of an embodiment of the method for scheduling distribution network automation computing tasks based on cloud-edge collaboration provided by the present invention. A detailed description of an embodiment of the corresponding scheduling system provided by the present invention follows.
The present invention provides a distribution network automation computing task scheduling system based on cloud-edge collaboration, applied to a cloud-edge collaborative system. As shown in FIG. 1, the cloud-edge collaborative system includes a cloud node, a core communication network, an edge layer, and a plurality of power terminal devices, wherein the edge layer includes a plurality of edge nodes in one-to-one correspondence with the plurality of power terminal devices; the edge nodes are communicatively connected with one another; the edge layer is communicatively connected to the cloud node through the core communication network; and each power terminal device is communicatively connected to its corresponding edge node.
For ease of understanding, referring to FIG. 3, the scheduling system includes:
a request sending module 100, configured to send a service instance request message from a power terminal device to its corresponding edge node, and to initialize that edge node as the source node;

a parsing module 200, configured to parse the service instance request message at the source node to obtain its real-time requirement level;

a scheduling module 300, configured to obtain, based on a preset scheduling program and according to the real-time requirement level, the optimal node as the execution node, and to execute the service instance request message by means of the execution node.
In a specific embodiment, the service instance request message contains a real-time requirement level, the levels being denoted 0, 1, and 2 from highest to lowest.
In a specific embodiment, the scheduling module specifically includes:

a first scheduling module, configured to identify the real-time requirement level at the source node; if the level is 0, the source node serves as the execution node and executes the service instance request message locally; if the level is 1 or 2, the first calculation module is executed;

the first calculation module, configured to calculate the source node's expected execution delay according to a preset expected-delay calculation formula, and to send the service instance request message, the real-time requirement level, and the source node's expected execution delay to the cloud node;

a second calculation module, configured to calculate the cloud node's expected execution delay according to the preset expected-delay calculation formula;

a scheduling judgment module, configured to identify the real-time requirement level at the cloud node; if the level is 1, the second scheduling module is executed; if the level is 2, the third scheduling module is executed;
the second scheduling module, configured to start single-cloud/single-edge collaborative scheduling, which specifically comprises: comparing, by means of the optimization problem below, the expected execution delays of the service instance request message at the source node and at the cloud node, taking the node with the smallest expected execution delay as the execution node, and executing the service instance request message on the execution node:

min_i E[l(g,i)]
s.t. i = FN_g or i = Cld

where i is a node, E[l(g,i)] is the expected execution delay, g is the service instance request message, FN_g is the source node, and Cld is the cloud node;
the third scheduling module, configured to start global cloud-edge collaborative scheduling, which specifically comprises: sending computation requests from the cloud node to the other edge nodes, having each of the other edge nodes calculate its expected execution delay according to the preset expected-delay calculation formula, and sending those expected execution delays to the cloud node;

an execution module, configured to compare, at the cloud node and according to the optimization problem, the expected execution delays of the service instance request message at the source node, the cloud node, and the other edge nodes, to take the node with the smallest expected execution delay as the execution node, and to execute the service instance request message on the execution node.
In a specific embodiment, the preset expected-delay calculation formula is:

E[l(g,i)] = l_sch(g,i) + E[l_que(g,i)] + l_tr(g,i) + l_cpt(g,i)

where l_sch(g,i) is the scheduling delay, l_tr(g,i) is the transmission delay, E[l_que(g,i)] is the expected queuing delay, and l_cpt(g,i) is the computation delay. The scheduling delay is determined by Emer_G(g), the real-time requirement level of the service instance request message, and ε, the delay of a single communication. The transmission delay is determined by D_G(g), the data volume of the service instance request message, and the transmission speed between the source node and node i. The computation delay is determined by w_G(g), the computational load of the service instance request message, and c_i, the computing capacity of node i. The expected queuing delay is determined by τ, the unit time, the offloading coefficient and its expectation, and E[n_i,G], the expected number of requests of service type G received within a unit time τ.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and is not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through certain interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211688277.9A | 2022-12-27 | 2022-12-27 | Distribution network automation computing task scheduling method and system based on cloud-edge collaboration |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116033025A | 2023-04-28 |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116360954A (en) * | 2023-05-31 | 2023-06-30 | 北京百星电子系统有限公司 | Industrial Internet of things management and control method and system based on cloud edge cooperative technology |
| CN116360954B (en) * | 2023-05-31 | 2023-12-29 | 中轻(贵州)工业互联网有限公司 | Industrial Internet of things management and control method and system based on cloud edge cooperative technology |
| CN116402318A (en) * | 2023-06-07 | 2023-07-07 | 北京智芯微电子科技有限公司 | Multi-stage computing power resource distribution method and device for power distribution network and network architecture |
| CN116402318B (en) * | 2023-06-07 | 2023-12-01 | 北京智芯微电子科技有限公司 | Multi-stage computing power resource distribution method and device for power distribution network and network architecture |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WW01 | Invention patent application withdrawn after publication | Application publication date: 2023-04-28 |