CN119544603A - A method, device, terminal equipment and computer-readable storage medium for repairing power distribution communication network faults
- Publication number: CN119544603A (application CN202411720767.1A)
- Authority
- CN
- China
- Prior art keywords
- node
- network
- flow
- model
- power distribution
- Prior art date
- Legal status
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/28—Routing or path finding of packets in data switching networks using route fault recovery
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0654—Management of faults, events, alarms or notifications using network fault recovery
- H04L41/0659—Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/70—Routing based on monitoring results
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a power distribution communication network fault repairing method, a device, a terminal device and a computer-readable storage medium. The method comprises: obtaining first historical network traffic time-series data of each node in a power distribution communication network, and predicting a first traffic prediction value for the next time step through a traffic prediction model; determining the risk level of each node through a network state sensing model; determining nodes whose risk level belongs to a preset level as fault nodes and all other nodes in the power distribution communication network as normal nodes; determining an optimal routing path through a route repair model according to the positions, connection relations and network traffic data of the normal nodes; and updating the connection relation of each node according to the optimal routing path. By implementing the invention, potential fault nodes can be discovered in advance, the normal operation of communication links is ensured, and the reliability and stability of the power distribution communication network are improved.
Description
Technical Field
The present invention relates to the field of communications technologies of smart power grids, and in particular, to a method, an apparatus, a terminal device, and a computer readable storage medium for repairing a power distribution communication network fault.
Background
With the development of new power systems and the wide access of distributed energy sources, the power distribution communication network faces a series of challenges such as scale expansion, growth in service types, and increasing complexity of the network architecture. Conventional fault determination methods rely solely on current network data, which can delay fault discovery and lengthen repair preparation time. As a result, even when the repair is complete, the power distribution communication network has often already suffered considerable losses.
Therefore, in order to address these challenges, a more intelligent power distribution communication network management system is needed to perform intelligent prediction and intelligent repair of the network so as to ensure safe and stable operation of the power distribution communication network.
Disclosure of Invention
The embodiment of the invention provides a power distribution communication network fault repairing method, a device, terminal equipment and a computer readable storage medium, which can discover potential fault nodes in advance, ensure the normal operation of a communication link and improve the reliability and stability of the power distribution communication network.
An embodiment of the present invention provides a method for repairing a fault in a power distribution communication network, including:
Acquiring first historical network flow time series data of each node in a power distribution communication network;
For each node in the power distribution communication network, inputting the first historical network flow time series data into a trained flow prediction model, so that the flow prediction model determines a first flow prediction value of the next time step according to the first historical network flow time series data;
Inputting the first flow predicted value of each node into a trained network state sensing model so that the network state sensing model determines the risk level of each node according to the first flow predicted value of each node;
Determining nodes with risk levels belonging to preset levels as fault nodes, and determining all nodes except the fault nodes in the power distribution communication network as normal nodes;
acquiring the position, connection relation and network flow data of a normal node;
Inputting the position, the connection relation and the network traffic data of the normal node into a trained route repair model, so that the route repair model determines an optimal route according to the position, the connection relation and the network traffic data of the normal node;
and updating the connection relation of each node by the power distribution communication network according to the optimal routing path.
Further, the power distribution communication network fault repairing method further comprises the following steps:
And inputting the first flow predicted value of the fault node into the trained fault diagnosis model so that the fault diagnosis model determines a fault label of the fault node according to the first flow predicted value of the fault node.
Further, the flow prediction model is determined by:
The method comprises the steps of obtaining a plurality of first training samples, wherein the first training samples comprise second historical network flow time series data of each node in a power distribution communication network and actual flow values of the next time step;
And respectively inputting each first training sample into an initial long-short-term memory neural network model for iterative training until the first loss function converges to obtain a trained flow prediction model, wherein during each training, the second historical network flow time series data of each node in the current first training sample is transmitted forward for a plurality of times to obtain a second flow prediction value, and the first loss function is calculated according to the second flow prediction value and the actual flow value corresponding to the next time step in the first training sample.
Further, the network state awareness model is determined by:
The method comprises the steps of obtaining a plurality of second training samples, wherein the second training samples comprise flow values of each node in a power distribution communication network and risk grades of the corresponding nodes;
And respectively inputting each second training sample into an initial automatic encoder model for iterative training until a second loss function converges to obtain a trained network state perception model, wherein during each training, the flow value of each node in the current second training sample is compressed and reduced in dimension through an encoder arranged in the automatic encoder model to obtain a first low-dimension feature in a potential space, the first low-dimension feature is clustered through a graph automatic encoder arranged in the automatic encoder model to obtain a predicted clustering result, and the second loss function is calculated according to the predicted clustering result and the risk level of the corresponding node in the corresponding second training sample.
Further, the encoder built in the automatic encoder model is determined by:
And respectively inputting each second training sample into an encoder to be trained and a decoder to be trained in the automatic encoder model for iterative training until the reconstruction loss converges to obtain an encoder arranged in the automatic encoder model, wherein during each training, the flow value of each node in the current second training sample is compressed and reduced in dimension through the encoder to be trained in the automatic encoder model to obtain a second low-dimension feature in a potential space, reconstructing the second low-dimension feature through the decoder to be trained in the automatic encoder model to obtain reconstruction data, and calculating the reconstruction loss according to the reconstruction data and the flow value of each node in the corresponding second training sample.
Further, the fault diagnosis model is determined by:
The method comprises the steps of obtaining a plurality of third training samples, wherein the third training samples comprise second historical network flow time series data of each node in a power distribution communication network and fault labels of the corresponding nodes;
And respectively inputting each third training sample into a fault diagnosis model to be trained for iterative training until a third loss function converges to obtain a trained fault diagnosis model, wherein during each training, feature mapping and label classification are carried out on second historical network flow time series data of each node in the current third training sample to obtain a prediction label, and the third loss function is calculated according to the prediction label and the corresponding fault label.
Further, the route repair model is determined by:
The method comprises the steps of obtaining a plurality of fourth training samples, wherein the fourth training samples comprise the position, the connection relation and the network flow data of each node in a power distribution communication network;
And respectively inputting each fourth training sample into a route repair model to be trained for iterative training until a fourth loss function converges to obtain a trained route repair model, wherein a topology matrix is generated according to the position, the connection relation and the network flow data of each node in the current fourth training sample during each training, a route strategy is generated by the topology matrix through reinforcement learning and a sequencing mechanism in a single-agent algorithm, and the fourth loss function is calculated according to the route strategy and a preset reward function.
Based on the method embodiment, the invention correspondingly provides a device embodiment, which comprises a flow data acquisition module, a flow prediction module, a node risk determination module, a node definition module, a route data acquisition module, an optimal route path determination module and a path update module;
the flow data acquisition module is used for acquiring first historical network flow time series data of each node in the power distribution communication network;
The flow prediction module is used for inputting the first historical network flow time series data into the trained flow prediction model for each node in the power distribution communication network, so that the flow prediction model determines a first flow prediction value of the next time step according to the first historical network flow time series data;
the node risk determining module is used for inputting the first flow predicted value of each node into the trained network state sensing model so that the network state sensing model determines the risk level of each node according to the first flow predicted value of each node;
The node definition module is used for determining nodes with risk levels belonging to preset levels as fault nodes and determining all nodes except the fault nodes in the power distribution communication network as normal nodes;
the route data acquisition module is used for acquiring the position, the connection relation and the network flow data of the normal node;
The optimal route path determining module is used for inputting the position, the connection relation and the network traffic data of the normal node into the trained route repair model so that the route repair model determines an optimal route path according to the position, the connection relation and the network traffic data of the normal node;
And the path updating module is used for updating the connection relation of each node according to the optimal routing path by the power distribution communication network.
On the basis of the method embodiment, the invention correspondingly provides a terminal device embodiment, which comprises a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the steps of the power distribution communication network fault repairing method are implemented when the processor executes the computer program.
On the basis of the method embodiment, the invention correspondingly provides a computer-readable storage medium embodiment, which comprises a stored computer program, wherein, when the computer program runs, the device in which the computer-readable storage medium is located is controlled to execute the steps of the power distribution communication network fault repairing method.
Compared with the prior art, the beneficial effects of the embodiments of the present solution are as follows:
According to the invention, historical network traffic time-series data of each node in the power distribution communication network are obtained, and the traffic prediction value for the next time step is predicted using the regularities and trends that the traffic prediction model has learned from the historical data. The risk level of each node is then determined from its traffic prediction value, nodes whose risk level belongs to a preset level are determined as fault nodes, and all other nodes in the power distribution communication network are determined as normal nodes, so that the network state and fault likelihood over a future period are predicted and potential fault nodes are discovered in advance. Based on these prediction results, corresponding preventive measures and repair plans are formulated: the positions, connection relations and network traffic data of the normal nodes are input into a trained route repair model, which determines an optimal routing path from them, thereby ensuring the connectivity of the communication links among the normally operating nodes.
In summary, the method and the device discover potential fault nodes in advance by predicting future network traffic, determine the optimal routing path through the routing repair model, ensure the normal operation of the communication link and improve the reliability and stability of the power distribution communication network.
Drawings
FIG. 1 is a flow chart of a method for repairing a fault in a power distribution communication network according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a flow prediction model training process according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating a training process of a network state aware model according to an embodiment of the present invention;
FIG. 4 is a flow chart of a route repair model training process according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a fault repairing device for a power distribution communication network according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated.
As shown in fig. 1, an embodiment of the present invention provides a method for repairing a fault in a power distribution communication network, which at least includes the following steps:
step S1, acquiring first historical network flow time series data of each node in a power distribution communication network;
For step S1, first historical network traffic time-series data Xi^t are obtained for each node i in the power distribution communication network. Assuming there are N nodes, the overall input is X^t={X1^t, X2^t, ..., XN^t}, where each matrix Xi^t ∈ R^(S×D) represents the sequence input features, D represents the feature dimension of each time step, and S represents the length of the sequence.
Note that a node in the power distribution communication network is actually a port in the power distribution communication network. The first historical network traffic time series data may include previous traffic observations, such as traffic data of the last several hours or days.
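For illustration only (not part of the original disclosure), the following minimal NumPy sketch shows one way such per-node sequences of shape (S, D) and their next-step targets could be assembled; the helper name build_history_windows and all dimensions are assumptions:

```python
import numpy as np

def build_history_windows(traffic, S):
    """Slice a (T, D) per-node traffic record into sliding windows of length S.

    traffic : np.ndarray of shape (T, D), D features per time step
    Returns X of shape (T - S, S, D) and the next-step targets y of shape (T - S, D).
    """
    T = traffic.shape[0]
    X = np.stack([traffic[t:t + S] for t in range(T - S)])      # (T-S, S, D)
    y = np.stack([traffic[t + S] for t in range(T - S)])        # (T-S, D)
    return X, y

# Example: N nodes, hourly observations over the last 7 days, D = 1 feature (traffic volume)
N, T, D, S = 8, 24 * 7, 1, 24
raw = np.random.rand(N, T, D)                                    # placeholder observations
windows = [build_history_windows(raw[i], S) for i in range(N)]   # one (X, y) pair per node
```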
S2, for each node in the power distribution communication network, inputting the first historical network flow time series data into a trained flow prediction model so that the flow prediction model determines a first flow prediction value of the next time step according to the first historical network flow time series data;
For step S2, the first historical network traffic time series data of each node obtained in step S1 are input into the trained traffic prediction model. By analyzing the characteristics and patterns in the historical data, the model can identify the traffic variation law of each node and, based on the learned patterns and trends, give a first traffic prediction value for the next time step.
The following is a detailed description of the training process for the flow prediction model:
As shown in fig. 2, the training process of the flow prediction model includes the following steps:
step S201, a plurality of first training samples are obtained, wherein the first training samples comprise second historical network flow time series data of each node in the power distribution communication network and actual flow values of the next time step;
For step S201, to train the traffic prediction model, first, some first training samples are acquired, where the samples include second historical network traffic time series data of each node in the power distribution communication network, and corresponding actual traffic values of the next time step.
Step S202, each first training sample is respectively input into an initial long-short-period memory neural network model for iterative training until a first loss function converges to obtain a trained flow prediction model, wherein during each training, second historical network flow time series data of each node in the current first training sample are subjected to forward propagation for a plurality of times to obtain a second flow prediction value, and the first loss function is calculated according to the second flow prediction value and an actual flow value corresponding to the next time step in the first training sample.
For step S202, to train an accurate flow prediction model, an initial Long Short-term Memory (LSTM) neural network model is first constructed, and LSTM is a recurrent neural network that is specially used for processing sequence data and has the ability to memorize and learn Long-term dependencies. After constructing the LSTM model, training the long-term memory neural network model by using the first training sample obtained in the step S201, thereby obtaining the flow prediction model of the invention.
To construct the initial long short-term memory neural network model, specifically, a time-series embedding layer is first built to obtain the time-series embedding of the traffic data: the historical traffic time-series data Xi^t of each port i at time step t are input into the LSTM network, and the last hidden state is taken as the port traffic time-series embedding ei^t (note that ei^t ∈ R^U). This is specifically expressed as follows:
Et=LSTM(Xt)
Wherein, Et ∈ R^(N×U) represents the traffic time-series embedding of all ports, and U represents the embedding size (i.e., the number of hidden units of the LSTM). The embedding is then passed to the LSTM layer, which consists of a plurality of LSTM units, including forget gates, input gates, and output gates.
The forget gate is calculated by:
ft=σ(Wf·[ht-1,xt]+bf)
Wherein Wf represents the weight matrix of the forget gate, bf represents the bias of the forget gate, and σ represents the sigmoid activation function of the forget gate. The forget gate integrates the input xt and the hidden-layer output ht-1 of the previous moment into one vector, processes the vector through a sigmoid neural layer, and finally multiplies the result element-wise by the cell state ct-1 of the previous moment.
The input gate is calculated by:
Ct′=tanh(Wc·[ht-1,xt]+bc)
it=σ(Wi·[ht-1,xt]+bi)
Wherein Wc represents the weight matrix of the candidate memory cell, bc represents the bias of the candidate memory cell, tanh represents the activation function of the candidate memory cell, Wi represents the weight matrix of the input gate, bi represents the bias of the input gate, and σ represents the activation function of the input gate.
The output gate is calculated by:
ot=σ(Wo·[ht-1,xt]+bo)
ht=ot⊙tanh(ct)
Where Wo represents the weight matrix of the output gate, bo represents the bias of the output gate, and σ represents the sigmoid activation function. The current input value xt and the output value ht-1 of the previous moment are integrated into a vector, and the vector is passed through a sigmoid function. The current cell state ct is then mapped into the interval (-1, 1) by the tanh function.
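For illustration only (not part of the original disclosure), the gate equations above can be traced in the following minimal PyTorch sketch of a single LSTM time step; the dictionary-based parameters and the standard cell-state update ct = ft⊙ct-1 + it⊙C't (not written out in the text above) are assumptions:

```python
import torch

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following the gate equations above (minimal sketch).

    W / b hold the forget (f), input (i), candidate (c) and output (o) parameters;
    every weight acts on the concatenation [h_{t-1}, x_t].
    """
    z = torch.cat([h_prev, x_t], dim=-1)
    f_t = torch.sigmoid(z @ W["f"].T + b["f"])       # forget gate f_t
    i_t = torch.sigmoid(z @ W["i"].T + b["i"])       # input gate i_t
    c_cand = torch.tanh(z @ W["c"].T + b["c"])       # candidate memory C'_t
    o_t = torch.sigmoid(z @ W["o"].T + b["o"])       # output gate o_t
    c_t = f_t * c_prev + i_t * c_cand                # standard cell-state update (assumed)
    h_t = o_t * torch.tanh(c_t)                      # hidden state h_t
    return h_t, c_t

# Toy usage: hidden size 4, input size 3
H, I = 4, 3
W = {k: torch.randn(H, H + I) for k in ("f", "i", "c", "o")}
b = {k: torch.zeros(H) for k in ("f", "i", "c", "o")}
h_t, c_t = lstm_cell_step(torch.randn(I), torch.zeros(H), torch.zeros(H), W, b)
```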
Secondly, a relation embedding layer is built: the topological relation of the power distribution communication network equipment and the time-series embedding are input, and a graph convolution layer is added through a graph convolutional network (Graph Convolutional Networks, GCN) to obtain the corrected relation embedding of the traffic data. Specifically, the relation between two ports in the network is encoded as a binary vector aij ∈ {0,1}^K, and the relations of all ports are represented as a matrix A ∈ R^(N×N×K), where K is the number of port relation categories; since only the port topology connection relation is considered, K=1, and the element in the i-th row and j-th column is aij.
Assume that each node i in the topology has a feature vector xi ∈ R^d representing its characteristics, where d is the feature dimension, and that there is an adjacency matrix A ∈ R^(n×n) representing the connection relation of the graph, where n is the number of nodes. The element aij of the adjacency matrix indicates whether there is an edge between node i and node j. Given a node i whose neighbor node set is Ni, the update rule for node i can be expressed as:
hi(l+1)=σ(Σj∈Ni (1/cij)·W(l)·hj(l))
Wherein, hi(l+1) represents the feature representation of node i at layer l+1, σ represents a nonlinear activation function, W(l) represents the weight matrix of layer l, and cij represents a normalization coefficient. A graph convolution layer can be constructed by applying this update rule to all nodes. Assuming K convolution kernels, the node features of layer l+1 can be expressed in matrix form as:
H(l+1)=σ(D̃^(-1/2)·Ã·D̃^(-1/2)·H(l)·W(l))
wherein H(l) represents the node feature matrix of the l-th layer, Ã=A+I represents the adjacency matrix with self-connections added, D̃ represents the diagonal degree matrix of Ã, and W(l) represents the weight matrix of the l-th layer.
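For illustration only (not part of the original disclosure), a minimal PyTorch sketch of the matrix-form graph convolution update is given below; the ReLU nonlinearity and the toy dimensions are assumptions:

```python
import torch

def gcn_layer(H, A, W):
    """One graph convolution layer: H(l+1) = ReLU(D̃^(-1/2)·Ã·D̃^(-1/2)·H(l)·W(l))."""
    A_hat = A + torch.eye(A.shape[0])          # Ã: adjacency matrix with self-connections
    deg = A_hat.sum(dim=1)                     # node degrees of Ã
    D_inv_sqrt = torch.diag(deg.pow(-0.5))     # D̃^(-1/2)
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)   # ReLU as the assumed σ

# Toy usage: 3 nodes, 5 input features, 8 output features
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H1 = gcn_layer(torch.rand(3, 5), A, torch.rand(5, 8))
```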
Next, a prediction layer is built: the time-series embedding and the corrected relation embedding are input into a fully connected layer, and the mean square error (Mean Square Error, MSE) is used as the loss function:
MSE=(1/n)·Σi=1..n (yi−ŷi)²
Where n represents the number of samples, yi represents the true traffic value, and ŷi represents the corresponding predicted traffic value.
And finally, respectively inputting each first training sample into the initial model, and performing iterative training until the loss mean square error gradually decreases and tends to be stable, so as to obtain a trained flow prediction model. In the training process, an Adam optimizer is used for calculating a first moment estimation and a second moment estimation of the gradient, the optimizer performs parameter updating according to the learning rate, and errors are transmitted back to parameters of the model from an output layer through a back propagation algorithm, so that a loss function is gradually reduced, and the model can learn the mode and trend of flow data. The training process is repeated until the loss mean square error gradually decreases and tends to stabilize, and when the loss value drops to a certain extent and no longer changes significantly, the model can be considered to be trained well enough, training can be stopped, and further training can lead to overfitting or performance degradation.
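For illustration only (not part of the original disclosure), the training procedure described above can be sketched as follows, assuming PyTorch; the class name TrafficPredictor, the hidden size, the learning rate and the placeholder training tensors are assumptions rather than the disclosed configuration:

```python
import torch
from torch import nn

class TrafficPredictor(nn.Module):
    """LSTM-based flow prediction sketch: sequence in, next-step traffic value out."""
    def __init__(self, feat_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, feat_dim)

    def forward(self, x):                     # x: (batch, S, D)
        _, (h_n, _) = self.lstm(x)            # last hidden state as the sequence embedding
        return self.head(h_n[-1])             # (batch, D) next-step prediction

model = TrafficPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.rand(256, 24, 1)                    # placeholder historical traffic windows
y = torch.rand(256, 1)                        # actual traffic values of the next time step
for epoch in range(50):                       # iterate until the MSE loss stabilises
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                           # back-propagate the error to all parameters
    optimizer.step()                          # Adam update using moment estimates of the gradient
```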
S3, inputting the first flow predicted value of each node into a trained network state sensing model so that the network state sensing model determines the risk level of each node according to the first flow predicted value of each node;
for step S3, the first traffic prediction value of each node obtained in step S2 is input into a trained network state sensing model, which is a model for determining risk level of each node. The current state of the node can be judged according to the first flow predicted value by learning the relation between the node state and the flow predicted value in the historical data.
The risk level of a node may be used to evaluate the state of the node. In this embodiment, the risk levels may be classified into a first risk level, a second risk level, a third risk level, and a zero risk level. The first risk level indicates that the node is in a high risk state and serious network problems or abnormal behaviors can exist, the second risk level indicates that the node has a certain risk but does not reach a serious degree, the third risk level indicates that the risk of the node is relatively low and still needs to be monitored and managed, and the zero risk level indicates that the node is in a normal state and no obvious risk or abnormality exists.
The following is a detailed description of the training process of the network state awareness model:
as shown in fig. 3, the training process of the network state awareness model includes the following steps:
Step S301, acquiring a plurality of second training samples, wherein the second training samples comprise the flow value of each node in the power distribution communication network and the risk level of the corresponding node;
For step S301, to perform training of the network state awareness model, first, some second training samples are required to be obtained, where the samples include a flow value of each node in the power distribution communication network, and a risk level of the corresponding node.
Step S302, each second training sample is respectively input into an initial automatic encoder model for iterative training until a second loss function converges to obtain a trained network state perception model, wherein during each training, the flow value of each node in the current second training sample is compressed and reduced in dimension through an encoder arranged in the automatic encoder model to obtain a first low-dimension feature in a potential space, the first low-dimension feature is clustered through a graph automatic encoder arranged in the automatic encoder model to obtain a predicted clustering result, and the second loss function is calculated according to the predicted clustering result and the risk level of a corresponding node in the corresponding second training sample.
In a preferred embodiment, the encoder built into the automatic encoder model is determined by:
And respectively inputting each second training sample into an encoder to be trained and a decoder to be trained in the automatic encoder model for iterative training until the reconstruction loss converges to obtain an encoder arranged in the automatic encoder model, wherein during each training, the flow value of each node in the current second training sample is compressed and reduced in dimension through the encoder to be trained in the automatic encoder model to obtain a second low-dimension feature in a potential space, reconstructing the second low-dimension feature through the decoder to be trained in the automatic encoder model to obtain reconstruction data, and calculating the reconstruction loss according to the reconstruction data and the flow value of each node in the corresponding second training sample.
For step S302, in the present embodiment, an initial automatic encoder model is used for training of network state awareness. An automatic encoder is an unsupervised learning model that can reconstruct the original data by learning a hidden representation of the input data. And respectively inputting each second training sample into the automatic encoder model, and performing iterative training until the second loss function converges, so as to obtain a trained network state perception model.
First, an initial auto-encoder model is constructed, which consists of auto-encoders and fully-connected layers, for learning hierarchical attribute information from communication data, and embedding the learned content into a compact low-dimensional feature representation. While an automatic encoder consists of two important parts, an encoder function and a decoder function. The encoder projects the raw attribute data matrix into the potential space:
X′=fe(We,X)
The conversion result X′ is called the attribute feature representation, and its dimension satisfies d′ < d; fe denotes the encoder function, and the weight matrix We performs a linear transformation on the original attribute data. For simplicity, all weight matrices {W} in this embodiment carry corresponding biases {b}. The decoder is designed after the encoder, and the original data are reconstructed from X′ as
X̂=fd(Wd,X′)
Where fd denotes the decoder function, and Wd denotes the weight matrix of the decoder. To obtain hierarchical attribute information of the data, the encoder and the decoder are each constructed with L fully connected layers, where a layer at a specific depth adaptively processes the corresponding hierarchical information hidden in the data. In the encoder, the hierarchical attribute information representation learned by the l-th neural layer (0 ≤ l ≤ L) can be expressed as
X(l)=σ(We(l)·X(l-1))
Wherein, We(l) represents the weight matrix of layer l, σ represents a nonlinear activation function, such as ReLU or Tanh, and X(l-1) represents the attribute information representation of the previous layer. In particular, X(0) represents the original attribute data X, and X(L) represents the encoder output X′. The decoder function fd after the encoder similarly contains L fully connected layers, and the hierarchical attribute information representation of its l-th layer is denoted as
Y(l)=σ(Wd(l)·Y(l-1))
Wherein, Wd(l) represents the weight matrix of the l-th decoder layer. Likewise, Y(0) represents the feature representation X′ generated by the encoder, and Y(L) represents the decoder output, i.e. the reconstructed data X̂.
The representation of each neural layer reflects the hierarchical nature of the features, so the attribute information extracted by the auto-encoder layers at different network depths carries hierarchical meaning. In this way, X(l) provides rich semantic information for distinguishing fault nodes from other nodes in the downstream clustering task. To ensure the validity of the data representation in the potential space, the present embodiment defines the reconstruction loss of the basic AE by means of the mean square error (MSE):
Lres=(1/N)·Σi=1..N ||xi−x̂i||²
where N represents the number of device nodes in the power distribution communication network. The reconstruction loss is used as an objective function to minimize the difference between the reconstructed data and the original input data.
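For illustration only (not part of the original disclosure), a minimal PyTorch sketch of the fully connected encoder–decoder and its MSE reconstruction loss is given below; the layer sizes and the class name AttributeAE are assumptions:

```python
import torch
from torch import nn

class AttributeAE(nn.Module):
    """Fully connected auto-encoder sketch: X -> X' (latent) -> reconstructed X̂."""
    def __init__(self, in_dim, hidden_dims=(128, 64), latent_dim=16):
        super().__init__()
        enc, last = [], in_dim
        for h in hidden_dims:                       # L fully connected encoder layers
            enc += [nn.Linear(last, h), nn.ReLU()]
            last = h
        self.encoder = nn.Sequential(*enc, nn.Linear(last, latent_dim))

        dec, last = [], latent_dim
        for h in reversed(hidden_dims):             # mirrored decoder layers
            dec += [nn.Linear(last, h), nn.ReLU()]
            last = h
        self.decoder = nn.Sequential(*dec, nn.Linear(last, in_dim))

    def forward(self, x):
        z = self.encoder(x)                         # attribute feature representation X'
        return self.decoder(z), z                   # reconstruction X̂ and latent features

ae = AttributeAE(in_dim=32)
x = torch.rand(16, 32)                              # per-node traffic feature vectors (placeholder)
x_hat, z = ae(x)
recon_loss = ((x - x_hat) ** 2).mean()              # MSE reconstruction loss L_res
```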
Next, a graph attention network (Graph Attention Network, GAT) layer is built as the graph auto-encoder. In this process, the attribute information representation X(l) from the basic auto-encoder AE is combined with the node relation representation Z(l) from the graph auto-encoder GAT to enhance the overall feature representation capability. The specific process of this combination is as follows:
In order to fuse the information of AE and GAT more effectively, the weights of the two modules are balanced with a coefficient λ, which actually reflects what proportion of the attribute features in each neural layer of the AE is transferred into the corresponding layer of the GAT:
Z̃(l)=λ·X(l)+(1−λ)·Z(l)
By such a design, the hierarchical attribute information can be integrated into the graph encoder layer by layer and efficiently. The fused representation Z̃(l) is then used as the input to the l-th layer of the GAT to produce the representation of the next layer:
Z(l+1)=GAT(Z̃(l),A)
In this way, the AE not only passes the attribute information it extracts at the different layers on to the graph model, but also helps the graph model learn more complete and effective feature representations, which can then be used for the clustering task, further enabling accurate classification of the risk areas of faulty devices.
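For illustration only (not part of the original disclosure), the λ-weighted fusion of the AE and GAT representations can be sketched as follows, assuming PyTorch and a simplified single-head attention layer (SimpleGATLayer) rather than the full GAT used in the disclosure:

```python
import torch
from torch import nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Minimal single-head graph attention layer (simplified sketch, not the full GAT)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, H, A):                            # H: (n, in_dim), A: (n, n)
        Wh = self.W(H)                                   # projected node features
        n = Wh.shape[0]
        A_hat = A + torch.eye(n)                         # keep self-attention on every node
        pairs = torch.cat([Wh.unsqueeze(1).expand(n, n, -1),
                           Wh.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # attention logits e_ij
        e = e.masked_fill(A_hat == 0, float("-inf"))     # attend only along existing edges
        alpha = torch.softmax(e, dim=-1)                 # attention coefficients α_ij
        return F.elu(alpha @ Wh)

def fuse_and_propagate(gat_layer, Z_gat, X_ae, A, lam=0.5):
    """λ-weighted fusion of the AE layer output with the GAT layer output (assumed form),
    followed by propagation through the next GAT layer."""
    Z_tilde = lam * X_ae + (1.0 - lam) * Z_gat
    return gat_layer(Z_tilde, A)

# Toy usage: 3 nodes, 16-dimensional layer-l representations from the AE and the GAT
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
Z_next = fuse_and_propagate(SimpleGATLayer(16, 16), torch.rand(3, 16), torch.rand(3, 16), A)
```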
After the initial automatic encoder model is built, each second training sample is respectively input into the initial automatic encoder model for iterative training, the model is optimized through repeated training, so that potential characteristics in data can be more accurately captured and represented, parameters of the model are continuously adjusted in the iterative training process to reduce the value of a loss function, the process continues until the loss function converges to a relatively stable value, which means that the model has learned enough information, and further training is unlikely to bring about significant performance improvement. When the loss function converges, a trained network state awareness model is obtained. This model can accurately capture and represent the potential features of the data, providing powerful support for the fault level classification task of step S3.
S4, determining nodes with risk levels belonging to preset levels as fault nodes, and determining all nodes except the fault nodes in the power distribution communication network as normal nodes;
For step S4, the risk level of each node obtained in step S3 is matched against the preset levels, and nodes whose risk level belongs to a preset level are determined as fault nodes. In this embodiment, nodes whose risk level is the first risk level or the second risk level are determined as fault nodes, and at the same time all nodes in the power distribution communication network other than the fault nodes are regarded as normal nodes. These normal nodes are considered to have higher stability and reliability, with a lower probability of failure under the current risk assessment.
In a preferred embodiment, the power distribution communication network fault repair method further comprises:
And inputting the first flow predicted value of the fault node into the trained fault diagnosis model so that the fault diagnosis model determines a fault label of the fault node according to the first flow predicted value of the fault node.
In a preferred embodiment, the fault diagnosis model is determined by:
The method comprises the steps of obtaining a plurality of third training samples, wherein the third training samples comprise second historical network flow time series data of each node in a power distribution communication network and fault labels of the corresponding nodes;
And respectively inputting each third training sample into a fault diagnosis model to be trained for iterative training until a third loss function converges to obtain a trained fault diagnosis model, wherein during each training, feature mapping and label classification are carried out on second historical network flow time series data of each node in the current third training sample to obtain a prediction label, and the third loss function is calculated according to the prediction label and the corresponding fault label.
In an embodiment of the present invention, after determining the fault node, the fault type of the fault node may be further determined, specifically, the first traffic prediction value of the fault node is input into a trained fault diagnosis model, and the fault diagnosis model may infer the fault type of the fault node according to the input data and in combination with the learned mapping relationship between the fault feature and the traffic feature, so as to obtain the fault label of the fault node. The failure tag is a specific identifier that indicates the failure type of the failed node (e.g., hardware failure, software failure, network congestion, etc.).
The following is a detailed description of the training process of the fault diagnosis model:
First, data preparation is performed using the CWRU bearing dataset, which contains bearing vibration signals acquired under different conditions. These signals, recorded by sensors while the shaft is running, are mainly used to study three fault types: bearing inner-race faults, bearing rolling-element faults and bearing outer-race faults. Distribution network traffic data are also collected, recording the traffic volume gathered at each switch port at a daily sampling frequency over the period from June 2023 to November 2023.
These data are used for model transfer learning and fault prediction. Specifically, a feature extractor is first constructed to map the distribution network traffic data to a D-dimensional feature vector f ∈ R^D, where Wf and bf are the weight matrix and bias of the feature extractor. The designed feature extractor comprises three modules, each containing a one-dimensional convolution layer, a batch normalization layer and a max-pooling layer, with ReLU as the activation function; the feature vector is finally output through an adaptive average pooling layer. The input data undergo successive convolution operations with a one-dimensional convolution window of size 3. The convolved data are then batch-normalized, adjusting their statistics toward a normal distribution, which reduces internal covariate shift and accelerates the convergence of the deep network; the ReLU(·) activation function introduces nonlinearity and increases the expressive capacity of the network so that it can learn and represent more complex functional relationships while avoiding the vanishing-gradient problem; finally, the max-pooling layer retains the strongest responses of the input features and highlights the salient features.
Secondly, a domain classifier is constructed. It learns the feature representations of the source-domain and target-domain data from the feature vector f output by the feature extractor, performs binary classification of the source-domain and target-domain data, and outputs a domain classification label d: data with domain label 0 are considered to come from the source domain and data with domain label 1 from the target domain, so that the origin of the data can be predicted. The domain classifier consists of several fully connected layers: f first passes through a fully connected layer that learns all parameters of the feature vector, then a batch normalization layer is applied and a ReLU activation function introduces a nonlinear transformation of the data, enriching its representational dimensions; the features are finally converted into class probabilities through a fully connected layer, and the probabilities are normalized with the sigmoid(·) function.
Then, a label classifier is built to predict the corresponding class label from the features of the input data. To reduce overfitting, several Dropout layers are introduced during the training of the label classifier: neurons in the network are randomly dropped during training so that they take no part in forward or backward propagation, which simulates a smaller network, lets each training pass learn different sub-network characteristics, reduces the complex co-adaptation between neurons and makes the network learn more generalized features. All neurons are activated at test time, which improves the expressive power of the model. The last layer of the label classifier uses the LogSigmoid function as its activation function, defined as follows:
LogSigmoid(x)=log(1/(1+e^(−x)))
Unlike the ordinary Sigmoid function, it maps the input value into the interval (−∞, 0), which gives the gradient a wider dynamic range during back propagation, alleviates the vanishing-gradient problem and improves numerical stability.
Then, a gradient reversal layer is constructed. Its core function is to keep the input unchanged during the forward propagation of the model and to multiply the gradient by a negative number during backward propagation, thereby changing the direction of parameter updating: the loss of the label classifier is reduced during backward propagation to improve model performance, while the loss of the domain classifier is maximized so as to achieve domain transfer.
Inputting the data to a feature extractor, mapping the data to a high-dimensional feature space, and outputting the data as:
f=Gf(x,Wf,bf)
The feature vectors output by the feature extractor are respectively input into a domain classifier and a label classifier to obtain corresponding domain classification labels and fault classification labels, and the output is as follows:
d=Gd(Gf(x);Wd,bd)=Gd(f;Wd,bd)
y=Gy(Gf(x);Wy,by)=Gy(f;Wy,by)
A gradient reversal layer (GRL) is introduced between the feature extractor and the domain classifier to reverse the gradient of the domain classifier, so that two aims are achieved during the training of the feature extractor: the performance of the label classifier is optimized, and the cross-domain generalization capability of the features is enhanced by blurring the domain boundary. When updating its parameters, the feature extractor therefore has to consider not only the loss of the label classifier but also how to 'deceive' the domain classifier, so that the classification accuracy between the source domain and the target domain approaches the level of random guessing, making it difficult to distinguish the origin of the features. The domain classifier loss is as follows:
Ld=−(1/n)·Σi=1..n [di·log d̂i+(1−di)·log(1−d̂i)]
where di is the true domain label of sample i and d̂i is the predicted domain probability.
The GRL adopts a distinctive strategy: it intercepts the gradient coming from the subsequent layer and reverses its direction by multiplying it by a specific coefficient −λ, where the parameter λ is defined by the following formula:
λ=min+(max−min)·(2/(1+e^(−α·i/n))−1)
Where max and min determine the variation interval of the parameter λ, i.e. its maximum and minimum values in training, α determines the rate of change of the parameter λ, and i and n represent the current sample index and the total number of samples in training, respectively. In the early stage of training, since i is relatively small, the value of λ is relatively close to the set minimum value min, and as training progresses, λ gradually increases toward the maximum value max. This design reduces the influence of noise in the early training stage, and in the later stage accelerates network convergence and learns domain-invariant features.
By minimizing the loss function of the label classifier (such as a cross-entropy loss function), the model can make full use of the source-domain data, improve the classification performance, update the parameters of the feature extractor and the label classifier, and gradually converge to an optimal state. The label classification loss is as follows:
Ly=−(1/n)·Σi Σc yi,c·log ŷi,c
where yi,c is the true label indicator of sample i for class c and ŷi,c is the corresponding predicted probability.
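For illustration only (not part of the original disclosure), a gradient reversal layer and a λ schedule of the kind described above can be sketched in PyTorch as follows; the schedule bounds, α and the helper names are assumptions:

```python
import math
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, gradient × (−λ) in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # no gradient w.r.t. λ itself

def grl_lambda(i, n, lo=0.0, hi=1.0, alpha=10.0):
    """Assumed schedule: λ grows from `lo` toward `hi` as the sample index i approaches n."""
    p = i / max(n, 1)
    return lo + (hi - lo) * (2.0 / (1.0 + math.exp(-alpha * p)) - 1.0)

# Usage sketch: shared features feed the label classifier directly and the domain
# classifier through the gradient reversal layer.
features = torch.rand(8, 64, requires_grad=True)
domain_head = nn.Linear(64, 1)
domain_logits = domain_head(GradReverse.apply(features, grl_lambda(i=100, n=1000)))
domain_logits.sum().backward()                # gradients reaching `features` are reversed
```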
S5, acquiring the position, connection relation and network flow data of a normal node;
for step S5, after the fault node and the normal node are distinguished in step S4, the position, the connection relationship and the network traffic data of the normal node are obtained, so as to facilitate the subsequent establishment of a recovery scheme of the route.
S6, inputting the position, the connection relation and the network traffic data of the normal node into a trained route repair model so that the route repair model determines an optimal route according to the position, the connection relation and the network traffic data of the normal node;
For step S6, after the normal node and the failure node are distinguished, the position, connection relation and network traffic data of the normal node are input into a trained route repair model, and the route repair model is based on a deep learning algorithm, and can predict and optimize a route path in the network by analyzing and learning a normal behavior pattern in the network. When the network fails, the route repair model can automatically remove the failed node according to the data of the normal node, and calculate the optimal route path, thereby recovering the connectivity and performance of the network.
Specifically, the route repair model first generates a complete topology matrix embedded with node traffic data from the input positions, connection relations and network traffic data of the normal nodes. Assuming there are N nodes in the network, each node is assigned a tuple containing its bandwidth utilization, traffic, abscissa and ordinate, and these tuples are concatenated into a high-dimensional information matrix. This information matrix is then used as the input data of the neural network.
After receiving the information, the neural network performs a series of complex calculations and processes, and finally outputs a vector with dimension N. Each element in this vector corresponds to a node in the network, which represents the embedded value of the node. The embedded values are obtained by a dimension reduction coding technology based on the topological structure of the full graph and the performance index of the nodes of the neural network.
Next, we use a single agent network to convert the original node topology into a weighted topology. In the weighted graph, the weight value of each edge is calculated by the neural network according to the topological structure of the whole graph and the performance index of the node. By the design, the model can more accurately capture the connection strength between the nodes and the overall performance of the network.
After the calculation of the node embedding values and the construction of the weighted topology graph are completed, the route repair model enters the final stage of strategy generation. At this point, the trained single-agent algorithm within the model begins to work; it fuses reinforcement learning and an ordering mechanism to generate the optimal routing strategy. The reinforcement learning part lets the single agent perform trial-and-error learning in a simulated network environment: it selects a routing path according to the current network state (i.e. the weighted topology and the node embedding values) and receives feedback from the environment (e.g. improvement or degradation of network performance). Through continuous learning and adjustment, the single agent gradually learns how to select the optimal routing path under different network conditions. The ordering mechanism is then used to select the optimal path among multiple possible routing paths: it sorts the candidate paths based on the neural network's performance evaluation of the nodes and connections, so as to select the routing path with the best performance, the strongest connectivity and the best fit to the current network requirements.
Finally, the route repair model generates an optimal route strategy which considers both network performance and connectivity by combining reinforcement learning and ordering mechanisms. The strategy can effectively bypass the fault node, and ensure the efficient and stable transmission of the data in the network by utilizing the position and connection relation of the normal node and the network flow data.
The following is a detailed description of the training process of the route repair model:
as shown in fig. 4, the training process of the route repair model includes the following steps:
Step S601, acquiring a plurality of fourth training samples, wherein the fourth training samples comprise the position, the connection relation and the network flow data of each node in the power distribution communication network;
for the training of the route repair model in step S601, first, a number of fourth training samples are acquired, where the fourth training samples include the location, connection relationship, and network traffic data of each node in the power distribution communication network.
It should be noted that the route repair model is an unsupervised deep reinforcement learning model. For this model, the 'environment' simulates the power distribution communication network, in which each node and connection represents a part of the network. The task of the route repair model is to find an optimal routing path in this simulated environment; to achieve this goal, it needs to continuously try different paths and receive feedback based on the network performance (e.g. delay, throughput). This feedback is used as reward or penalty signals to guide the learning process of the model.
Step S602, each fourth training sample is respectively input into a route repair model to be trained for iterative training until a fourth loss function converges to obtain a trained route repair model, wherein a topology matrix is generated according to the position, the connection relation and the network flow data of each node in the current fourth training sample during each training, the topology matrix generates a route strategy through reinforcement learning and a sorting mechanism in a single-agent algorithm, and the fourth loss function is calculated according to the route strategy and a preset reward function.
For step S602, each fourth training sample obtained in step S601 is respectively input into the route repair model to be trained for iterative training until the fourth loss function converges, so as to obtain a trained route repair model. Specifically, first, the entire network system is regarded as a single agent responsible for finding the optimal routing path in the network.
The state space (state space) is denoted as (Dr, Rt, Ot), where Dr represents the length of the routing path, Rt represents the sum of the ingress bandwidth utilization of all nodes on the path, and Ot represents the sum of the ingress traffic of all nodes on the path.
The weight of each node is denoted as (I1, I2, ..., IN) and is taken as the action space (action space), where N is the number of nodes.
The reward function is defined as follows:
rt = −2, if the output route is a dead end
rt = clip(−β·Rt·Ot, −2, 0), otherwise
Where clip(x, min, max) is a clipping function, meaning that x is clipped into the range (min, max), and β is a scaling coefficient. When the network outputs a routing strategy, the algorithm uses a detection function to judge whether the route is a dead end; if so, a reward value of −2 is given, otherwise the algorithm computes the negative product of the sum of the ingress bandwidth utilization and the sum of the ingress traffic of all nodes on the route and clips it into the range (−2, 0). The purpose of the clipping is to enhance the stability of the training process. Meanwhile, −RtOt is multiplied by a coefficient so that most values originally fall within the (−2, 0) range. In this way, different states receive different rewards, so that the model can reflect the output effect of the neural network more accurately.
Next, the parameters $\theta^{\mu}$ of the Actor network and the parameters $\theta^{Q_i}$, $i \in \{1, 2\}$, of the two Critic networks are initialized. The target Actor network and the target Critic networks are initialized by copying the parameters of the corresponding Actor and Critic networks, i.e. $\theta^{\mu'} = \theta^{\mu}$ and $\theta^{Q_i'} = \theta^{Q_i}$. At the same time, the capacity of the replay buffer, the discount factor $\gamma$, the learning rate, the soft update coefficient $\tau$, the maximum step size $N_{steps}$ and the maximum number of episodes $N_{episode}$ are determined.
The initial state of the system is input into the Actor network to obtain the action:

$$a_t = \mathrm{clip}\left(\mu(s_t \mid \theta^{\mu}) + \kappa,\ a_{Low},\ a_{High}\right)$$

where $\kappa$ is exploration noise and $a_{Low}$, $a_{High}$ are the lower and upper bounds of the action.
The reward $r_t$ fed back by the system, the action taken $a_t$, the current state $s_t$ and the next state $s_{t+1}$ are stored in the replay buffer. When updating the networks, a small batch of experience samples is randomly drawn from the replay buffer, and the target action is then calculated through the target Actor network:
$$a_{t+1} = \mathrm{clip}\left(\mu'(s_{t+1} \mid \theta^{\mu'}) + \mathrm{clip}(\kappa', -c, c),\ a_{Low},\ a_{High}\right)$$

where $\kappa'$ is the target policy smoothing noise and $c$ is its clipping bound.
After the target action is obtained, the target value is calculated by selecting the minimum of the outputs of the two target Critic networks:

$$y_t = r_t + \gamma \min_{i \in \{1,2\}} Q_i'\left(s_{t+1}, a_{t+1} \mid \theta^{Q_i'}\right)$$
The parameters of the Critic networks are updated by gradient descent to minimize the fourth loss function:

$$L\left(\theta^{Q_i}\right) = \frac{1}{M} \sum_{t} \left( y_t - Q_i\left(s_t, a_t \mid \theta^{Q_i}\right) \right)^2, \quad i \in \{1, 2\}$$

where $M$ is the size of the sampled mini-batch.
In addition, the target networks and the Actor network are updated less frequently than the Critic networks in this algorithm. Thus, every $t_0$ training steps, gradient ascent is used to update the Actor network:

$$\nabla_{\theta^{\mu}} J \approx \frac{1}{M} \sum_{t} \nabla_{a} Q_1\left(s_t, a \mid \theta^{Q_1}\right)\Big|_{a = \mu(s_t \mid \theta^{\mu})} \nabla_{\theta^{\mu}} \mu\left(s_t \mid \theta^{\mu}\right)$$

where $a_t = \mu(s_t \mid \theta^{\mu})$.
Meanwhile, every $t_0$ training steps, the target networks are updated with the soft update coefficient:

$$\theta^{\mu'} \leftarrow \tau \theta^{\mu} + (1 - \tau)\theta^{\mu'}, \qquad \theta^{Q_i'} \leftarrow \tau \theta^{Q_i} + (1 - \tau)\theta^{Q_i'}, \quad i \in \{1, 2\}$$
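The update rules above follow a twin-critic, delayed-update actor-critic pattern. The compact PyTorch sketch below illustrates one possible reading; the network sizes, noise scale, clipping bound and optimizer handling are illustrative assumptions rather than the patent's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Maps a state to a bounded action (the node-weight vector)."""
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(),
                                 nn.Linear(64, a_dim), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Maps a (state, action) pair to a scalar Q-value."""
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def td3_update(actor, critics, t_actor, t_critics, actor_opt, critic_opt,
               batch, step, gamma=0.99, tau=0.005, sigma=0.2, c=0.5,
               a_low=-1.0, a_high=1.0, t0=2):
    """One update of the twin-critic, delayed-update scheme described above."""
    s, a, r, s_next = batch  # mini-batch sampled from the replay buffer

    with torch.no_grad():
        # Target action: target actor output plus clipped smoothing noise.
        noise = (torch.randn_like(a) * sigma).clamp(-c, c)
        a_next = (t_actor(s_next) + noise).clamp(a_low, a_high)
        # Target value: reward plus discounted minimum of the two target critics.
        q_next = torch.min(t_critics[0](s_next, a_next), t_critics[1](s_next, a_next))
        y = r + gamma * q_next

    # Critic update: minimise the mean-squared error (the fourth loss function).
    critic_loss = sum(F.mse_loss(Q(s, a), y) for Q in critics)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Delayed Actor and target-network updates every t0 steps.
    if step % t0 == 0:
        actor_loss = -critics[0](s, actor(s)).mean()  # gradient ascent on Q
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        for tgt, src in [(t_actor, actor), (t_critics[0], critics[0]), (t_critics[1], critics[1])]:
            for p_t, p in zip(tgt.parameters(), src.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)  # soft (Polyak) update
```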
For next-hop selection, the distances from a node's neighbor nodes to the destination and the weights output by the neural network are sorted independently. The two rank lists are then added element by element to obtain a combined rank list, and the next neighbor node is selected using this combined list. For example, node 0 has three neighbors $N_L = [1, 2, 3]$ whose distances to the destination are $d = [1200, 1000, 1100]$; the distance rank list is then $O_d = [2, 0, 1]$ (ordered from small to large). Similarly, the neighbor weight list is sorted; if the weights are $w = [0.9, 0.2, 0.4]$, the weight rank list is $O_w = [2, 0, 1]$ (ordered from small to large). Next, the distance ranks and weight ranks are added item by item, giving the combined rank list $O_a = [4, 0, 2]$ under the above assumption. Finally, the index of the minimum of this sum is selected as the index of the next-hop node; for example, the minimum value of $O_a$ is 0 and its index is 1, so, according to the neighbor node list $N_L = [1, 2, 3]$, node 2 is selected as the next-hop node of node 0.
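The rank-sum selection in this example can be reproduced with a short sketch; applying `argsort` twice yields each element's rank, and the final assertion checks the worked figures above.

```python
import numpy as np

def next_hop(neighbors, dist_to_dest, weights):
    """Pick the next-hop neighbor by summing distance ranks and weight ranks.

    neighbors:    list of neighbor node ids, e.g. [1, 2, 3]
    dist_to_dest: distance from each neighbor to the destination
    weights:      weight output by the neural network for each neighbor
    """
    # argsort of argsort yields each element's rank (smallest value -> rank 0)
    rank_d = np.argsort(np.argsort(dist_to_dest))  # e.g. [2, 0, 1]
    rank_w = np.argsort(np.argsort(weights))       # e.g. [2, 0, 1]
    combined = rank_d + rank_w                     # e.g. [4, 0, 2]
    return neighbors[int(np.argmin(combined))]     # minimum at index 1 -> node 2

assert next_hop([1, 2, 3], [1200, 1000, 1100], [0.9, 0.2, 0.4]) == 2
```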
And S7, updating the connection relation of each node by the power distribution communication network according to the optimal routing path.
For step S7, the optimal routing path obtained in step S6 is based only on the connection information of the normal nodes and does not take the faulty nodes into account. The power distribution communication network therefore reconfigures the connection relation of each node according to this optimal routing path, ensuring that all normal nodes can communicate with each other efficiently and stably while the connections to the faulty nodes are completely disconnected, thereby maintaining the overall stability of the network.
The power distribution communication network predicts future flow values by analyzing the historical flow data, so that the network can identify the nodes which are likely to have faults in advance, and necessary repair measures are taken before the faults actually occur.
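A minimal PyTorch sketch of the kind of one-step-ahead traffic forecast this relies on is shown below; the window length, hidden size and the randomly generated history are illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

class FlowForecaster(nn.Module):
    """LSTM that maps a window of historical traffic values to the next value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predicted flow for the next time step

# Hypothetical usage: predict the next flow value from a 24-step history.
model = FlowForecaster()
history = torch.randn(1, 24, 1)       # placeholder traffic series for one node
next_flow = model(history)
```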
As shown in fig. 5, on the basis of the above-mentioned method item embodiments, a corresponding apparatus item embodiment is provided;
The embodiment of the invention provides a power distribution communication network fault repairing device, which comprises a flow data acquisition module, a flow prediction module, a node risk determination module, a node definition module, a route data acquisition module, an optimal route path determination module and a path updating module;
the flow data acquisition module is used for acquiring first historical network flow time series data of each node in the power distribution communication network;
The flow prediction module is used for inputting the first historical network flow time series data into the trained flow prediction model for each node in the power distribution communication network, so that the flow prediction model determines a first flow prediction value of the next time step according to the first historical network flow time series data;
the node risk determining module is used for inputting the first flow predicted value of each node into the trained network state sensing model so that the network state sensing model determines the risk level of each node according to the first flow predicted value of each node;
The node definition module is used for determining nodes with risk levels belonging to preset levels as fault nodes and determining all nodes except the fault nodes in the power distribution communication network as normal nodes;
the route data acquisition module is used for acquiring the position, the connection relation and the network flow data of the normal node;
The optimal route path determining module is used for inputting the position, the connection relation and the network traffic data of the normal node into the trained route repair model so that the route repair model determines an optimal route path according to the position, the connection relation and the network traffic data of the normal node;
And the path updating module is used for updating the connection relation of each node according to the optimal routing path by the power distribution communication network.
It can be understood that the above-mentioned embodiment of the apparatus corresponds to the embodiment of the method of the present invention, and may implement the method for repairing a fault of a power distribution communication network provided by any one of the above-mentioned embodiments of the method of the present invention.
It should be noted that the above-described embodiment of the apparatus is merely illustrative, and some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment solution. In addition, in the drawings of the embodiment of the device provided by the invention, the connection relation between the modules represents that the modules have communication connection, and can be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
On the basis of the embodiment of the power distribution communication network fault repairing method, another embodiment of the invention provides a terminal device, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor executes the computer program to realize the power distribution communication network fault repairing method according to any embodiment of the invention.
Illustratively, in this embodiment the computer program may be partitioned into one or more modules, which are stored in the memory and executed by the processor to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program in the terminal device.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory.
The processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal device and connects the various parts of the entire terminal device using various interfaces and lines.
On the basis of the method item embodiment, another embodiment of the invention provides a computer readable storage medium, which comprises a stored computer program, wherein the computer program is used for controlling equipment where the computer readable storage medium is located to execute the power distribution communication network fault restoration method according to any one of the method item embodiments.
The integrated modules/units of the power distribution communication network fault repairing device/terminal device, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when the computer program is executed by a processor, the steps of each of the method embodiments described above may be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, such changes and modifications are also intended to be within the scope of the invention.
Claims (10)
1. A method for repairing a fault in a power distribution communication network, comprising:
Acquiring first historical network flow time series data of each node in a power distribution communication network;
For each node in the power distribution communication network, inputting first historical network flow time series data into a trained flow prediction model, so that the flow prediction model determines a first flow prediction value of the next time step according to the first historical network flow time series data;
inputting the first flow predicted value of each node into a trained network state sensing model, so that the network state sensing model determines the risk level of each node according to the first flow predicted value of each node;
Determining nodes with risk levels belonging to preset levels as fault nodes, and determining all nodes except the fault nodes in the power distribution communication network as normal nodes;
acquiring the position, connection relation and network flow data of a normal node;
Inputting the position, the connection relation and the network traffic data of the normal node into a trained route repair model, so that the route repair model determines an optimal routing path according to the position, the connection relation and the network traffic data of the normal node;
and updating the connection relation of each node by the power distribution communication network according to the optimal routing path.
2. The power distribution communication network fault remediation method of claim 1 further comprising:
And inputting the first flow predicted value of the fault node into a trained fault diagnosis model, so that the fault diagnosis model determines a fault label of the fault node according to the first flow predicted value of the fault node.
3. The power distribution communication network fault remediation method according to claim 2, wherein the flow prediction model is determined by:
The method comprises the steps of obtaining a plurality of first training samples, wherein the first training samples comprise second historical network flow time series data of each node in a power distribution communication network and actual flow values of the next time step;
And respectively inputting each first training sample into an initial long-short-term memory neural network model for iterative training until the first loss function converges to obtain a trained flow prediction model, wherein during each training, the second historical network flow time series data of each node in the current first training sample is transmitted forward for a plurality of times to obtain a second flow prediction value, and the first loss function is calculated according to the second flow prediction value and the actual flow value corresponding to the next time step in the first training sample.
4. A method of fault remediation of a power distribution communication network according to claim 3 wherein the network state awareness model is determined by:
obtaining a plurality of second training samples, wherein the second training samples comprise flow values of each node in the power distribution communication network and risk grades of the corresponding nodes;
And respectively inputting each second training sample into an initial automatic encoder model for iterative training until a second loss function converges to obtain a trained network state perception model, wherein during each training, the flow value of each node in the current second training sample is compressed and reduced in dimension through an encoder arranged in the automatic encoder model to obtain a first low-dimension feature in a potential space, the first low-dimension feature is clustered through a graph automatic encoder arranged in the automatic encoder model to obtain a predicted clustering result, and the second loss function is calculated according to the predicted clustering result and the risk level of a corresponding node in the corresponding second training sample.
5. The method of claim 4, wherein the encoder built into the automatic encoder model is determined by:
And respectively inputting each second training sample into an encoder to be trained and a decoder to be trained in the automatic encoder model for iterative training until the reconstruction loss converges to obtain an encoder arranged in the automatic encoder model, wherein during each training, the flow value of each node in the current second training sample is compressed and reduced in dimension through the encoder to be trained in the automatic encoder model to obtain a second low-dimensional feature in a potential space, reconstructing the second low-dimensional feature through the decoder to be trained in the automatic encoder model to obtain reconstruction data, and calculating the reconstruction loss according to the reconstruction data and the flow value of each node in the corresponding second training sample.
6. The method of claim 5, wherein the fault diagnosis model is determined by:
The method comprises the steps of obtaining a plurality of third training samples, wherein the third training samples comprise second historical network flow time series data of each node in a power distribution communication network and fault labels of the corresponding nodes;
and respectively inputting each third training sample into a fault diagnosis model to be trained for iterative training until a third loss function converges to obtain a trained fault diagnosis model, wherein during each training, feature mapping and label classification are carried out on second historical network flow time series data of each node in the current third training sample to obtain a prediction label, and the third loss function is calculated according to the prediction label and the corresponding fault label.
7. The method of claim 6, wherein the route repair model is determined by:
obtaining a plurality of fourth training samples, wherein the fourth training samples comprise the position, the connection relation and the network flow data of each node in a power distribution communication network;
And respectively inputting each fourth training sample into a route repair model to be trained for iterative training until a fourth loss function converges to obtain a trained route repair model, wherein a topology matrix is generated according to the position, the connection relation and the network flow data of each node in the current fourth training sample during each training, the topology matrix generates a route strategy through reinforcement learning and a sequencing mechanism in a single-agent algorithm, and the fourth loss function is calculated according to the route strategy and a preset reward function.
8. The power distribution communication network fault repairing device is characterized by comprising a flow data acquisition module, a flow prediction module, a node risk determination module, a node definition module, a route data acquisition module, an optimal route path determination module and a path updating module;
The flow data acquisition module is used for acquiring first historical network flow time series data of each node in the power distribution communication network;
The flow prediction module is used for inputting the first historical network flow time series data into the trained flow prediction model for each node in the power distribution communication network, so that the flow prediction model determines a first flow prediction value of the next time step according to the first historical network flow time series data;
the node risk determining module is used for inputting the first flow predicted value of each node into the trained network state sensing model so that the network state sensing model determines the risk level of each node according to the first flow predicted value of each node;
the node definition module is used for determining nodes with risk levels belonging to preset levels as fault nodes and determining all nodes except the fault nodes in the power distribution communication network as normal nodes;
the route data acquisition module is used for acquiring the position, the connection relation and the network flow data of the normal node;
The optimal routing path determining module is used for inputting the position, the connection relation and the network traffic data of the normal node into the trained routing repair model so that the routing repair model determines an optimal routing path according to the position, the connection relation and the network traffic data of the normal node;
and the path updating module is used for updating the connection relation of each node according to the optimal routing path by the power distribution communication network.
9. A terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the power distribution communication network fault remediation method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium comprising a stored computer program, wherein the computer program, when run, controls a device in which the computer readable storage medium is located to perform the method of fault remediation of a power distribution communication network according to any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411720767.1A CN119544603A (en) | 2024-11-28 | 2024-11-28 | A method, device, terminal equipment and computer-readable storage medium for repairing power distribution communication network faults |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411720767.1A CN119544603A (en) | 2024-11-28 | 2024-11-28 | A method, device, terminal equipment and computer-readable storage medium for repairing power distribution communication network faults |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119544603A true CN119544603A (en) | 2025-02-28 |
Family
ID=94710790
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411720767.1A Pending CN119544603A (en) | 2024-11-28 | 2024-11-28 | A method, device, terminal equipment and computer-readable storage medium for repairing power distribution communication network faults |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119544603A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120358567A (en) * | 2025-06-23 | 2025-07-22 | 北京前景无忧电子科技股份有限公司 | Communication route optimization method, system and equipment suitable for power distribution network |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111970163A (en) * | 2020-06-30 | 2020-11-20 | 网络通信与安全紫金山实验室 | Network flow prediction method of LSTM model based on attention mechanism |
| CN112231142A (en) * | 2020-09-22 | 2021-01-15 | 南方电网调峰调频发电有限公司信息通信分公司 | System backup recovery method and device, computer equipment and storage medium |
| CN114219045A (en) * | 2021-12-30 | 2022-03-22 | 国网北京市电力公司 | Dynamic early warning method, system and device for risk of power distribution network and storage medium |
| CN115801549A (en) * | 2023-01-28 | 2023-03-14 | 中国人民解放军国防科技大学 | Adaptive network recovery method, device and equipment based on key node identification |
| CN118520379A (en) * | 2024-07-23 | 2024-08-20 | 国网陕西省电力有限公司电力科学研究院 | A GIS disconnect switch risk assessment method, device, equipment and storage medium |
| CN118631513A (en) * | 2024-06-05 | 2024-09-10 | 广州市杰青计算机有限公司 | An intelligent network integrated optimization system |
| CN118660014A (en) * | 2024-08-19 | 2024-09-17 | 苏州爱雄斯通信技术有限公司 | Dynamic load balancing method and system for optical communication device |
| CN119024097A (en) * | 2024-08-14 | 2024-11-26 | 广东电网有限责任公司 | A distribution network fault diagnosis method, device, terminal equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2023279674A1 (en) | Memory-augmented graph convolutional neural networks | |
| CN110601777B (en) | Method for estimating satellite-ground downlink co-channel interference under low-orbit mobile satellite constellation | |
| CN114116995B (en) | Session recommendation method, system and medium based on enhanced graph neural network | |
| CN118214718B (en) | Congestion control method, electronic device, storage medium, and program product | |
| CN112613227A (en) | Model for predicting remaining service life of aero-engine based on hybrid machine learning | |
| Urgun et al. | Composite power system reliability evaluation using importance sampling and convolutional neural networks | |
| CN119544603A (en) | A method, device, terminal equipment and computer-readable storage medium for repairing power distribution communication network faults | |
| CN117591722A (en) | Public opinion propagation prediction method and system based on social network public opinion dynamics model | |
| CN114662658B (en) | A hotspot prediction method for on-chip optical networks based on LSTM neural network | |
| CN118429004B (en) | Commodity order prediction method in supply chain network and related products | |
| CN119355538B (en) | Battery state of charge estimation method based on fuzzy mathematics and genetic algorithm | |
| CN115841179A (en) | Power system situation sensing method based on graph digital twins | |
| KR20250068149A (en) | Method of data imputation for multivariate time series | |
| CN115800274B (en) | 5G distribution network feeder automation self-adaptation method, device and storage medium | |
| CN113704570B (en) | Large-scale complex network community detection method based on self-supervision learning type evolution | |
| JP3757722B2 (en) | Multi-layer neural network unit optimization method and apparatus | |
| Etefaghi et al. | AdaInNet: an adaptive inference engine for distributed deep neural networks offloading in IoT-FOG applications based on reinforcement learning | |
| CN117454750A (en) | Temperature prediction method, device, equipment and storage medium | |
| CN117332818A (en) | Fault diagnosis method based on self-adaptive graph neural network multi-source data fusion | |
| Shi | A method of optimizing network topology structure combining Viterbi algorithm and Bayesian algorithm | |
| Shu et al. | Link prediction based on 3D convolutional neural network | |
| Mahmoudabadi et al. | Online one pass clustering of data streams based on growing neural gas and fuzzy inference systems | |
| CN120499024A (en) | Communication network fault early warning method and device, terminal equipment and storage medium | |
| CN119151703B (en) | Multi-target large-scale community detection method based on agent model self-adaptive selection | |
| KR102875852B1 (en) | Method and System for Training Dynamic Sub-Modules for Network Management based on Reinforcement Learning with Parametric Reward |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||