Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is an overall flowchart of a knowledge representation learning method based on dual-agent reinforcement learning path search according to an embodiment of the present invention. Referring to Fig. 1, the embodiment provides a knowledge representation learning method based on dual-agent reinforcement learning path search, which includes three stages: data preprocessing, path search, and knowledge representation learning.
Data preprocessing stage
The data preprocessing stage mainly deletes redundant relations in the knowledge base and pre-trains the entity and relation vectors; as shown in Fig. 2, the specific process is as follows:
Step 1-1: A knowledge base KB is input and its redundant relations are deleted.
Knowledge in the knowledge base KB is represented in the form of triples (h, r, t), where h represents the head entity, r represents the relation, and t represents the tail entity. h and t belong to an entity set E, and r belongs to a relation set R. The triple (h, r, t) indicates that the relation r holds between the entity h and the entity t. The redundant relations in the knowledge base KB are deleted to obtain the processed knowledge base.
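By way of illustration, the following minimal Python sketch removes duplicate triples from a toy knowledge base; since the embodiment does not spell out the deletion rule, treating exact duplicates as the redundant relations is an assumption made here only for illustration.

```python
def remove_redundant(triples):
    """Keep each (head, relation, tail) triple only once (assumed redundancy rule)."""
    seen, cleaned = set(), []
    for triple in triples:
        if triple not in seen:       # drop repeated copies of the same triple
            seen.add(triple)
            cleaned.append(triple)
    return cleaned

kb = remove_redundant([
    ("Paris", "capital_of", "France"),
    ("Paris", "capital_of", "France"),   # redundant copy is deleted
])
print(kb)   # [('Paris', 'capital_of', 'France')]
```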
Step 1-2: vectors of entities and relationships in the knowledge base KB are pre-trained using an existing translation-based model (e.g., TransE).
The path searcher needs to utilize vectors of entities and relationships, and therefore the vectors of entities and relationships in the knowledge base KB are pre-trained using a translation-based model.
Taking TransE as an example: TransE learns a vector for each entity and relation in the knowledge base, and in a triplet (h, r, t), the vectors h, r and t corresponding to the head entity h, the relation r and the tail entity t should satisfy:
h+r=t (1)
and the vectors of the entities and relations are learned with equation (1) as the training objective.
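As a small illustration of this objective, the sketch below scores a triple by the distance ||h + r - t||; the embedding dimension, random initialization, and the L2 norm are illustrative choices rather than the exact pre-training setup of the embodiment.

```python
import numpy as np

dim = 50
rng = np.random.default_rng(0)
entity_vecs = {e: rng.normal(size=dim) for e in ("Paris", "France")}
relation_vecs = {"capital_of": rng.normal(size=dim)}

def transe_distance(h, r, t):
    """Distance ||h + r - t||: small when the triple fits the objective h + r = t."""
    return np.linalg.norm(entity_vecs[h] + relation_vecs[r] - entity_vecs[t])

print(transe_distance("Paris", "capital_of", "France"))
```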
Path search phase
The path search stage searches, according to the vectors of the entities and relations, multiple multi-hop relations between the entity pair of each triple in the knowledge base, and passes the multi-hop relations that finally reach the tail entity to the knowledge representation learning stage; as shown in Fig. 3, the specific flow is as follows:
step 2-1: the triples in the knowledge base KB are divided into batches.
The invention trains the path searcher in a batch processing mode. Triples in KB are randomly divided into batches according to a predefined batch size.
Step 2-2: taking one batch, searching the multi-hop relationship between the entity pairs of each triple in the batch through a path searcher.
The path searcher comprises a relation agent and an entity agent. Starting from the head entity of the given triple, the relation agent calculates the probability distribution over all relations of the current entity and selects one relation; the entity agent then calculates the probability distribution over all tail entities corresponding to the current entity and the selected relation and selects one entity. This process continues until the tail entity of the given triple is reached or the maximum number of steps is reached.
The path searcher is based on a reinforcement learning model and is composed of two agents, called the relation agent and the entity agent. The process of searching for a multi-hop relation between the entity pair (h, t) of a triple (h, r, t) is as follows: starting from the head entity h, at step t the relation agent selects one relation r_t from all relations of the current entity e_t, and the entity agent then selects one entity from all tail entities corresponding to e_t and r_t. This process continues until the tail entity t is reached or the number of steps reaches a preset maximum.
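The control flow of this two-agent search can be sketched as follows; uniform random choices stand in for the learned probability distributions of equations (7) and (8) below, and the toy KB index is hypothetical.

```python
import random

def search_path(kb_index, head, tail, max_steps=3):
    """kb_index: dict entity -> {relation: [tail entities]} built from the KB triples."""
    current, path = head, []
    for _ in range(max_steps):
        relations = list(kb_index.get(current, {}))
        if not relations:
            break
        r_t = random.choice(relations)                    # relation agent picks a relation
        current = random.choice(kb_index[current][r_t])   # entity agent picks a tail entity
        path.append(r_t)
        if current == tail:                               # reached the tail entity of the triple
            return path
    return None                                           # no path found within max_steps

kb_index = {"A": {"r1": ["B"]}, "B": {"r2": ["C"]}}
print(search_path(kb_index, "A", "C"))   # e.g. ['r1', 'r2']
```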
The environment of the path searcher can be viewed as a Markov decision process, represented by a four-tuple (S, A, T, R), where S represents the set of states, A the set of actions, T the transition function, and R the reward.
At step t, the state of the relation agent is denoted S_rel,t = (e_t, r, t), where e_t is the vector representation of the current entity e_t, r is the vector representation of the relation r of the triple, and t is the vector representation of the tail entity t of the triple; the state of the entity agent is denoted S_ent,t = (e_t, r, t, r_t), where r_t is the vector representation of the relation r_t selected by the relation agent.
At step t, the action set of the relation agent consists of all relations of the current entity e_t, denoted A_rel,t = {r | (e_t, r, e) ∈ KB}; the action set of the entity agent consists of all tail entities corresponding to the current entity e_t and the relation r_t selected by the relation agent, denoted A_ent,t = {e | (e_t, r_t, e) ∈ KB}.
At step t, the state of the relation agent changes from (e_t, r, t) to (e_{t+1}, r, t), and the transition of the relation agent is denoted T_rel((e_t, r, t), r_t) = (e_{t+1}, r, t); the state of the entity agent changes from (e_t, r, t, r_t) to (e_{t+1}, r, t, r_{t+1}), and the transition of the entity agent is denoted T_ent((e_t, r, t, r_t), e_{t+1}) = (e_{t+1}, r, t, r_{t+1}).
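The action sets above can be read directly from the knowledge base; a minimal sketch with a hypothetical index structure follows.

```python
def relation_actions(kb_index, e_t):
    """A_rel,t = {r | (e_t, r, e) in KB}."""
    return sorted(kb_index.get(e_t, {}))

def entity_actions(kb_index, e_t, r_t):
    """A_ent,t = {e | (e_t, r_t, e) in KB}."""
    return sorted(kb_index.get(e_t, {}).get(r_t, []))

kb_index = {"A": {"r1": ["B", "C"], "r2": ["D"]}}
print(relation_actions(kb_index, "A"))       # ['r1', 'r2']
print(entity_actions(kb_index, "A", "r1"))   # ['B', 'C']
```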
The reward of a multi-hop relation p = (r_1, r_2, …, r_n) consists of two parts, the overall accuracy and the path weight, where the overall accuracy R_g(p) is expressed as:
The path weight R_w(p) is expressed as:
where W is the weight matrix and p is the vector representation of the multi-hop relation p:
The total reward of the multi-hop relation p is then expressed as:
R(p) = R_g(p) + R_w(p) (5)
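Equations (2)-(4) defining R_g(p), R_w(p) and the path vector p are not reproduced above, so the sketch below only illustrates how the two parts are combined as in equation (5); the concrete choices made here (a binary accuracy term, the path vector as the sum of its relation vectors, and W written as a row vector so the weight term is a scalar) are assumptions for illustration only.

```python
import numpy as np

def total_reward(relation_vecs, path, reached_tail, W):
    """R(p) = R_g(p) + R_w(p); both components below are illustrative stand-ins."""
    R_g = 1.0 if reached_tail else 0.0                         # assumed overall-accuracy term
    p_vec = np.sum([relation_vecs[r] for r in path], axis=0)   # assumed path vector
    R_w = float(np.tanh(W @ p_vec))                            # assumed path-weight term using W
    return R_g + R_w

dim = 4
rng = np.random.default_rng(0)
relation_vecs = {"r1": rng.normal(size=dim), "r2": rng.normal(size=dim)}
W = rng.normal(size=dim)
print(total_reward(relation_vecs, ["r1", "r2"], reached_tail=True, W=W))
```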
Both the relation agent and the entity agent compute the probability distribution for performing each action through a decision network. The input of the decision network contains both history information and the state. The history information at step t is represented by a vector d_t, which the present invention obtains by training an RNN:
d_t = RNN(d_{t-1}, [e_{t-1}, r_{t-1}]) (6)
where [·,·] denotes the concatenation of two vectors. The inputs of the decision networks corresponding to the relation agent and the entity agent are denoted X_rel,t = [d_t, S_rel,t] and X_ent,t = [d_t, S_ent,t], respectively.
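A minimal numerical sketch of equation (6) and of the input construction is given below; the plain tanh recurrent cell and the vector sizes are assumptions, since the embodiment only states that an RNN is trained.

```python
import numpy as np

dim, hid = 4, 8
rng = np.random.default_rng(0)
W_h = 0.1 * rng.normal(size=(hid, hid))
W_x = 0.1 * rng.normal(size=(hid, 2 * dim))

def rnn_step(d_prev, e_prev, r_prev):
    """d_t = RNN(d_{t-1}, [e_{t-1}, r_{t-1}]) with an assumed tanh cell."""
    x = np.concatenate([e_prev, r_prev])           # the concatenation [e_{t-1}, r_{t-1}]
    return np.tanh(W_h @ d_prev + W_x @ x)

d_t = rnn_step(np.zeros(hid), rng.normal(size=dim), rng.normal(size=dim))
S_rel_t = np.concatenate([rng.normal(size=dim) for _ in range(3)])   # stand-in for the state (e_t, r, t)
X_rel_t = np.concatenate([d_t, S_rel_t])                             # X_rel,t = [d_t, S_rel,t]
print(X_rel_t.shape)   # (20,)
```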
The decision network is a fully-connected neural network with two hidden layers, each followed by a ReLU nonlinear layer.
The outputs of the decision networks corresponding to the relation agent and the entity agent are the probability distributions over the actions in A_rel,t and A_ent,t:
P_rel(X_rel,t) = softmax(A_rel,t O_rel,t) (7)
P_ent(X_ent,t) = softmax(A_ent,t O_ent,t) (8)
where A_rel,t and A_ent,t here denote the matrices formed by the vectors of all relations in A_rel,t and of all entities in A_ent,t, respectively, and O_rel,t and O_ent,t denote the outputs of the second ReLU layer of the decision networks corresponding to the relation agent and the entity agent. When selecting a relation or an entity, the relation agent and the entity agent make a random selection according to the computed probability distribution.
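The forward pass of equations (7) and (8) can be sketched as below; the layer sizes are illustrative, and A_t stands for the matrix whose rows are the vectors of the candidate relations or entities.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, hid, out_dim = 20, 16, 4
W1 = 0.1 * rng.normal(size=(hid, in_dim))
W2 = 0.1 * rng.normal(size=(out_dim, hid))

def action_probabilities(X_t, A_t):
    """softmax(A_t O_t), where O_t is the output of the second ReLU layer."""
    O_t = np.maximum(0, W2 @ np.maximum(0, W1 @ X_t))   # two hidden layers, each with ReLU
    scores = A_t @ O_t                                   # one score per candidate action
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

A_t = rng.normal(size=(5, out_dim))                      # 5 candidate action vectors
probs = action_probabilities(rng.normal(size=in_dim), A_t)
action = rng.choice(len(probs), p=probs)                 # random selection per the distribution
print(probs, action)
```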
For each triple in a batch, several multi-hop relations are searched using the path searcher described above.
Step 2-3: The parameters of the relation agent and the entity agent and the weight matrix are updated using the multi-hop relations searched in the batch.
The relevant parameters of the path search stage are updated by maximizing the expected cumulative reward; these parameters include the parameters of the two decision networks, the parameters of the RNN that computes the history information, and the weight matrix W. The expected cumulative reward is defined as:
where R(S_t, a_t) represents the reward of state S_t and action a_t, and P(a | X_t; θ) represents the probability of action a given the input X_t. The present invention updates the parameters by a Monte Carlo gradient, and the gradient of J(θ) is expressed as:
For a searched multi-hop relation p, when updating the parameters, the reward of each step in the process of searching the multi-hop relation is taken to be equal to R(p).
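A hedged PyTorch sketch of such a Monte Carlo (REINFORCE-style) update follows: every step of a searched multi-hop relation p is credited with the same return R(p), and the parameters move along R(p)·∇log P(a_t | X_t; θ). The tiny linear policy and the SGD optimizer are illustrative stand-ins for the decision networks and the exact update rule.

```python
import torch

torch.manual_seed(0)
policy = torch.nn.Linear(8, 3)                      # stand-in for a decision network
optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)

def reinforce_update(inputs, actions, path_reward):
    """inputs: [steps, 8] decision inputs; actions: chosen action index per step."""
    log_probs = torch.log_softmax(policy(inputs), dim=-1)
    chosen = log_probs[torch.arange(len(actions)), actions]
    loss = -(path_reward * chosen).sum()            # ascend the expected cumulative reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

reinforce_update(torch.randn(4, 8), torch.tensor([0, 2, 1, 1]), path_reward=1.5)
```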
Step 2-4: step 2-2 and step 2-3 are repeated until all batches in KB are processed.
Step 2-2 and step 2-3 are repeated to search, batch by batch, the multi-hop relations between the entity pairs of all triples in KB and to update the relevant parameters of the path search stage.
Knowledge representation learning phase
The knowledge representation learning stage learns the entity and relation vectors using both single-hop and multi-hop relations; as shown in Fig. 4, the specific process is as follows:
Step 3-1: The triples in the knowledge base KB are divided into batches.
The invention trains the knowledge representation learning model in a batch processing mode, and randomly divides the triples in KB into batches according to a predefined batch size.
Step 3-2: taking one batch, and calculating the weight of all multi-hop relations of each triple.
Given a triple (h, r, t) whose set of multi-hop relations is {p_1, …, p_K}, the weight of a multi-hop relation p_i is defined as:
where:
η_i = tanh(W p_i) (12)
where W is a weight matrix, the same matrix as the weight matrix used in the reward of the path search stage.
The multi-hop relations used here are those searched in the path search stage that finally reach the tail entity.
The weights of all multi-hop relations of each triple in the batch are calculated according to the above formulas.
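Equation (11), which defines the weight itself, is not reproduced above; the sketch below therefore assumes a softmax normalization over the scores η_i = tanh(W p_i) of equation (12) and writes W as a vector so that each η_i is a scalar, both of which are illustrative assumptions.

```python
import numpy as np

def path_weights(path_vecs, W):
    """path_vecs: [K, d] vectors of the multi-hop relations p_1 .. p_K."""
    eta = np.tanh(path_vecs @ W)   # eta_i = tanh(W p_i), one score per path (equation (12))
    exp = np.exp(eta - eta.max())
    return exp / exp.sum()         # assumed softmax normalization standing in for equation (11)

rng = np.random.default_rng(0)
print(path_weights(rng.normal(size=(3, 4)), rng.normal(size=4)))   # K = 3 weights summing to 1
```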
Step 3-3: The energy functions and losses of all triples in the batch are calculated using the single-hop and multi-hop relations, and the vectors of the entities and relations and the weight matrix are updated.
Given a triple (h, r, t) whose set of multi-hop relations is {p_1, …, p_K}, the energy function of the knowledge representation learning stage is defined as:
From the energy function, the loss function of the knowledge representation learning stage can be defined:
where γ is a predefined margin, [·]_+ denotes the maximum of 0 and the value inside the brackets, T is the positive sample set, i.e., the set of all triples in the knowledge base, and T^- is the negative sample set, expressed as:
T^- = {(h, r', t) | r' ∈ R}, (h, r, t) ∈ T (15)
the negative examples are obtained by replacing the relationship r in the triples with another relationship r' in the knowledge base.
The losses of all triples in the batch are calculated, and the vectors of the entities and relations and the weight matrix W are updated by minimizing the loss.
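A minimal sketch of the margin-based loss of equation (14) is given below; since the energy function of equation (13) is not reproduced above, a TransE-style single-hop energy ||h + r - t|| stands in for it here so that the margin and negative-sampling mechanics are visible, and γ is the predefined margin.

```python
import numpy as np

def energy(h, r, t):
    """Stand-in energy; the energy of equation (13) also uses the weighted multi-hop relations."""
    return np.linalg.norm(h + r - t)

def margin_loss(h, r, r_neg, t, gamma=1.0):
    """[gamma + E(h, r, t) - E(h, r', t)]_+ for a positive triple and its negative sample."""
    return max(0.0, gamma + energy(h, r, t) - energy(h, r_neg, t))

rng = np.random.default_rng(0)
h, t = rng.normal(size=4), rng.normal(size=4)
r_pos, r_neg = t - h, rng.normal(size=4)    # the negative sample replaces r with another relation r'
print(margin_loss(h, r_pos, r_neg, t))
```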
Step 3-4: step 3-2 and step 3-3 are repeated until all batches in KB are processed.
Step 3-2 and step 3-3 are repeated to calculate, batch by batch, the weights of the multi-hop relations, the energy functions, and the loss functions of all triples in KB, and to update the relevant parameters of the knowledge representation learning stage.
Step 3-5: If the number of iterations reaches the preset maximum, the vectors of the entities and relations are output; otherwise, go to step 2-2.
The path search stage and the knowledge representation learning stage are performed iteratively until the number of iterations reaches the preset maximum, and the vectors of the entities and relations are then output.
In the knowledge representation learning method based on dual-agent reinforcement learning path search, the path searcher searches for high-quality multi-hop relations between entities using the entity and relation vectors trained by the knowledge representation learning model, and uses two agents to make decisions during the search, so that states and information can be considered more comprehensively. The knowledge representation learning model learns the vectors of entities and relations using both single-hop relations and the searched multi-hop relations, and uses an attention mechanism to weigh each multi-hop relation. The reward in the path searcher and the weights in the knowledge representation model share part of their parameters, so that these parameters not only measure the weights of the multi-hop relations but also guide the path searcher toward multi-hop relations that are more useful for the knowledge representation learning process.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.