Attention mechanism-based knowledge graph relation prediction method and device

Document No.: 135784    Publication date: 2021-10-22

Note: This technology, "Attention mechanism-based knowledge graph relation prediction method and device", was designed and created by 李弼程, 李佳乐, 杜文倩, 皮慧娟, 王华珍 and 王成 on 2021-08-11. Its main content is as follows: The invention discloses an attention mechanism-based knowledge graph relation prediction method and device. A Trans model is used to obtain embeddings of the triplet entities, treating the relation in a triplet as a translation operation between the head entity and the tail entity to obtain a triplet vector representation. The full text information of the entity descriptions of the triplets in the knowledge graph is embedded with a Doc2Vec model to obtain an entity description vector representation. The triplet vector representation obtained from the Trans model is combined with an entity level type mapping matrix to obtain an entity type vector representation. The triplet entity vector fusing the triplet vector representation, the entity description vector representation and the entity type vector representation is used as the input of the encoder; the encoder designs an attention mechanism based on the knowledge graph to obtain relation-level, entity-level and triplet-level weights, and the decoder reconstructs the knowledge graph using the ConvKB model to perform relation prediction. The invention can be used for knowledge graph reasoning, inferring unknown potential knowledge from known knowledge.

1. A knowledge graph relation prediction method based on an attention mechanism is characterized by comprising the following steps:

S1, acquiring a triplet vector representation by using a Trans model based on the triplets in the knowledge graph;

S2, embedding the text information of the entity description by adopting a Doc2Vec model to obtain an entity description vector representation;

S3, combining the triplet vector representation with an entity level type mapping matrix to obtain an entity type vector representation;

S4, connecting the triplet vector representation, the entity description vector representation and the entity type vector representation to obtain a triplet entity vector;

S5, constructing an encoder based on an attention mechanism and a graph neural network, inputting the triplet entity vector into the encoder, updating the embedded representations of the entities and relations, and outputting to obtain a triplet vector representation based on the hierarchy;

and S6, using a ConvKB model as a decoder, inputting the hierarchy-based triplet vector representation into the decoder to reconstruct the knowledge graph, outputting scores of the triplets, and judging whether the relation of a triplet in the knowledge graph holds based on the scores of the triplets.

2. The attention mechanism-based knowledge graph relationship prediction method of claim 1, wherein the Trans model in step S1 comprises a TransE model or a TransR model.

3. The attention mechanism-based knowledge-graph relationship prediction method according to claim 1, wherein the step S2 specifically comprises:

randomly generating, from the entity description information, an N-dimensional document vector x_{paragraph-id} and a one-hot word vector x_{i-m}, ..., x_{i+m} for each word in the N-dimensional document, where m refers to the window size and i refers to the index of the current headword predicted from the context,

performing dimensionality reduction on the document vector x_{paragraph-id} and the word vectors x_{i-m}, ..., x_{i+m}:

v_{i-m} = V x_{i-m}, v_{i-m+1} = V x_{i-m+1}, ..., v_{i+m} = V x_{i+m}, v_{paragraph-id} = V x_{paragraph-id}; where V is a matrix of n rows and N columns, with n much less than N,

obtaining a central word vector y_i through the reduced word vectors and document vector:

y_i = U(v_{i-m} + ... + v_{i+m} + v_{paragraph-id});

where U is a matrix of N rows and n columns, and the central word vector is further normalized through a softmax function:

ŷ_i = softmax(y_i);

taking the one-hot word vector x_i from initialization as the true value and ŷ_i as the predicted value, ŷ_i is trained with a logistic function, and the objective function, the cross-entropy between the true value and the predicted value, is minimized through a stochastic gradient descent method:

L = -Σ_i x_i log(ŷ_i),

updating and outputting the entity description vector representation.

4. The attention mechanism-based knowledge-graph relationship prediction method according to claim 1, wherein the step S3 specifically comprises:

let k be the number of all entity types of entity e; for each entity type, c_j represents the jth type to which entity e belongs, M_{c_j} is the mapping matrix of c_j, and α_j is the weight corresponding to c_j, which can be obtained from the frequency with which entity e belongs to c_j; for a particular triplet (h, r, t), the head entity mapping matrix is calculated as:

M_{rh} = Σ_{c_j∈C_{rh}} α_j M_{c_j} / Σ_{c_j∈C_{rh}} α_j;

where C_{rh} represents, given a relation r, the type set of the head entity,

and similarly C_{rt} is, given a relation r, the type set of the tail entity; M_c is the projection matrix of type c, defined as the product of the sub-type mapping matrices along the hierarchy:

M_c = M_{c^(1)} M_{c^(2)} ... M_{c^(m)};

where m is the number of layers of the hierarchical type and M_{c^(i)} denotes the mapping matrix of the ith sub-type c^(i);

multiplying M_{rh} and M_{rt} with the triplet vector representation obtained by TransE or TransR to obtain the entity type vector representation.

5. The attention mechanism-based knowledge-graph relationship prediction method according to claim 1, wherein the step S4 specifically comprises:

the first loss function connecting the triplet vector representation, the entity description vector representation and the entity type vector representation is:

L = Σ_{(h,r,t)∈T} Σ_{(h',r',t')∈T'} max(0, γ + d(h + r, t) - d(h' + r', t'));

where γ is a hyper-parameter measuring the margin between correct and incorrect triplets,

T'={(h',r,t)|h'∈E}∪{(h,r',t)|r'∈R}∪{(h,r,t')|t'∈E};

where T is the positive-example triplet set and T' is the negative-example triplet set, obtained by randomly replacing the head entity, the tail entity or the relation of a positive-example triplet, and d(h + r, t) is the distance measure between h + r and t:

d(h+r,t)=||h+r-t||;

concatenating the triplet vector representation, the entity description vector representation and the entity type vector representation, the final entity embedding is defined as:

e = [e_s || e_d || e_t];

where e_s, e_d and e_t are respectively the triplet vector representation, the entity description vector representation and the entity type vector representation, and || is the concatenation operator,

and performing stochastic gradient descent on the first loss function to obtain the final entity embedding e, the final entity embeddings being combined into a triplet entity vector through an energy function, where the energy function is:

E(h,r,t)=||h+r-t||。

6. The attention mechanism-based knowledge-graph relationship prediction method of claim 5, wherein the step S5 specifically comprises:

calculating the weights of the neighbor-node relations of entity h of the triplet entity vector:

a_{h,r} = W_1[h || r];

α_{h,r} = softmax_{r∈N_h}(σ(a_{h,r}));

where || represents the splicing operation; h, r ∈ R^d respectively represent the embedded representations of entity h and relation r, and d represents the embedding dimension; W_1 is a training parameter, N_h represents the neighbor set of entity h, σ is the LeakyReLU function, a_{h,r} is the vector representation of the triplet (h, r, t) at the relation level, and α_{h,r} is the relation-level attention score of the neighbor node,

the relation embedding v_r between the head and tail entities can then be expressed as:

v_r = α_{h,r} r;

calculating weights for the neighbor entities:

b_{h,r,t} = W_2[h || v_r || t];

β_{h,r,t} = softmax(σ(b_{h,r,t}));

where t ∈ R^d represents the embedded representation of entity t; R_{ht} represents the set of relations between entity h and entity t; W_2 represents a training parameter; b_{h,r,t} is the vector representation of the triplet (h, r, t) at the entity level, and the finally obtained β_{h,r,t} is the entity-level attention score of the neighbor node;

calculating the triplet-level score:

η_{h,r,t} = α_{h,r} · β_{h,r,t};

where η_{h,r,t} represents the weight of the triplet (h, r, t) when representing entity h,

by computing the relation attention, the neighbor-node attention and the triplet attention, entity h is represented as:

h' = σ(Σ_{r'∈N_h} Σ_{t'} η_{h,r',t'} b_{h,r',t'});

where h' represents the embedded representation of entity h after adding the local neighborhood weights and b_{h,r',t'} represents the triplet vector representation after adding the local neighborhood weights, the hierarchy-based triplet vector representation output by the encoder being (h', r', t').

7. The attention mechanism-based knowledge-graph relationship prediction method of claim 6, wherein the step S6 specifically comprises:

defining f(h, r, t) as the score by which a triplet semantically matches the triplet representation of the ConvKB model; the matrix [h', r', t'] formed by connecting the hierarchy-based triplet vector representations is input to the convolutional layer of the ConvKB model, on which a plurality of filters are used to generate different feature maps, the scoring function over the feature maps being represented as:

f(h, r, t) = concat(σ([h', r', t'] * w_m), m = 1, ..., Ω) · W;

where w_m represents the mth convolutional filter; Ω is a hyper-parameter representing the number of filters; W ∈ R^{Ωk×1} represents a linear transformation matrix; and k represents the embedding dimension of h, r and t;

the second loss function corresponding to the decoder is defined as:

L = Σ_{(h,r,t)∈S∪S'} log(1 + exp(l_{(h,r,t)} · f(h, r, t))) + (λ/2)||W||²;

where S is the set of positive-example triplets and S' is the set of constructed negative-example triplets, obtained by randomly replacing the head entity or the tail entity of a positive-example triplet, that is:

S' = {(h', r, t) | h' ∈ E} ∪ {(h, r, t') | t' ∈ E};

the positive and negative example triplets are distinguished by:

l_{(h,r,t)} = 1 for (h, r, t) ∈ S, and l_{(h,r,t)} = -1 for (h, r, t) ∈ S';

and judging whether the relation of the triples in the knowledge graph is established or not according to the scores of the triples.

8. An attention mechanism-based knowledge-graph relationship prediction apparatus, comprising:

the triplet vector representation module is configured to obtain triplet vector representation by utilizing a Trans model based on triples in the knowledge graph;

the entity description vector representation module is configured to embed the text information of the entity description by adopting a Doc2Vec model to obtain the entity description vector representation;

an entity type vector representation module configured to combine the triple vector representation with an entity level type mapping matrix to obtain an entity type vector representation;

the connection module is configured to connect the triple vector representation, the entity description vector representation and the entity type vector representation to obtain a triple entity vector;

the encoder module is configured to construct an encoder based on an attention mechanism and a graph neural network, input the triplet entity vector into the encoder, update the embedded representation of the entity and the relationship, and output to obtain a triplet vector representation based on a hierarchy;

a decoder module configured to use a ConvKB model as a decoder, input the hierarchy-based triplet vector representation into the decoder to reconstruct the knowledge graph, output scores of triplets, and determine whether a relation of a triplet in the knowledge graph holds based on the scores of the triplets.

9. An electronic device, comprising:

one or more processors;

a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.

10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.

Technical Field

The invention relates to the field of triple relation prediction, in particular to a knowledge graph relation prediction method and device based on an attention mechanism.

Background

In 2012, Google proposed the concept of the knowledge graph and applied it to its search engine. Since then, the construction of large-scale knowledge graphs has advanced greatly, and a large number of knowledge graphs have been developed, with YAGO, DBpedia and FreeBase among the most representative. At present, knowledge graphs play an important role in many artificial intelligence applications, such as intelligent question answering, information recommendation and web search. A knowledge graph is a structured semantic network that stores a large number of fact triples (head entity, relation, tail entity), usually abbreviated as (h, r, t).

However, as knowledge graphs grow in scale, their data types diversify and the relations between entities become more and more complicated; traditional symbol- and logic-based methods make knowledge graph applications challenging because of their low computational efficiency. To solve this problem, representation learning was proposed and has developed vigorously.

The goal of representation learning is to map the entities and relations in knowledge graph triplets into a low-dimensional dense vector space, converting traditional logic- and symbol-based operations into numerical vector computations. Energy-function-based representation learning models, being simple and efficient, achieve good results on tasks such as link prediction and triplet classification, and are widely applied in fields such as knowledge graph completion and entity alignment. However, most such models consider only the triplet information of the knowledge graph; the rich text information and type information in the knowledge graph are fused to a low degree and in a single manner, even though this information is important for reducing the ambiguity of entities and relations and improving the accuracy of inference and prediction.

At present, knowledge graph representation learning methods fall mainly into three classes: models based on tensor decomposition, models based on translation operations, and models fusing multi-source information. Tensor-decomposition-based representation learning is typified by the RESCAL model, which encodes the knowledge graph as a tensor: if a triplet exists in the knowledge graph, the corresponding tensor entry is set to 1, and otherwise to 0. However, the RESCAL model requires a large number of parameters and is computationally inefficient. Translation-based representation learning is typified by the TransE model, which treats the relation in a triplet as a translation operation between the head entity and the tail entity, under the basic assumption that a true triplet (h, r, t) should satisfy h + r ≈ t. TransE is effective for one-to-one relations but has problems handling one-to-many, many-to-one and many-to-many relations. Many models improve on TransE, but they consider only the triplet structure information in the knowledge graph and do not effectively exploit the large amount of other information related to entities and relations, so the semantics of entities and relations remain unclear. For representation learning with multi-source information fusion, existing work mainly considers knowledge representation learning with entity descriptions and joint representation learning of text and knowledge bases, whose information sources and fusion means are very limited. In addition, the entity distribution in a knowledge graph exhibits a long-tail phenomenon, and some entities have no corresponding description text in heterogeneous data sources. Entity types, as latent variables, can serve as supplementary information to the text and enrich the semantics of entities and relations.

However, whether a knowledge graph is constructed automatically or manually, it is to some extent incomplete. Performing relation prediction on the knowledge graph to infer implicit, unknown knowledge is currently a research hotspot. Graph neural network models can model the nodes and relations of the knowledge graph structure and thereby predict unknown relations.

Disclosure of Invention

An object of the embodiments of the present application is to provide an attention mechanism-based knowledge graph relation prediction method and apparatus, so as to solve the technical problems mentioned in the background above.

In a first aspect, an embodiment of the present application provides an attention mechanism-based knowledge graph relation prediction method, including the following steps:

S1, acquiring a triplet vector representation by using a Trans model based on the triplets in the knowledge graph;

S2, embedding the text information of the entity description by adopting a Doc2Vec model to obtain an entity description vector representation;

S3, combining the triplet vector representation with the entity level type mapping matrix to obtain an entity type vector representation;

S4, connecting the triplet vector representation, the entity description vector representation and the entity type vector representation to obtain a triplet entity vector;

S5, constructing an encoder based on an attention mechanism and a graph neural network, inputting the triplet entity vector into the encoder, updating the embedded representations of the entities and relations, and outputting to obtain a triplet vector representation based on the hierarchy;

and S6, using the ConvKB model as a decoder, inputting the hierarchy-based triplet vector representation into the decoder to reconstruct the knowledge graph, outputting scores of the triplets, and judging whether the relation of a triplet in the knowledge graph holds based on the scores of the triplets.

In some embodiments, the Trans model in step S1 comprises a TransE model or a TransR model.

In some embodiments, step S2 specifically includes:

randomly generating, from the entity description information, an N-dimensional document vector x_{paragraph-id} and a one-hot word vector x_{i-m}, ..., x_{i+m} for each word in the N-dimensional document, where m is the window size and i is the index of the current headword predicted from the context,

performing dimensionality reduction on the document vector x_{paragraph-id} and the word vectors x_{i-m}, ..., x_{i+m}:

v_{i-m} = V x_{i-m}, v_{i-m+1} = V x_{i-m+1}, ..., v_{i+m} = V x_{i+m}, v_{paragraph-id} = V x_{paragraph-id}; where V is a matrix of n rows and N columns, with n much less than N,

obtaining a central word vector y_i through the reduced word vectors and document vector:

y_i = U(v_{i-m} + ... + v_{i+m} + v_{paragraph-id});

where U is a matrix of N rows and n columns, and the central word vector is further normalized through a softmax function:

ŷ_i = softmax(y_i);

taking the one-hot word vector x_i from initialization as the true value and ŷ_i as the predicted value, ŷ_i is trained with a logistic function, and the objective function, the cross-entropy between the true value and the predicted value, is minimized through a stochastic gradient descent method:

L = -Σ_i x_i log(ŷ_i),

and updating and outputting the entity description vector representation.

In some embodiments, step S3 specifically includes:

let k be the number of all entity types of entity e; for each entity type, c_j represents the jth type to which entity e belongs, M_{c_j} is the mapping matrix of c_j, and α_j is the weight corresponding to c_j, which can be obtained from the frequency with which entity e belongs to c_j; for a particular triplet (h, r, t), the head entity mapping matrix is calculated as:

M_{rh} = Σ_{c_j∈C_{rh}} α_j M_{c_j} / Σ_{c_j∈C_{rh}} α_j;

where C_{rh} represents, given a relation r, the type set of the head entity,

and similarly C_{rt} is, given a relation r, the type set of the tail entity; M_c is the projection matrix of type c, defined as the product of the sub-type mapping matrices along the hierarchy:

M_c = M_{c^(1)} M_{c^(2)} ... M_{c^(m)};

where m is the number of layers of the hierarchical type and M_{c^(i)} denotes the mapping matrix of the ith sub-type c^(i);

multiplying M_{rh} and M_{rt} with the triplet vector representation obtained by TransE or TransR yields the entity type vector representation.

In some embodiments, step S4 specifically includes:

the first loss function connecting the triplet vector representation, the entity description vector representation and the entity type vector representation is:

L = Σ_{(h,r,t)∈T} Σ_{(h',r',t')∈T'} max(0, γ + d(h + r, t) - d(h' + r', t'));

where γ is a hyper-parameter measuring the margin between correct and incorrect triplets,

T′={(h′,r,t)|h′∈E}∪{(h,r′,t)|r′∈R}∪{(h,r,t′)|t′∈E};

where T is the positive-example triplet set and T' is the negative-example triplet set, obtained by randomly replacing the head entity, the tail entity or the relation of a positive-example triplet, and d(h + r, t) is the distance measure between h + r and t:

d(h+r,t)=||h+r-t||;

concatenating the triplet vector representation, the entity description vector representation and the entity type vector representation, the final entity embedding is defined as:

e = [e_s || e_d || e_t];

where e_s, e_d and e_t are respectively the triplet vector representation, the entity description vector representation and the entity type vector representation, and || is the concatenation operator,

and performing stochastic gradient descent on the first loss function to obtain the final entity embedding e, the final entity embeddings being combined into a triplet entity vector through an energy function, where the energy function is:

E(h,r,t)=||h+r-t||。

in some embodiments, step S5 specifically includes:

calculating the weights of the neighbor-node relations of entity h of the triplet entity vector:

a_{h,r} = W_1[h || r];

α_{h,r} = softmax_{r∈N_h}(σ(a_{h,r}));

where || represents the splicing operation; h, r ∈ R^d respectively represent the embedded representations of entity h and relation r, and d represents the embedding dimension; W_1 is a training parameter, N_h represents the neighbor set of entity h, σ is the LeakyReLU function, a_{h,r} is the vector representation of the triplet (h, r, t) at the relation level, and α_{h,r} is the relation-level attention score of the neighbor node,

the relation embedding v_r between the head and tail entities can then be expressed as:

v_r = α_{h,r} r;

calculating weights for the neighbor entities:

b_{h,r,t} = W_2[h || v_r || t];

β_{h,r,t} = softmax(σ(b_{h,r,t}));

where t ∈ R^d represents the embedded representation of entity t; R_{ht} represents the set of relations between entity h and entity t; W_2 represents a training parameter; b_{h,r,t} is the vector representation of the triplet (h, r, t) at the entity level, and the finally obtained β_{h,r,t} is the entity-level attention score of the neighbor node;

calculating the triplet-level score:

η_{h,r,t} = α_{h,r} · β_{h,r,t};

where η_{h,r,t} represents the weight of the triplet (h, r, t) when representing entity h,

by computing the relation attention, the neighbor-node attention and the triplet attention, entity h is represented as:

h' = σ(Σ_{r'∈N_h} Σ_{t'} η_{h,r',t'} b_{h,r',t'});

where h' represents the embedded representation of entity h after adding the local neighborhood weights and b_{h,r',t'} represents the triplet vector representation after adding the local neighborhood weights, the hierarchy-based triplet vector representation output by the encoder being (h', r', t').

In some embodiments, step S6 specifically includes:

defining f(h, r, t) as the score by which a triplet semantically matches the triplet representation of the ConvKB model; the matrix [h', r', t'] formed by connecting the hierarchy-based triplet vector representations is input to the convolutional layer of the ConvKB model, on which a plurality of filters are used to generate different feature maps, the scoring function over the feature maps being represented as:

f(h, r, t) = concat(σ([h', r', t'] * w_m), m = 1, ..., Ω) · W;

where w_m represents the mth convolutional filter; Ω is a hyper-parameter representing the number of filters; W ∈ R^{Ωk×1} represents a linear transformation matrix; and k represents the embedding dimension of h, r and t;

the second loss function corresponding to the decoder is defined as:

L = Σ_{(h,r,t)∈S∪S'} log(1 + exp(l_{(h,r,t)} · f(h, r, t))) + (λ/2)||W||²;

where S is the set of positive-example triplets and S' is the set of constructed negative-example triplets, obtained by randomly replacing the head entity or the tail entity of a positive-example triplet, that is:

S' = {(h', r, t) | h' ∈ E} ∪ {(h, r, t') | t' ∈ E};

the positive and negative example triplets are distinguished by:

l_{(h,r,t)} = 1 for (h, r, t) ∈ S, and l_{(h,r,t)} = -1 for (h, r, t) ∈ S';

and judging whether the relation of the triples in the knowledge graph is established or not according to the scores of the triples.

In a second aspect, an embodiment of the present application provides an attention-based knowledge-graph relationship prediction apparatus, including:

the triplet vector representation module is configured to obtain triplet vector representation by utilizing a Trans model based on triples in the knowledge graph;

the entity description vector representation module is configured to embed the text information of the entity description by adopting a Doc2Vec model to obtain the entity description vector representation;

an entity type vector representation module configured to combine the triple vector representation with the entity level type mapping matrix to obtain an entity type vector representation;

the connection module is configured to connect the triple vector representation, the entity description vector representation and the entity type vector representation to obtain a triple entity vector;

the encoder module is configured to construct an encoder based on an attention mechanism and a graph neural network, input the triplet entity vector into the encoder, update the embedded representation of the entity and the relationship, and output to obtain a triplet vector representation based on the hierarchy;

and the decoder module is configured to adopt a ConvKB model as a decoder, input the hierarchy-based triplet vector representation into the decoder to reconstruct the knowledge graph, output the scores of the triplets, and judge whether the relation of a triplet in the knowledge graph holds based on the scores of the triplets.

In a third aspect, embodiments of the present application provide an electronic device comprising one or more processors; storage means for storing one or more programs which, when executed by one or more processors, cause the one or more processors to carry out a method as described in any one of the implementations of the first aspect.

In a fourth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the method as described in any of the implementations of the first aspect.

Compared with the prior art, the invention has the following beneficial effects:

(1) the invention integrates knowledge graph representation learning models and fuses the triplet information, the entity description information and the entity type information of the knowledge graph, thereby reducing the ambiguity of entities and relations;

(2) through entity description and entity type embedding, the invention considers all the semantic information of the entity descriptions and uses a Doc2Vec model to represent the description information, improving the semantic information of the knowledge graph triplet entity representations; considering that triplet entities are of various types and the types are hierarchical, the hierarchical type information is represented and spliced together with the translation-model embeddings, so that the trained representation learning model improves the application performance of the knowledge graph;

(3) on the basis of a graph neural network, the invention adds an attention mechanism and assigns different weights to relation nodes and neighbor nodes respectively, solving the problem that relation entities and neighbor entities contribute to different degrees.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

FIG. 1 is an exemplary device architecture diagram in which one embodiment of the present application may be applied;

FIG. 2 is a schematic flow chart of a method for attention-based knowledge-graph relationship prediction according to an embodiment of the present invention;

FIG. 3 is a diagram of an overall model of a knowledge-graph relationship prediction method based on an attention mechanism according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of a triplet vector representation acquisition of an attention-based knowledge-graph relationship prediction method according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of entity description vector representation acquisition of an attention mechanism-based knowledge graph relation prediction method according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of entity type vector representation acquisition for an attention-based knowledge-graph relationship prediction method according to an embodiment of the present invention;

FIG. 7 is a multi-headed attention map of the attention mechanism-based knowledge-graph relationship prediction method of an embodiment of the present invention;

FIG. 8 is a ConvKB model diagram of the attention-based knowledge-graph relationship prediction method according to the embodiment of the present invention;

FIG. 9 is a schematic diagram of an attention mechanism based knowledge-map relationship prediction apparatus according to an embodiment of the present invention;

fig. 10 is a schematic structural diagram of a computer device suitable for implementing an electronic apparatus according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Fig. 1 illustrates an exemplary device architecture 100 to which the attention mechanism-based knowledge-graph relationship prediction method or the attention mechanism-based knowledge-graph relationship prediction device of the embodiments of the present application may be applied.

As shown in fig. 1, the apparatus architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various applications, such as data processing type applications, file processing type applications, etc., may be installed on the terminal apparatuses 101, 102, 103.

The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.

The server 105 may be a server that provides various services, such as a background data processing server that processes files or data uploaded by the terminal devices 101, 102, 103. The background data processing server can process the acquired file or data to generate a processing result.

It should be noted that the attention-based knowledge graph relation prediction method provided in the embodiment of the present application may be executed by the server 105, or may be executed by the terminal devices 101, 102, and 103, and accordingly, the attention-based knowledge graph relation prediction apparatus may be disposed in the server 105, or may be disposed in the terminal devices 101, 102, and 103.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the processed data does not need to be acquired from a remote location, the above device architecture may not include a network, but only a server or a terminal device.

Fig. 2 illustrates a method for predicting a knowledge-graph relationship based on an attention mechanism according to an embodiment of the present application, and an overall flowchart structural diagram is shown in fig. 3, which includes the following steps:

and S1, obtaining the vector representation of the triples by using a Trans model based on the triples in the knowledge graph.

In a specific embodiment, the Trans model in step S1 includes a TransE model or a TransR model.

1) The specific steps of obtaining the triplet vector representation through the TransE model are as follows:

Firstly, vector representations of the triplet head entity, relation and tail entity are randomly generated. A triplet is denoted (h, r, t), where h is the head entity, t is the tail entity, and the relation r is regarded as a translation from the head entity h to the tail entity t; their vector representations are denoted h, r and t respectively, as shown in fig. 4.

Secondly, based on the idea that the relation is a translation operation between the head entity and the tail entity, negative sample data (h', r', t') are randomly generated using the following formula, where E and R respectively represent the entity set and the relation set of the knowledge graph.

T'={(h',r,t)|h'∈E}∪{(h,r',t)|r'∈R}∪{(h,r,t')|t'∈E};

where T is the positive-example triplet set and T' is the negative-example triplet set, obtained by randomly replacing the head entity, the tail entity or the relation of a positive-example triplet.

Finally, the objective function L(h, r, t) below is optimized to obtain the TransE-based triplet vector representation, which is used to obtain the entity type vector representation:

L(h, r, t) = Σ_{(h,r,t)∈T} Σ_{(h',r',t')∈T'} max(0, γ + d(h + r, t) - d(h' + r', t'));

where d(h + r, t) = ||h + r - t|| is the distance measure between h + r and t, and γ is a hyper-parameter measuring the margin between correct and incorrect triplets.

Regarding the relation r in each triplet instance (h, r, t) as a translation operation (translation) from the head entity h to the tail entity t, continuously adjusting the vector representations h, r and t of h, r and t by optimizing an objective function, so that h + r is approximately equal to t, and finally obtaining the triplet vector representation (h, r, t).
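As a concrete illustration, a minimal sketch of this TransE distance and margin loss is given below (PyTorch; the class layout, the embedding dimension of 200 and the negative-sampling step are illustrative assumptions, not the exact implementation of the invention):

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Minimal TransE: the relation is a translation from head to tail."""
    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def distance(self, h, r, t):
        # d(h + r, t) = ||h + r - t||
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), dim=-1)

    def margin_loss(self, pos, neg, gamma=1.0):
        # max(0, gamma + d(positive) - d(negative)) averaged over the batch,
        # where negatives are positives with h, r or t randomly replaced
        return torch.clamp(gamma + self.distance(*pos) - self.distance(*neg),
                           min=0).mean()
```

Optimizing this loss with stochastic gradient descent drives h + r toward t for true triplets, which is exactly the adjustment described above.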

2) The specific steps of obtaining the triplet vector representation through the TransR model are as follows:

the TransE model assumes that entities and relationships are in the same semantic space, so that similar entities have similar positions in space, however, each entity can have many aspects, and different relationships concern different aspects of the entity. Therefore, the TransR model establishes respective relationship spaces for different relationships, and the entity is mapped to the relationship spaces for calculation.

First, each relation has a transformation matrix M_r and a representation vector r in its own relation space. The vector representations of the head entity and the tail entity are mapped into the relation space through the transformation matrix, i.e., M_r is multiplied with the head-entity and tail-entity vectors, giving the TransR-based triplet vector representation in the relation space.

Specifically, the entity representation space and the relation representation space are separated, and the head and tail entities are mapped into the relation vector space through the relation mapping matrix M_r to obtain h_r and t_r, namely:

h_r = h M_r;

t_r = t M_r;

and the triplet score is calculated with the head and tail entity vectors projected into the relation representation space:

f_r(h, t) = ||h_r + r - t_r||;

yielding the triplet vector representation (h, r, t).

Then, negative sample data is generated.

And finally, the objective function is optimized, where the objective function is:

L = Σ_{(h,r,t)∈T} Σ_{(h',r',t')∈T'} max(0, γ + d(h + r, t) - d(h' + r', t'));

where d(h + r, t) = ||h + r - t|| is the distance measure between h + r and t, and γ is a hyper-parameter measuring the margin between correct and incorrect triplets.

Triple vector representations based on either the TransE model or the TransR model may be used to obtain the entity type vector representation.
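Under the same illustrative assumptions, the TransR projection step differs from TransE only in mapping the entity vectors through a per-relation matrix M_r before measuring the distance; a sketch:

```python
import torch
import torch.nn as nn

class TransR(nn.Module):
    """Minimal TransR: project entities into a per-relation space, then translate."""
    def __init__(self, num_entities, num_relations, ent_dim=200, rel_dim=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, ent_dim)
        self.rel = nn.Embedding(num_relations, rel_dim)
        # one flattened projection matrix M_r per relation
        self.proj = nn.Embedding(num_relations, ent_dim * rel_dim)
        self.ent_dim, self.rel_dim = ent_dim, rel_dim

    def distance(self, h, r, t):
        M_r = self.proj(r).view(-1, self.ent_dim, self.rel_dim)
        h_r = torch.bmm(self.ent(h).unsqueeze(1), M_r).squeeze(1)  # h_r = h M_r
        t_r = torch.bmm(self.ent(t).unsqueeze(1), M_r).squeeze(1)  # t_r = t M_r
        return torch.norm(h_r + self.rel(r) - t_r, dim=-1)
```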

And S2, embedding the text information described by the entity by adopting a Doc2Vec model to the entity description information to obtain an entity description vector representation.

After keywords are extracted from the entity description information, the text information of the entity description is embedded, as shown in fig. 5. Step S2 specifically includes:

randomly generating, from the entity description information, an N-dimensional document vector x_{paragraph-id} and a one-hot word vector x_{i-m}, ..., x_{i+m} for each word in the N-dimensional document, where m refers to the window size and i refers to the index of the current headword predicted from the context,

performing dimensionality reduction on the document vector x_{paragraph-id} and the word vectors x_{i-m}, ..., x_{i+m}:

v_{i-m} = V x_{i-m}, v_{i-m+1} = V x_{i-m+1}, ..., v_{i+m} = V x_{i+m}, v_{paragraph-id} = V x_{paragraph-id}; where V is a matrix of n rows and N columns with n much smaller than N, reducing the document vector and the word vectors to n dimensions.

The central word vector y_i is then obtained through the reduced word vectors and document vector:

y_i = U(v_{i-m} + ... + v_{i+m} + v_{paragraph-id});

where U is a matrix of N rows and n columns, and the central word vector is further normalized through a softmax function:

ŷ_i = softmax(y_i);

taking the one-hot word vector x_i from initialization as the true value and ŷ_i as the predicted value, ŷ_i is trained with a logistic function, and the objective function, the cross-entropy between the true value and the predicted value, is minimized through a stochastic gradient descent method:

L = -Σ_i x_i log(ŷ_i),

and updating and outputting the entity description vector representation.
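For reference, the widely used gensim implementation of Doc2Vec realizes this PV-DM training procedure end to end; the sketch below is illustrative, and the entity IDs, texts and hyper-parameter values are placeholders:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# one tagged document per entity description; the tag identifies the entity
descriptions = {
    "entity_1": "barack obama served as the 44th president of the united states",
    "entity_2": "honolulu is the capital city of the us state of hawaii",
}
corpus = [TaggedDocument(words=text.split(), tags=[eid])
          for eid, text in descriptions.items()]

# dm=1 selects PV-DM: predict the center word from the context window
# plus the paragraph (document) vector, as described above
model = Doc2Vec(corpus, dm=1, vector_size=100, window=3, min_count=1, epochs=50)

e_d = model.dv["entity_1"]  # entity description vector representation
```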

And S3, combining the triple vector representation with the entity level type mapping matrix to obtain entity type vector representation.

Fig. 6 illustrates combining the triplet vector representation with the entity level type mapping matrix; entity types are hierarchical. Therefore, the entities under each entity type need to be mapped first. Moreover, under the complex relation patterns 1-N, N-1 and N-N, entities have different representations under different relations. To better perform complex relation prediction, entities under a specific relation need to be mapped; finally, an entity type vector representation fusing the hierarchical types is obtained.

In a specific embodiment, step S3 specifically includes:

let k be the number of all entity types of entity e; for each entity type, c_j represents the jth type to which entity e belongs, M_{c_j} is the mapping matrix of c_j, and α_j, the weight corresponding to c_j, can be obtained from the frequency with which entity e belongs to c_j; in the embodiment of the present application, α_j is set by this frequency. For a particular triplet (h, r, t), the head entity mapping matrix is computed as:

M_{rh} = Σ_{c_j∈C_{rh}} α_j M_{c_j} / Σ_{c_j∈C_{rh}} α_j;

where C_{rh} represents, given a relation r, the type set of the head entity.

Similarly, C_{rt} is, given a relation r, the type set of the tail entity, and M_c is the projection matrix of type c.

Then, in the projection process, the entities are first mapped into a more general sub-type space and then into a more precise sub-type space. M_c is defined as the product of the sub-type mapping matrices along the hierarchy:

M_c = M_{c^(1)} M_{c^(2)} ... M_{c^(m)};

where m is the number of layers of the hierarchical type and M_{c^(i)} denotes the mapping matrix of the ith sub-type c^(i).

Finally, M_{rh} and M_{rt} are multiplied with the triplet vector representation obtained by TransE or TransR to obtain the entity type vector representation.

Subsequently, the triplet entity vector fusing the triplet information, the entity description information and the entity type information of the knowledge graph is taken as the input of the encoder and updated in the encoder. A small sketch of the hierarchical type projection is given below.
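This sketch follows the weighted-sum reading of the formulas above (NumPy; the matrix shapes, the type-set variable C_rh and the frequency-derived weights are illustrative assumptions):

```python
import numpy as np

def hierarchical_matrix(subtype_mats):
    """M_c = M_{c^(1)} @ ... @ M_{c^(m)}: project through progressively
    more precise sub-type spaces."""
    M = np.eye(subtype_mats[0].shape[0])
    for M_i in subtype_mats:
        M = M @ M_i
    return M

def relation_type_matrix(type_ids, type_mats, type_weights):
    """Weighted combination over the type set C_rh (or C_rt):
    M_rh = sum_j alpha_j * M_{c_j} / sum_j alpha_j."""
    alphas = np.array([type_weights[c] for c in type_ids])
    mats = np.stack([type_mats[c] for c in type_ids])
    return np.tensordot(alphas, mats, axes=1) / alphas.sum()

# entity type vector representation: project the TransE/TransR embedding h,
# e.g. e_type = relation_type_matrix(C_rh, type_mats, weights) @ h
```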

And S4, connecting the triple vector representation, the entity description vector representation and the entity type vector representation to obtain a triple entity vector.

In a specific embodiment, step S4 specifically includes:

The loss function for concatenating the triplet vector representation, the entity description vector representation and the entity type vector representation is:

L = Σ_{(h,r,t)∈T} Σ_{(h',r',t')∈T'} max(0, γ + d(h + r, t) - d(h' + r', t'));

where γ is a hyper-parameter measuring the margin between correct and incorrect triplets,

T'={(h',r,t)|h'∈E}∪{(h,r',t)|r'∈R}∪{(h,r,t')|t'∈E};

where T is the positive-example triplet set and T' is the negative-example triplet set, obtained by randomly replacing the head entity, the tail entity or the relation of a positive-example triplet, and d(h + r, t) is the distance measure between h + r and t:

d(h+r,t)=||h+r-t||;

Connecting the triplet vector representation, the entity description vector representation and the entity type vector representation, the final entity embedding is obtained:

e = [e_s || e_d || e_t];

where e_s, e_d and e_t are respectively the triplet vector representation, the entity description vector representation and the entity type vector representation, and || is the concatenation operator,

and stochastic gradient descent is performed on the loss function to obtain the final entity embedding e; the final entity embeddings are combined into a triplet entity vector through an energy function, where the energy function is:

E(h,r,t)=||h+r-t||。

and performing optimization training through the energy function.
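The fusion itself is a plain concatenation followed by the energy check; a minimal sketch under the same illustrative assumptions:

```python
import torch

def fuse(e_s, e_d, e_t):
    # e = [e_s || e_d || e_t]: concatenate the triplet, description and
    # type representations into the final entity embedding
    return torch.cat([e_s, e_d, e_t], dim=-1)

def energy(h, r, t):
    # E(h, r, t) = ||h + r - t||, small for plausible triplets
    return torch.norm(h + r - t, dim=-1)
```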

And S5, constructing an encoder based on the attention mechanism and the graph neural network, inputting the triplet entity vector into the encoder, updating the embedded representation of the entity and the relationship, and outputting to obtain the triplet vector representation based on the hierarchy.

To avoid model overfitting, the embodiment of the present application uses a multi-head attention mechanism to obtain more information, as shown in the multi-head attention diagram of fig. 7. The triplet entity vector obtained by fusing the triplet vector representation, the entity description vector representation and the entity type vector representation is used as the input of the encoder, in which an attention mechanism is designed to calculate the weights of an entity's relations, neighbor nodes and triplets, in order to update the embedded representations of entities and relations. The triplet entity vector is the input of the encoder: for an entity e_1 with input (e_1, r_3, e_2), the output of the encoder is the updated representation (e_1', r_3', e_2'), and the encoder establishes the hierarchy during this process. Examining the local structure of the knowledge graph as a sub-structure diverging from the center e_1, the 4 entities e_2, e_3, e_4 and e_5 are connected to e_1 through the 3 relations (edges) r_1, r_2 and r_3, and the embedded representation of e_1 is updated in an information-propagation manner.

In a specific embodiment, step S5 specifically includes:

When representing an entity, the edges (relations) connected to the entity have different weights. Thus, the weights are computed for the neighbor-node relations of entity h of the triplet entity vector:

a_{h,r} = W_1[h || r];

α_{h,r} = softmax_{r∈N_h}(σ(a_{h,r}));

where || represents the splicing operation; h, r ∈ R^d respectively represent the embedded representations of entity h and relation r, and d represents the embedding dimension; W_1 is a training parameter, N_h represents the neighbor set of entity h, σ is the LeakyReLU function, a_{h,r} is the vector representation of the triplet (h, r, t) at the relation level, and α_{h,r} is the relation-level attention score of the neighbor node; the weights of the relation levels connected with entity h are obtained through these two formulas.

Then, the relationship between the head entity and the tail entity embeds vrCan be expressed as:

vr=αh,rr;

After the relation embedding v_r between the head and tail entities is obtained, the weights of the neighbor entities are calculated in consideration of the relation characteristics of the entity:

b_{h,r,t} = W_2[h || v_r || t];

β_{h,r,t} = softmax(σ(b_{h,r,t}));

where t ∈ R^d represents the embedded representation of entity t; R_{ht} represents the set of relations between entity h and entity t; W_2 represents a training parameter; b_{h,r,t} is the vector representation of the triplet (h, r, t) at the entity level, for which the fusion of TransE triplet embeddings and of TransR triplet embeddings are compared respectively; the finally obtained β_{h,r,t} is the entity-level attention score of the neighbor node.

After the attention of the relationship hierarchy and the attention of the entity hierarchy are obtained, the score of the three-tuple hierarchy is calculated:

ηh,r,t=αh,r·βh,r,t

wherein eta ish,r,tRepresents the weight of the triplet (h, r, t) when representing the entity h.

By computing the relation attention, the neighbor-node attention and the triplet attention, entity h is represented as:

h' = σ(Σ_{r'∈N_h} Σ_{t'} η_{h,r',t'} b_{h,r',t'});

where h' represents the embedded representation of entity h after adding the local neighborhood weights and b_{h,r',t'} represents the triplet vector representation after adding the local neighborhood weights; the hierarchy-based triplet vector representation output by the encoder is (h', r', t').

To avoid model overfitting, the embodiment of the present application uses a multi-head attention mechanism to obtain more information: a linear transformation is first applied to the head entity, the edge (relation) and the tail entity, M scaled dot-product operations are then performed, the results are spliced together, and the attention value obtained through one final linear transformation is the multi-head attention result.
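A single-head sketch of the three-level attention around one center entity h is given below; the softmax normalizations and the choice of the tail embeddings as the aggregation target are read from the formulas above and should be treated as illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttention(nn.Module):
    """Relation-, entity- and triplet-level attention around a center entity."""
    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(2 * d, 1, bias=False)  # a_{h,r} = W1 [h || r]
        self.W2 = nn.Linear(3 * d, 1, bias=False)  # b_{h,r,t} = W2 [h || v_r || t]
        self.act = nn.LeakyReLU()

    def forward(self, h, rels, tails):
        # rels, tails: (n_neighbors, d) embeddings of neighboring edges/entities
        h_rep = h.expand_as(rels)
        alpha = F.softmax(self.act(self.W1(torch.cat([h_rep, rels], -1))), dim=0)
        v_r = alpha * rels                                  # v_r = alpha_{h,r} r
        beta = F.softmax(
            self.act(self.W2(torch.cat([h_rep, v_r, tails], -1))), dim=0)
        eta = alpha * beta                                  # triplet-level weight
        # h' = sigma( sum over neighbors of eta * neighbor message )
        return self.act((eta * tails).sum(dim=0))
```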

And S6, using the ConvKB model as a decoder, inputting the triple vector representation based on the hierarchy into the decoder to reconstruct the knowledge graph, outputting scores of the triples, and judging whether the relation of the triples in the knowledge graph is established or not based on the scores of the triples.

In a specific embodiment, step S6 specifically includes:

f(h, r, t) is defined as the score by which a triplet semantically matches the triplet representation of the ConvKB model, formed by connecting the hierarchy-based triplet vector representations. For a given triplet (h', r', t'), the purpose of the convolutional layer is to analyze the embedding properties of the triplet over all dimensions of the whole knowledge graph, while the hierarchy-based triplet vector representations obtained by the encoder are normalized so as to mine more features. The ConvKB model is shown in fig. 8: each triplet is represented as a 3-column matrix, where each column vector represents one element of the triplet.

This 3-column matrix [h', r', t'] is input to the convolutional layer of the ConvKB model, on which a plurality of filters are used to generate different feature maps; the scoring function over the feature maps is expressed as:

f(h, r, t) = concat(σ([h', r', t'] * w_m), m = 1, ..., Ω) · W;

where w_m represents the mth convolutional filter; Ω is a hyper-parameter representing the number of filters; W ∈ R^{Ωk×1} represents a linear transformation matrix; and k represents the embedding dimension of h, r and t;

The loss function corresponding to the decoder is defined as:

L = Σ_{(h,r,t)∈S∪S'} log(1 + exp(l_{(h,r,t)} · f(h, r, t))) + (λ/2)||W||²;

where S is the set of positive-example triplets and S' is the set of constructed negative-example triplets, obtained by randomly replacing the head entity or the tail entity of a positive-example triplet, that is:

S' = {(h', r, t) | h' ∈ E} ∪ {(h, r, t') | t' ∈ E};

the positive and negative example triplets are distinguished by:

l_{(h,r,t)} = 1 for (h, r, t) ∈ S, and l_{(h,r,t)} = -1 for (h, r, t) ∈ S'.

and judging whether the relation of the triples in the knowledge graph is established or not according to the scores of the triples.

The feature maps are concatenated into a single feature vector representing the input triplet; this feature vector is multiplied with the weight vector W via a dot product to return the score of the triplet as the final output of the decoder, indicating whether the predicted knowledge graph relation is valid.
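A sketch of this ConvKB-style scoring, using 1×3 filters over the k×3 matrix [h, r, t] as in the original ConvKB paper (the filter count and activation function here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ConvKBDecoder(nn.Module):
    """Score a triplet by convolving the k x 3 matrix [h, r, t]."""
    def __init__(self, k, num_filters=64):
        super().__init__()
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(1, 3))
        self.W = nn.Linear(num_filters * k, 1, bias=False)  # weight vector W
        self.act = nn.ReLU()

    def forward(self, h, r, t):
        A = torch.stack([h, r, t], dim=-1).unsqueeze(1)  # (batch, 1, k, 3)
        feats = self.act(self.conv(A))                   # (batch, filters, k, 1)
        return self.W(feats.flatten(start_dim=1))        # triplet score
```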

The idea of the invention is that, for a given triplet (h, r, t) in which the head entity h or the tail entity t is missing, the goal of the relation prediction task is to predict the missing head or tail entity. The scores of the candidate triplets are first calculated, then sorted in descending order, and finally the rank of the correct entity is recorded. The performance of the model is therefore evaluated using the following 3 evaluation strategies:

(1) Hits@N (N = 1, 3, 10): the proportion of correct entities ranked in the top N;

(2) Mean Rank (MR): if the correct entity is ranked at the nth position, its rank is n; the final MR value is the average rank over all correct entities;

(3) Mean Reciprocal Rank (MRR): if the predicted correct entity is ranked at the nth position, its score is 1/n; the final MRR value is the average of these scores over all correct entities.

Lower MR values and higher Hits @ N or MRR values generally indicate better performance of the model.
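All three metrics follow directly from the 1-based rank of each correct entity among the scored candidates; a small sketch:

```python
def link_prediction_metrics(ranks, ns=(1, 3, 10)):
    """ranks: 1-based rank of the correct entity for each test triplet."""
    n = len(ranks)
    mr = sum(ranks) / n                       # Mean Rank: lower is better
    mrr = sum(1.0 / r for r in ranks) / n     # Mean Reciprocal Rank
    hits = {f"Hits@{k}": sum(r <= k for r in ranks) / n for k in ns}
    return mr, mrr, hits
```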

To evaluate the validity of the proposed model, the invention was validated using the following 4 data sets:

(1) WN18 RR: a subset of the large-scale knowledge base WordNet, containing 11 relations and 40,943 entities;

(2) FB 15K-237: a subset of the large-scale knowledge base, FreeBase, containing 237 relationships and 14,541 entities;

(3) NELL-995: a subset of NELL data set containing 200 relationships and 75,492 entities;

(4) kinship: a kinship dataset consisting of 24 unique names in two families with the same structure, containing 25 relations such as wife and father.

In the experiment, 4 data sets are divided into a training set, a verification set and a test set, and detailed data statistics of the data sets are shown in table 1.

Table 1 Data statistics of the data sets

To verify the effectiveness of the method of the invention, the invention was compared with the following 8 models as baseline:

(1) TransE: one of the most widely used relational prediction models;

(2) ConvE: a popular CNN-based model;

(3) ConvKB: the best current model based on CNN;

(4) DistMult: a tensor decomposition model for calculating the triple score by using a bilinear scoring function;

(5) ComplEx: an extension model of the DistMult model;

(6) R-GCN: an extended model of a Graph Convolutional neural Network (GCN) models a neighbor entity under a specific relationship;

(7) n-hopGAT: performing weight calculation on different triplets based on Graph Attention Network (GAT);

(8) A2N: a GNN-based model that learns query-dependent entity representations.

The experimental settings are as follows:

For the encoder, the input and output vector dimensions of the model are both 200, and the number of heads of the multi-head attention mechanism is 2; for the decoder, the vector dimensions of the relations and entities are also set to 200. Furthermore, the learning rate is set to 0.005. The experimental results on the respective data sets are shown in tables 2 and 3; for each data set, the best results are shown in bold and the second-best results are underlined.

TABLE 2 Experimental comparison of data sets FB15k-237 and WN18RR

TABLE 3 comparison of the experiments on data sets NELL-995 and kinship

To further verify the validity of the model, the influence of entity neighbor nodes with different numbers of relations was analyzed. The training sets of FB15k-237 and WN18RR were each divided into 3 subsets by entity degree: entities whose degree lies in the top 10% of the training set form the first subset, entities in the 10%-50% range form the second subset, and the remaining entities form the third. A new test set was then obtained for each subset: a triplet (h, r, t) is assigned to a subset's test set if its head entity belongs to that subset. The data statistics are shown in table 4.

Table 4 Data statistics of the degree-based subsets

Method Accuracy (%)
TransE 82.6
TransR 83.4
DKRL 86.3
TKRL 85.7
DistMult 80.8
ComplEx 81.8
Analogy 82.1
SimplE 81.5
AutoKGE 82.7
The model of the invention (TransE fusion) 87.2
The model of the invention (TransR fusion) 88.7

The experiment was performed only with the decoder ConvKB and the results are shown in tables 5 and 6, with the best results for each index shown in bold.

TABLE 5 Experimental comparison results on FB15k-237

TABLE 6 Experimental comparison results on WN18RR

As can be seen from tables 5 and 6, the entity with higher degree has more neighbors connected with it, which is significant for updating the entity. In addition, experimental results show that the method effectively captures the local graph structure information of the entity, obviously improves the indexes of the relation prediction task, and has better performance.

With further reference to fig. 9, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a knowledge-graph relationship prediction apparatus based on an attention mechanism, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied to various electronic devices.

The embodiment of the application provides a knowledge graph relation prediction device based on an attention mechanism, which comprises:

the triplet vector representation module 1 is configured to obtain triplet vector representation by using a Trans model based on triplets in the knowledge graph;

the entity description vector representation module 2 is configured to embed the text information of the entity description by adopting a Doc2Vec model to obtain the entity description vector representation;

an entity type vector representation module 3 configured to combine the triple vector representation with the entity level type mapping matrix to obtain an entity type vector representation;

the connection module 4 is configured to connect the triple vector representation, the entity description vector representation and the entity type vector representation to obtain a triple entity vector;

the encoder module 5 is configured to construct an encoder based on an attention mechanism and a graph neural network, input the triplet entity vector into the encoder, update the embedded representation of the entity and the relationship, and output to obtain a triplet vector representation based on a hierarchy;

and the decoder module 6 is configured to adopt a ConvKB model as a decoder, input the hierarchy-based triplet vector representation into the decoder to reconstruct the knowledge graph, output the scores of the triplets, and judge whether the relation of a triplet in the knowledge graph holds based on the scores of the triplets.

In summary, the present invention provides a method and an apparatus for knowledge graph relation prediction based on an attention mechanism, using an encoder-decoder architecture. At the encoder, attention mechanisms are designed to compute the weights of the relations, neighbor nodes, and triples of an entity, which are used to update the embedded representations of entities and relations. At the decoder, the ConvKB model is used to reconstruct the knowledge graph. Finally, the relation prediction task was performed on 4 data sets, and the results show that the model of the invention performs well. Because the invention uses a graph neural network to extract the structural features of the knowledge graph and a convolutional neural network to decode, the model has relatively high time complexity.

Referring now to fig. 10, a schematic diagram of a computer apparatus 1000 suitable for implementing an electronic device (e.g., the server or the terminal device shown in fig. 1) according to an embodiment of the present application is shown. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.

As shown in fig. 10, the computer apparatus 1000 includes a Central Processing Unit (CPU) 1001 and a Graphics Processor (GPU) 1002, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1003 or a program loaded from a storage section 1009 into a Random Access Memory (RAM) 1004. The RAM 1004 also stores various programs and data necessary for the operation of the apparatus 1000. The CPU 1001, GPU 1002, ROM 1003, and RAM 1004 are connected to each other via a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus 1005.

The following components are connected to the I/O interface 1006: an input section 1007 including a keyboard, a mouse, and the like; an output section 1008 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 1009 including a hard disk and the like; and a communication section 1010 including a network interface card such as a LAN card or a modem. The communication section 1010 performs communication processing via a network such as the Internet. A drive 1011 may also be connected to the I/O interface 1006 as needed. A removable medium 1012, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1011 as necessary, so that a computer program read out therefrom is installed into the storage section 1009 as needed.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1010, and/or installed from the removable medium 1012. When executed by the Central Processing Unit (CPU) 1001 and the Graphics Processor (GPU) 1002, the computer program performs the above-described functions defined in the methods of the present application.

It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based devices that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The modules described in the embodiments of the present application may be implemented by software or hardware. The modules described may also be provided in a processor.

As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire triple vector representations from the triples in the knowledge graph by using a Trans model; embed the text information of the entity descriptions using a Doc2Vec model to obtain the entity description vector representation; combine the triple vector representation with the entity level type mapping matrix to obtain the entity type vector representation; connect the triple vector representation, the entity description vector representation and the entity type vector representation to obtain the triple entity vector; construct an encoder based on an attention mechanism and a graph neural network, input the triple entity vector into the encoder, update the embedded representations of entities and relations, and output the hierarchy-based triple vector representation; and adopt a ConvKB model as a decoder, input the hierarchy-based triple vector representation into the decoder to reconstruct the knowledge graph, output scores of the triples, and judge whether the relation of a triple in the knowledge graph holds based on its score.

The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
