Joint information extraction method based on weakly supervised learning

Document No.: 1465982    Publication date: 2020-02-21

Reading note: This technology, "Joint information extraction method based on weakly supervised learning" (一种基于弱监督学习的联合信息抽取方法), was designed and created on 2019-11-12 by 宫法明, 司朋举, 李昕, 马玉辉, and 唐昱润. Abstract: The invention relates to a joint information extraction method based on weakly supervised learning, belonging to the field of natural language processing. The invention aims to solve two problems: the time-consuming and labor-intensive data-set labeling caused by information extraction based on supervised learning, and the error propagation caused by the two-subtask pipeline (information extraction is usually divided into two subtasks, entity recognition and relation extraction). The method converts information extraction into a sequence labeling task, combines a knowledge representation learning method, adopts a joint information extraction form, and uses a public knowledge base together with a small labeled data set to realize weakly supervised joint information extraction. The goal is to train a model that extracts information from text more accurately. Through the strategy of combining weakly supervised learning with joint information extraction, and through the training of an end-to-end network, the precision and recall of information extraction are improved, a substantial performance improvement over current information extraction methods.

1. A joint information extraction method based on weakly supervised learning, characterized by comprising the following specific steps:

S1, preprocessing the text information obtained by a web crawler, removing useless information, aligning the remaining text with an external knowledge base, and labeling it automatically;

S2, screening the automatically labeled text set, and labeling part of the obtained text by the Multiple-BIO labeling method based on sequence labeling;

S3, dividing the data sets generated in S1 and S2 into a training set and a test set, and converting the training set into embedded vector form, wherein the pre-training model in this step adopts the classic TransE model from representation learning;

S4, encoding the vectors with a bidirectional long short-term memory (Bi-LSTM) neural network to effectively capture the semantic information of each word;

S5, generating a predicted label sequence using an LSTM decoding layer;

S6, inputting the label prediction vector generated in the previous step into a Softmax layer, combining it with the TransE link-prediction probability, performing label classification, and outputting the probability of each entity label;

and S7, iteratively optimizing an objective function that maximizes the logarithm of the probability that the predicted label equals the true label given a sentence, to obtain an information extractor with higher accuracy.

2. The method for extracting joint information based on weakly supervised learning as claimed in claim 1, wherein for step S2, the invention manually labels part of the data from step S1 by the Multiple-BIO labeling method based on sequence labeling: each word is assigned a label helpful for extracting information; words irrelevant to the extraction result are labeled "O"; other labels are composed of three parts, the word position in the entity, the relation type, and the relation role; and if an entity belongs to several triples, its label contains several such three-part groups, existing in parallel.

3. The method for extracting joint information based on weakly supervised learning as recited in claim 1, wherein for step S3, the invention trains the knowledge representation model TransE: the training set is randomly initialized into vectors taken as input, and word vectors corresponding to the entity set and the predefined relation set of the training set are generated as output. Because the operation mainly adjusts the error between correct triples and incorrect triples, the output entity-relation word vectors change with the positive-sample vectors during adjustment. Given the entity set, the relation set, and the training set, negative samples are constructed by randomly replacing the head entity or the tail entity of a training triple; the distance between the entities and the relation of a correct triple and the corresponding distance in a negative sample are computed, and the error between the two is adjusted, so that the entity relations are represented as vectors conforming to the real relations. The TransE loss function is as follows:

L = Σ_{(h,r,t)∈Δ} Σ_{(h′,r′,t′)∈Δ′} [γ + f(h,r,t) − f(h′,r′,t′)]_+    (1)

In formula (1), the TransE loss sums, over all sample pairs, the hyperparameter plus the positive-sample distance minus the negative-sample distance: γ denotes the hyperparameter (margin), f(h,r,t) the distance of a positive sample, f(h′,r′,t′) the distance of a negative sample, Δ the positive sample set, and Δ′ the negative sample set; [x]_+ denotes max(0, x). The distance formula is:

f(h, r, t) = ‖h + r − t‖²    (2)

In formula (2), h represents the head entity, r the relation, and t the tail entity.

4. The method for extracting joint information based on weakly supervised learning as claimed in claim 1, wherein for step S4, the training-set vectors are first randomly initialized, and the Bi-LSTM bidirectional long short-term memory neural network of the invention then takes the randomly initialized vectors as input and generates prediction vectors for the target words as output. The iterative module mainly comprises a vector layer, a forward long short-term memory network layer, a backward long short-term memory network layer, and a connection layer, and the output vector changes according to the outputs of the forward and backward layers. Given the training set, the forward LSTM considers the context in front of the target word, i.e. from ω_1 to ω_t, to obtain the prediction vector c_t of the target word, computed as follows:

i_t = δ(W_{ωi} ω_t + W_{hi} h_{t−1} + W_{ci} c_{t−1} + b_i)
f_t = δ(W_{ωf} ω_t + W_{hf} h_{t−1} + W_{cf} c_{t−1} + b_f)
z_t = tanh(W_{ωz} ω_t + W_{hz} h_{t−1} + b_z)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t    (3)

In formula (3), W = {ω_1, …, ω_t, ω_{t+1}, …, ω_n} denotes a word sequence, where ω_t ∈ R^d is the d-dimensional vector representation of the t-th word of a sentence and n is the number of words in the sentence; h_{t−1} denotes the previous hidden vector of the Bi-LSTM memory module, and c_{t−1} the previous cell (original) vector of the memory module;

At the same time, the backward LSTM computation is applied to the target word, considering the context behind it, i.e. from ω_{t+1} to ω_n, to obtain another prediction vector o_t, computed as follows:

o_t = δ(W_{ωo} ω_t + W_{ho} h_{t−1} + W_{co} c_t + b_o)    (4)

The two vectors c_t and o_t thus generated are then fed into the connection layer, and the prediction vector h_t of the target word is obtained with the hyperbolic tangent function, computed as follows:

h_t = o_t · tanh(c_t)    (5)

In formula (5), c_t and o_t denote the predicted target-word vectors generated by formulas (3) and (4), and h_t denotes the prediction vector of the target word.

5. The method for extracting joint information based on weakly supervised learning as recited in claim 1, wherein for step S5, the method uses a long short-term memory network that takes the vectors obtained in step S4 as input and generates the sequence labels as output. The key operation is to take the final prediction vector h_t generated in step S4 and the forward LSTM prediction vector, multiply them by the position index of the word, update and connect them, and finally multiply the prediction vector obtained by the hyperbolic tangent operation by the word's position vector and add the corresponding bias value, yielding the predicted label vector as output, computed as follows:

i_t = δ(W_{hi} h_t + W_{Ti} T_{t−1} + b_i)
f_t = δ(W_{hf} h_t + W_{Tf} T_{t−1} + b_f)
z_t = tanh(W_{hz} h_t + W_{Tz} T_{t−1} + b_z)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t
T_t = W_T · tanh(c_t) + b_T    (6)

In formula (6), T_{t−1} denotes the previous predicted label vector; the predicted label vector T_t is obtained by the operation on the previous predicted label vector, the position information, and the bias value.

6. The method for extracting joint information based on weakly supervised learning as claimed in claim 1, wherein for step S6, the predicted label vector generated in step S5 is input into the Softmax layer and label classification is performed; the entity label probability generated by the classification is added to the TransE link-prediction label probability value, the result is normalized, and the probability of the entity label is output, computed as follows:

y_t = W_y T_t + b_y,   p_t^i = exp(y_t^i) / Σ_{k=1}^{N_t} exp(y_t^k)    (7)

In formula (7), W_y is the matrix of the Softmax layer, N_t denotes the total number of labels, T_t the predicted label vector, and y_t the entity-relation label score, finally obtaining the normalized label probability p_t^i.

7. The method for extracting joint information based on weakly supervised learning as recited in claim 1, wherein for step S7, the network of the invention is based on the combination of weakly supervised learning and joint information extraction; by iteratively optimizing the following objective function, a better model for extracting information from text can be trained, yielding a diversified, integrated information extractor. The overall loss function is as follows:

L = max Σ_{j=1}^{|D|} Σ_{t=1}^{L_j} [ log(p_t^{(j)} = y_t^{(j)} | x_j, Θ) · I(O) + α · log(p_t^{(j)} = y_t^{(j)} | x_j, Θ) · (1 − I(O)) ]    (8)

In formula (8), |D| denotes the size of the training set, L_j the length of sentence x_j, y_t^{(j)} the label of the t-th word of x_j, and p_t^{(j)} its normalized label probability; I(O) is a switching function that equals 0 when the label is "O" and 1 otherwise, and α is a bias weight.

Technical Field

The invention belongs to the field of natural language processing, and particularly relates to a joint information extraction method based on weakly supervised learning.

Background

With the rapid development of the internet and the rapid growth of the number of people using it, the internet has become the largest and richest information source available today. But because internet data lacks semantically related information, computers and programs cannot understand these rich data resources, especially unstructured information. Information extraction is an important research subject in the field of natural language processing: the information contained in a text is structured into an organizational form similar to a table. The input of an information extraction system is original text, including contents such as web-page data and plain characters; the output is effective information points in a fixed format, extracted from various texts and then integrated in a uniform format. Information extraction technology can thus extract valid knowledge for building knowledge-based services.

The traditional information extraction method needs to define the types of entity relations in advance, then label a training set manually, and finally train a classifier by machine learning to carry out entity recognition and relation extraction. This creates two problems: the pre-definition of entity relations cannot be comprehensive, and manually building large-scale training sets is time-consuming and laborious. Although researchers at home and abroad have made breakthrough progress on the subtasks of information extraction, namely named entity recognition and relation extraction, extracting effective information from unstructured text has always required two steps: either named entity recognition followed by relation extraction, or relation extraction followed by named entity recognition. However, in whatever order the two subtasks are performed, the error propagation from the first subtask to the second cannot be avoided, which directly affects the accuracy of information extraction.

Disclosure of Invention

In order to solve the above problems, the invention provides a joint information extraction method based on weakly supervised learning, which combines a knowledge representation learning method, adopts a joint information extraction form, and uses a public knowledge base together with a small labeled data set, so that the extraction accuracy of the information extractor on unstructured text is improved. The method comprises the following specific steps:

S1, preprocessing the text information obtained by a web crawler, removing useless information, aligning the remaining text with an external knowledge base, and labeling it automatically;

S2, screening the automatically labeled text set, and labeling part of the obtained text by the Multiple-BIO labeling method based on sequence labeling;

S3, dividing the data sets generated in S1 and S2 into a training set and a test set, and converting the training set into embedded vector form, wherein the pre-training model in this step adopts the classic TransE model from representation learning;

S4, encoding the vectors with a bidirectional long short-term memory (Bi-LSTM) neural network to effectively capture the semantic information of each word;

S5, generating a predicted label sequence using an LSTM decoding layer;

S6, inputting the label prediction vector generated in the previous step into a Softmax layer, combining it with the TransE link-prediction probability, performing label classification, and outputting the probability of each entity label;

and S7, iteratively optimizing an objective function that maximizes the logarithm of the probability that the predicted label equals the true label given a sentence, to obtain an information extractor with higher accuracy.

The technical scheme of the invention is characterized by comprising the following steps:

For step S2, the invention manually labels part of the data from step S1 by the Multiple-BIO labeling method based on sequence labeling: each word is assigned a label helpful for extracting information; words irrelevant to the extraction result are labeled "O"; other labels are composed of three parts, the word position in the entity, the relation type, and the relation role; and if an entity belongs to several triples, its label contains several such three-part groups, existing in parallel.

For step S3, the invention uses the TransE model from knowledge representation learning: the training set is randomly initialized into vectors taken as input, and word vectors corresponding to the entity set and the predefined relation set of the training set are generated as output. Because the operation mainly adjusts the error between correct triples and incorrect triples, the output entity-relation word vectors change with the positive-sample vectors during adjustment. Given the entity set, the relation set, and the training set, negative samples are constructed by randomly replacing the head entity or the tail entity of a training triple; the distance between the entities and the relation of a correct triple and the corresponding distance in a negative sample are computed, and the error between the two is adjusted, so that the entity relations are represented as vectors conforming to the real relations. The TransE loss function is as follows:

L = Σ_{(h,r,t)∈Δ} Σ_{(h′,r′,t′)∈Δ′} [γ + f(h,r,t) − f(h′,r′,t′)]_+    (1)

In formula (1), the TransE loss sums, over all sample pairs, the hyperparameter plus the positive-sample distance minus the negative-sample distance: γ denotes the hyperparameter (margin), f(h,r,t) the distance of a positive sample, f(h′,r′,t′) the distance of a negative sample, Δ the positive sample set, and Δ′ the negative sample set; [x]_+ denotes max(0, x). The distance formula is:

f(h, r, t) = ‖h + r − t‖²    (2)

In formula (2), h represents the head entity, r the relation, and t the tail entity.

For step S4, the training-set vectors are first randomly initialized; the Bi-LSTM bidirectional long short-term memory neural network of the invention then takes the randomly initialized vectors as input and generates prediction vectors for the target words as output. The iterative module mainly comprises a vector layer, a forward long short-term memory network layer, a backward long short-term memory network layer, and a connection layer, and the output vector changes according to the outputs of the forward and backward layers. Given the training set, the forward LSTM considers the context in front of the target word, i.e. from ω_1 to ω_t, to obtain the prediction vector c_t of the target word, computed as follows:

i_t = δ(W_{ωi} ω_t + W_{hi} h_{t−1} + W_{ci} c_{t−1} + b_i)
f_t = δ(W_{ωf} ω_t + W_{hf} h_{t−1} + W_{cf} c_{t−1} + b_f)
z_t = tanh(W_{ωz} ω_t + W_{hz} h_{t−1} + b_z)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t    (3)

In formula (3), W = {ω_1, …, ω_t, ω_{t+1}, …, ω_n} denotes a word sequence, where ω_t ∈ R^d is the d-dimensional vector representation of the t-th word of a sentence and n is the number of words in the sentence; h_{t−1} denotes the previous hidden vector of the Bi-LSTM memory module, and c_{t−1} the previous cell (original) vector of the memory module;

Meanwhile, the backward LSTM computation is applied to the target word, considering the context behind it, i.e. from ω_{t+1} to ω_n, to obtain another prediction vector o_t, computed as follows:

o_t = δ(W_{ωo} ω_t + W_{ho} h_{t−1} + W_{co} c_t + b_o)    (4)

The two vectors c_t and o_t thus generated are then fed into the connection layer, and the prediction vector h_t of the target word is obtained with the hyperbolic tangent function, computed as follows:

h_t = o_t · tanh(c_t)    (5)

In formula (5), c_t and o_t denote the target-word vectors generated by formulas (3) and (4), and h_t denotes the prediction vector of the target word.

For step S5, the invention uses a long short-term memory network that takes the vectors obtained in step S4 as input and generates the sequence labels as output. The key operation is to take the final prediction vector h_t generated in step S4 and the forward LSTM prediction vector, multiply them by the position index of the word, update and connect them, and finally multiply the prediction vector obtained by the hyperbolic tangent operation by its position vector and add its bias value, yielding the predicted label vector as output, computed as follows:

i_t = δ(W_{hi} h_t + W_{Ti} T_{t−1} + b_i)
f_t = δ(W_{hf} h_t + W_{Tf} T_{t−1} + b_f)
z_t = tanh(W_{hz} h_t + W_{Tz} T_{t−1} + b_z)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t
T_t = W_T · tanh(c_t) + b_T    (6)

In formula (6), T_{t−1} denotes the previous predicted label vector; the predicted label vector T_t is obtained by the operation on the previous predicted label vector, the position information, and the bias value.

For step S6, the predicted label vector generated in step S5 is input into the Softmax layer for label classification; the generated entity label probability is added to the TransE link-prediction label probability value and normalized, and the probability of the entity label is output, computed as follows:

y_t = W_y T_t + b_y,   p_t^i = exp(y_t^i) / Σ_{k=1}^{N_t} exp(y_t^k)    (7)

In formula (7), W_y is the matrix of the Softmax layer, N_t denotes the total number of labels, T_t the predicted label vector, and y_t the entity-relation label score, finally obtaining the normalized label probability p_t^i.

For step S7, the network of the invention is established on the basis of weakly supervised learning combined with joint information extraction; by iteratively optimizing the following objective function, a better model for extracting information from text can be trained, yielding a diversified, integrated information extractor. The overall loss function is as follows:

L = max Σ_{j=1}^{|D|} Σ_{t=1}^{L_j} [ log(p_t^{(j)} = y_t^{(j)} | x_j, Θ) · I(O) + α · log(p_t^{(j)} = y_t^{(j)} | x_j, Θ) · (1 − I(O)) ]    (8)

In formula (8), |D| denotes the size of the training set, L_j the length of sentence x_j, y_t^{(j)} the label of the t-th word of x_j, and p_t^{(j)} its normalized label probability; I(O) is a switching function that equals 0 when the label is "O" and 1 otherwise, and α is a bias weight.

The joint information extraction method based on weakly supervised learning solves several problems existing in the prior art of text information extraction, with the following advantages:

(1) The invention provides an information extraction method that exploits internet information sources, improving the efficiency and accuracy of information extraction and freeing up manpower;

(2) A model from knowledge representation learning is introduced during training, providing better supervision and correction of the model training;

(3) Aiming at the problems that supervised information extraction is time-consuming and labor-intensive and that splitting the extraction into two subtasks causes error propagation, the invention combines remote supervision with a small amount of manually labeled data to realize joint information extraction based on weakly supervised learning, thereby removing the need for a huge training set and corpus, avoiding error propagation, and obtaining a diversified, integrated information extractor.

Drawings

Fig. 1 is a flowchart of a joint information extraction method based on weak supervised learning in the present invention.

FIG. 2 is a schematic diagram of Multiple-BIO labeled data in the present invention.

FIG. 3 is a network structure diagram of the Bi-LSTM key module in the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

As shown in fig. 1, an implementation flowchart of a joint information extraction method based on weak supervised learning includes:

S1, preprocessing the text information acquired by a web crawler and removing useless information; for descriptions of Baidu encyclopedia entries, acquiring and determining the candidate elements of the entry web pages and storing the text information of the candidate elements; and automatically labeling by aligning the plain text with the public Onlink knowledge base.

S2, screening the automatically labeled text set, and manually labeling part of the obtained text by the Multiple-BIO labeling method based on sequence labeling. As shown in fig. 2, an effective label is composed of three parts: the word position in the entity, the entity relation type, and the entity relation role. Each word is assigned a label helpful for extracting information, and words irrelevant to the extraction result are labeled "O". To handle the situation where an entity may belong to multiple triples, the entity is tagged with multiple relations in the traditional BIO labeling style, and the triples to which it belongs are distinguished in a parallel manner; the relation types are obtained from a predefined set, serving as the small data set used in training. A toy example of this tagging scheme is sketched below.
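The following minimal sketch illustrates what Multiple-BIO style labels could look like. The sentence, the relation type "Born-In", and the surface syntax "B-<relation>-<role>" are invented for illustration; the patent does not fix a concrete label syntax.

```python
# A minimal sketch of Multiple-BIO style tags (hypothetical example).
# Each non-"O" tag has three parts: position in the entity (B/I),
# relation type, and relation role (1 = head entity, 2 = tail entity).
sentence = ["John", "Smith", "was", "born", "in", "Boston"]

tags = [
    "B-Born-In-1",   # begin of the head entity of a Born-In triple
    "I-Born-In-1",   # inside the head entity
    "O",             # irrelevant to the extraction result
    "O",
    "O",
    "B-Born-In-2",   # begin of the tail entity of the same triple
]

# A word belonging to several triples would carry several such parts
# in parallel, e.g. "B-Born-In-2|B-Located-In-1".
for word, tag in zip(sentence, tags):
    print(f"{word:10s}{tag}")
```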

S3, dividing the data sets generated by S1 and S2 into a training set and a test set, pre-training the relevant entities and relations, and converting the training set into embedded vector form. The pre-training model in this step adopts the TransE model from representation learning: the training set is randomly initialized into vector form as input, and word vectors corresponding to the entity set and the predefined relation set of the training set are generated as output. Because the operation mainly adjusts the error between correct triples and incorrect triples, the output entity-relation word vectors change with the positive-sample vectors during adjustment. Given the entity set, the relation set, and the training set, negative samples are constructed by randomly replacing the head entity or the tail entity of a training triple; the distance between the entities and the relation of a correct triple and the corresponding distance in a negative sample are computed, and the error between the two is adjusted, so that the entity relations are represented as vectors conforming to the real relations. The TransE loss function is as follows:

L = Σ_{(h,r,t)∈Δ} Σ_{(h′,r′,t′)∈Δ′} [γ + f(h,r,t) − f(h′,r′,t′)]_+    (9)

In formula (9), the TransE loss sums, over all sample pairs, the hyperparameter plus the difference between the positive-sample distance and the negative-sample distance: γ denotes the hyperparameter (margin), f(h,r,t) the distance of a positive sample, f(h′,r′,t′) the distance of a negative sample, Δ the positive sample set, and Δ′ the negative sample set; [x]_+ denotes max(0, x). The distance formula is:

f(h, r, t) = ‖h + r − t‖²    (10)

In formula (10), h represents the head entity, r the relation, and t the tail entity.
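For concreteness, a minimal PyTorch sketch of this margin-based objective follows. The entity and relation counts, the embedding dimension, and the uniform replacement of tail entities for negative sampling are illustrative assumptions, not values fixed by the patent.

```python
import torch

def transe_distance(h, r, t):
    # f(h, r, t) = ||h + r - t||^2, as in formula (10)
    return ((h + r - t) ** 2).sum(dim=-1)

def transe_loss(pos, neg, gamma=1.0):
    # Hinge loss of formula (9): [gamma + f(pos) - f(neg)]_+ over pairs
    return torch.clamp(gamma + transe_distance(*pos) - transe_distance(*neg),
                       min=0).sum()

# Toy usage: 100 entities, 10 relations, 50-dimensional embeddings (all
# illustrative); negatives replace the tail entity at random.
ent = torch.nn.Embedding(100, 50)
rel = torch.nn.Embedding(10, 50)
h = torch.randint(0, 100, (4,))
r = torch.randint(0, 10, (4,))
t = torch.randint(0, 100, (4,))
t_neg = torch.randint(0, 100, (4,))  # randomly replaced tail entities

loss = transe_loss((ent(h), rel(r), ent(t)), (ent(h), rel(r), ent(t_neg)))
loss.backward()  # gradients adjust the embeddings toward the real relations
print(loss.item())
```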

S4, encoding the vectors with a bidirectional long short-term memory network layer to effectively capture the semantic information of each word. The key module comprises a forward long short-term memory network layer, a backward long short-term memory network layer, and a connection layer. The key idea is that the forward LSTM considers the context in front of the target word and represents the target word as a vector, the backward LSTM considers the context behind the target word and represents it as another vector, and the two vectors are finally connected. As shown in fig. 3, the randomly initialized training-set vectors serve as the input of the Bi-LSTM bidirectional long short-term memory network, and prediction vectors of the target words are generated as output; the iterative module mainly comprises a vector layer, the forward layer, the backward layer, and the connection layer, and the output vector changes according to the outputs of the forward and backward layers. Given the training set, the forward LSTM considers the context in front of the target word, i.e. from ω_1 to ω_t, to obtain the prediction vector c_t of the target word, computed as follows:

i_t = δ(W_{ωi} ω_t + W_{hi} h_{t−1} + W_{ci} c_{t−1} + b_i)
f_t = δ(W_{ωf} ω_t + W_{hf} h_{t−1} + W_{cf} c_{t−1} + b_f)
z_t = tanh(W_{ωz} ω_t + W_{hz} h_{t−1} + b_z)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t    (11)

In formula (11), W = {ω_1, …, ω_t, ω_{t+1}, …, ω_n} denotes a word sequence, where ω_t ∈ R^d is the d-dimensional vector representation of the t-th word of a sentence and n is the number of words in the sentence; h_{t−1} denotes the previous hidden vector of the Bi-LSTM memory module, and c_{t−1} the previous cell (original) vector of the memory module;

Meanwhile, the backward LSTM computation is applied to the target word, considering the context behind it, i.e. from ω_{t+1} to ω_n, to obtain another prediction vector o_t, computed as follows:

o_t = δ(W_{ωo} ω_t + W_{ho} h_{t−1} + W_{co} c_t + b_o)    (12)

The two vectors c_t and o_t thus generated are then fed into the connection layer, and the prediction vector h_t of the target word is obtained with the hyperbolic tangent function, computed as follows:

h_t = o_t · tanh(c_t)    (13)

In formula (13), c_t and o_t denote the target-word vectors generated by formulas (11) and (12), and h_t denotes the prediction vector of the target word.
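A compact PyTorch sketch of this encoding layer is shown below. nn.LSTM's internal gating stands in for formulas (11)–(13), and the built-in concatenation of the two directions plays the role of the connection layer; the vocabulary size and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Sketch of the S4 encoding layer: randomly initialized word vectors
    fed through forward and backward LSTMs, yielding one prediction
    vector per target word."""

    def __init__(self, vocab_size, emb_dim=50, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # the vector layer
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, word_ids):
        x = self.embed(word_ids)  # (batch, n, d): vectors for words w_1..w_n
        h, _ = self.bilstm(x)     # (batch, n, 2*hidden): both directions joined
        return h                  # prediction vector h_t for each word

# Toy usage: one sentence of 6 word ids from a 1000-word vocabulary.
enc = BiLSTMEncoder(vocab_size=1000)
h = enc(torch.randint(0, 1000, (1, 6)))
print(h.shape)  # torch.Size([1, 6, 200])
```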

S5, generating the predicted label sequence with an LSTM decoding layer. The key idea is to multiply the final prediction vector and the forward LSTM prediction vector generated in the previous step by the position of the word, finally add the bias term, and, through this series of operations, output the predicted label vector of the target word. As shown in fig. 3, the long short-term memory network takes the vectors obtained in step S4 as input and generates the sequence labels as output; the key operation is to take the final prediction vector h_t generated in step S4 and the forward LSTM prediction vector, multiply them by the position index of the word, update and connect them, and finally multiply the prediction vector obtained by the hyperbolic tangent operation by its position vector and add its bias value, obtaining the predicted label vector T_t as output:

i_t = δ(W_{hi} h_t + W_{Ti} T_{t−1} + b_i)
f_t = δ(W_{hf} h_t + W_{Tf} T_{t−1} + b_f)
z_t = tanh(W_{hz} h_t + W_{Tz} T_{t−1} + b_z)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t
T_t = W_T · tanh(c_t) + b_T    (14)

In formula (14), T_{t−1} denotes the previous predicted label vector and c_{t−1} the previous cell vector; the predicted label vector T_t is obtained through the operation with the position information and the bias value.
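A minimal PyTorch sketch of such a decoding layer follows. The dimensions are illustrative, and the use of nn.LSTMCell over the concatenation of h_t and T_{t−1}, followed by a linear projection of tanh(c_t), is one plausible reading of formula (14) rather than the patent's definitive implementation.

```python
import torch
import torch.nn as nn

class TagDecoder(nn.Module):
    """Sketch of the S5 decoding layer: an LSTM cell consumes the encoder
    vector h_t together with the previous predicted tag vector T_{t-1}
    and emits the next tag vector T_t = W_T tanh(c_t) + b_T."""

    def __init__(self, enc_dim=200, tag_dim=32, hidden_dim=100):
        super().__init__()
        self.cell = nn.LSTMCell(enc_dim + tag_dim, hidden_dim)
        self.proj = nn.Linear(hidden_dim, tag_dim)
        self.tag_dim = tag_dim

    def forward(self, enc_states):            # (n, enc_dim): one sentence
        T_prev = torch.zeros(self.tag_dim)
        hx = torch.zeros(1, self.cell.hidden_size)
        cx = torch.zeros(1, self.cell.hidden_size)
        tag_vectors = []
        for t in range(enc_states.size(0)):
            inp = torch.cat([enc_states[t], T_prev]).unsqueeze(0)
            hx, cx = self.cell(inp, (hx, cx))
            T_prev = self.proj(torch.tanh(cx)).squeeze(0)  # label vector T_t
            tag_vectors.append(T_prev)
        return torch.stack(tag_vectors)        # (n, tag_dim)

# Toy usage: decode 6 encoder states like those from the previous sketch.
dec = TagDecoder()
T = dec(torch.randn(6, 200))
print(T.shape)  # torch.Size([6, 32])
```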

S6, inputting the label prediction vector generated in step S5 into the Softmax layer and classifying the labels; the generated entity label probability is normalized after adding the weighted TransE link-prediction label probability value, and the probability of the entity label is output, computed as follows:

y_t = W_y T_t + b_y,   p_t^i = exp(y_t^i) / Σ_{k=1}^{N_t} exp(y_t^k)    (15)

In formula (15), W_y is the matrix of the Softmax layer, N_t denotes the total number of labels, T_t the predicted label vector, and y_t the entity-relation label score, finally obtaining the normalized label probability p_t^i.
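A short sketch of this combination step follows. How the TransE link-prediction probabilities are produced and the value of the combination weight are not fixed by the patent; both are assumptions here.

```python
import torch
import torch.nn.functional as F

def combined_tag_probs(T_t, W_y, b_y, transe_probs, weight=0.5):
    """Softmax over the tag vector as in formula (15), combined with a
    weighted TransE link-prediction probability and re-normalized.
    `transe_probs` and `weight` are illustrative assumptions."""
    y_t = W_y @ T_t + b_y                   # one score per label
    p_t = F.softmax(y_t, dim=-1)            # normalized label probabilities
    combined = p_t + weight * transe_probs  # add weighted link-prediction term
    return combined / combined.sum()        # re-normalize to a distribution

# Toy usage: 8 labels, 32-dimensional tag vector (both illustrative).
N_t, tag_dim = 8, 32
W_y, b_y = torch.randn(N_t, tag_dim), torch.zeros(N_t)
p = combined_tag_probs(torch.randn(tag_dim), W_y, b_y,
                       F.softmax(torch.rand(N_t), dim=-1))
print(p.sum())  # tensor(1.)
```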

S7, the network of the invention is established on the basis of weakly supervised learning combined with joint information extraction; by iteratively optimizing the objective function, i.e. maximizing the logarithm of the probability that the predicted label equals the true label given a sentence, a better model for extracting information from text can be trained, obtaining a diversified, integrated information extractor. The overall loss function is as follows:

L = max Σ_{j=1}^{|D|} Σ_{t=1}^{L_j} [ log(p_t^{(j)} = y_t^{(j)} | x_j, Θ) · I(O) + α · log(p_t^{(j)} = y_t^{(j)} | x_j, Θ) · (1 − I(O)) ]    (16)

In formula (16), |D| denotes the size of the training set, L_j the length of sentence x_j, y_t^{(j)} the label of the t-th word of x_j, and p_t^{(j)} its normalized label probability; I(O) is a switching function that equals 0 when the label is "O" and 1 otherwise, and α is a bias weight.
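Read as a per-word weighted log-likelihood, the objective could be sketched as follows; the maximization is implemented as minimizing the negation, and the tag id of "O" and the value of α are illustrative assumptions.

```python
import torch

def tagging_objective(log_probs, gold_tags, o_tag_id=0, alpha=2.0):
    """Sketch of formula (16): log-probabilities of the gold labels,
    switched by I(O) (0 when the gold label is "O", 1 otherwise) and
    weighted by the bias alpha on the "O" side; returned negated so a
    standard optimizer can minimize it."""
    gold_lp = log_probs[torch.arange(len(gold_tags)), gold_tags]
    i_o = (gold_tags != o_tag_id).float()   # I(O)
    obj = gold_lp * i_o + alpha * gold_lp * (1.0 - i_o)
    return -obj.sum()

# Toy usage: 6 words, 8 labels, label id 0 standing for "O".
log_probs = torch.log_softmax(torch.randn(6, 8), dim=-1)
gold = torch.tensor([1, 0, 0, 3, 0, 5])
print(tagging_objective(log_probs, gold).item())
```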

In summary, the invention discloses a joint information extraction method based on weakly supervised learning, thereby obtaining a diversified, integrated and highly accurate information extractor and solving the problems of the huge corpora required by supervised learning and the error propagation between the traditional information extraction subtasks.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
