Self-attention mechanism-based aspect term extraction system, method, medium and terminal

Document No.: 1922148  Publication date: 2021-12-03

Reading note: this technology, "Self-attention mechanism-based aspect term extraction system, method, medium and terminal", was designed and created by 石俊杰 and 王茜 on 2021-08-18. The invention belongs to the technical field of natural language processing and discloses a system, a method, a medium and a terminal for aspect term extraction based on a self-attention mechanism, comprising: a word embedding layer that tags the part of speech of each word in the sentence with a part-of-speech tagging tool and outputs a part-of-speech word vector representation of each word; a first BiReGU layer that processes the preceding and following context of each word, mines the context information and calculates hidden states; a word attention calculation layer that assigns different weights, generates a weight vector for each word of the sentence, and obtains a context vector by weighted summation; a second BiReGU layer that extracts global feature information based on the combination of the obtained weight vectors and the word embeddings; a fully connected layer that processes the extracted information; and a CRF layer that labels the aspect terms so that the corresponding aspect terms are extracted. The invention can extract aspect terms effectively and accurately.

1. A self-attention mechanism based aspect term extraction system, comprising:

the word embedding layer is used for tagging the part of speech of each word in the sentence by using a part-of-speech tagging tool and outputting a part-of-speech word vector representation of each word in the sentence;

the first BiReGU layer is used for processing the preceding-context information and following-context information of each word of the sentence, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

the word attention calculation layer is used for assigning different weights to each word of the sentence based on the hidden state obtained by calculation, generating a different weight vector for each word of the sentence, and carrying out weighted summation to obtain a context vector;

the second BiReGU layer is used for extracting global feature information based on the combination of the obtained weight vector and word embedding;

the full connection layer is used for processing the extracted information;

and the CRF layer is used for marking the aspect terms, and extracting the corresponding aspect terms.

2. A self-attention mechanism based aspect term extraction method for operating the self-attention mechanism based aspect term extraction system of claim 1, the method comprising:

firstly, tagging the part of speech of each word in a sentence by using a part-of-speech tagging tool, and outputting a part-of-speech word vector representation of each word in the sentence; meanwhile, learning the long-term dependencies of the words in terms of content and part of speech;

secondly, assigning each word in the text a different weight through a self-attention mechanism; combining the obtained weight vectors with the word embeddings again to learn a global text feature representation;

finally, considering the correlation between neighboring labels, carrying out a global selection, and calculating the label of each word; tagging each word in the sentence in IOB2 format, and extracting the words labeled B or I as aspect terms.

3. The self-attention mechanism-based aspect term extraction method according to claim 2, wherein the self-attention mechanism-based aspect term extraction method includes the steps of:

step one, converting the text data representation form; obtaining word vectors by utilizing a word embedding layer pre-training model, and determining the part-of-speech vectors by a part-of-speech tagging model;

step two, processing the preceding-context information and following-context information of each word of the sentence with the obtained part-of-speech word vectors through the first BiReGU layer, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

step three, the attention calculation layer assigns different weights according to the hidden state output by the BiReGU layer based on a self-attention mechanism, generates a different weight vector for each word of the sentence, and obtains a context vector by weighted summation;

step four, concatenating the weighted output of the attention calculation layer with the part-of-speech word vector information, and inputting the concatenation result into the second BiReGU layer to obtain global feature information;

and step five, sending the vector output by the second BiReGU layer into a fully connected layer for processing, obtaining a predicted label sequence Y through the added CRF layer, namely the labels of the aspect terms in the corresponding sentence, and extracting the aspect terms.

4. The self-attention mechanism-based aspect term extraction method according to claim 3, wherein in step one, the obtaining of the word vectors by using the word embedding layer pre-training model and the determining of the part-of-speech vectors by the part-of-speech tagging model comprise:

inputting the sentence into the word embedding layer to obtain the part-of-speech word vector representation: an input sentence X is divided into n words and represented as X = X_1 ⊕ X_2 ⊕ ... ⊕ X_n, wherein X_i (1 ≤ i ≤ n) represents the i-th word in X and ⊕ represents a concatenation operation between words;

each word X_i is mapped, through pre-training with the GloVe model, to a word vector taken from the pre-trained word vector set, wherein |V| represents the size of the vocabulary V and d represents the word vector dimension;

the part of speech of X_i is obtained with the part-of-speech tagging tool and represented as a part-of-speech vector, and the part-of-speech word vector representation of each word X_i is obtained by concatenating its word vector with its part-of-speech vector.

5. The self-attention mechanism-based aspect term extraction method according to claim 3, wherein in step two, the hidden state calculation formula is as follows:

h_t = [h_t^f ; h_t^b];

wherein h_t indicates the hidden state at time t, h_t^f represents the hidden state output of the forward ReGU, and h_t^b represents the hidden state output of the backward ReGU.

6. The self-attention mechanism-based aspect term extraction method according to claim 3, wherein in step three, the weights, weight vectors and context vector are calculated by the following formulas:

h = tanh(W_t h' + W_n h_n);

e_t = V_t tanh(W_a h_t + b_a);

wherein W_t, W_n and W_a each represent a two-dimensional weight matrix, b_a represents a bias vector, α_t represents the attention weight of the output at the t-th position, e_t represents the computed attention score, and h' represents the vector obtained by weighted averaging with the attention weights α_t.

7. The self-attention mechanism-based aspect term extraction method according to claim 3, wherein in step five, the CRF layer calculates the probability as follows:

8. A program storage medium for receiving user input, the stored computer program causing an electronic device to perform the self-attention mechanism based aspect term extraction method of any one of claims 2-7, comprising the steps of:

step one, converting the text data representation form; obtaining word vectors by utilizing a word embedding layer pre-training model, and determining the part-of-speech vectors by a part-of-speech tagging model;

step two, processing the preceding-context information and following-context information of each word of the sentence with the obtained part-of-speech word vectors through the first BiReGU layer, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

step three, the attention calculation layer assigns different weights according to the hidden state output by the BiReGU layer based on a self-attention mechanism, generates a different weight vector for each word of the sentence, and obtains a context vector by weighted summation;

step four, concatenating the weighted output of the attention calculation layer with the part-of-speech word vector information, and inputting the concatenation result into the second BiReGU layer to obtain global feature information;

and step five, sending the vector output by the second BiReGU layer into a fully connected layer for processing, obtaining a predicted label sequence Y through the added CRF layer, namely the labels of the aspect terms in the corresponding sentence, and extracting the aspect terms.

9. A computer program product stored on a computer readable medium, comprising a computer readable program for providing a user input interface for implementing the self-attention mechanism based aspect term extraction method as claimed in any one of claims 2-7 when executed on an electronic device.

10. An information data processing terminal, characterized in that the information data processing terminal is used for implementing the self-attention mechanism based aspect term extraction method according to any one of claims 2-7.

Technical Field

The invention belongs to the technical field of natural language processing, and particularly relates to a system, a method, a medium and a terminal for extracting aspect terms based on a self-attention mechanism.

Background

At present: aspect Term Extraction (Aspect Term Extraction) is a subtask of ABSA, and is also a domain-specific entity naming recognition. For example, in The "picture quality of my Motorola camera phone is an emotion" it can be seen from The perspective of aspect level emotion analysis that "picture quality" is an aspect term and "emotion" is an emotion term expressed by The aspect term. Therefore, the aspect term extraction is a key problem in aspect-level emotion analysis, wherein the aspect term with emotion is extracted in the emotion analysis, and then the emotion analysis is carried out according to the extracted aspect term.

Through the above analysis, the problems and defects of the prior art are as follows: the existing model method can not effectively mine the implicit relation among words, so that the extraction of the aspect terms is incomplete and incomplete, and even non-aspect term words can be extracted.

The difficulty in solving the above problems and defects is: based on the existing model, the part-of-speech information is combined with an attention mechanism, and a method for improving the phenomenon that a ReGU (residual gated unit) model depends on a pre-training word vector model to be used as input and the phenomenon of insufficient word co-occurrence information is found.

The significance of solving the problems and the defects is as follows: after the aspect term words are accurately extracted, the accuracy of the subsequent aspect-level emotion analysis can be improved.

Disclosure of Invention

Aiming at the problems in the prior art, the invention provides a system, a method, a medium and a terminal for extracting aspect terms based on a self-attention mechanism.

The invention is realized in such a way that a system for extracting aspect terms based on a self-attention mechanism comprises:

the word embedding layer is used for tagging the part of speech of each word in the sentence by using a part-of-speech tagging tool and outputting a part-of-speech word vector representation of each word in the sentence;

the first BiReGU layer is used for processing the preceding-context information and following-context information of each word of the sentence, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

the word attention calculation layer is used for assigning different weights to each word of the sentence based on the hidden state obtained by calculation, generating a different weight vector for each word of the sentence, and carrying out weighted summation to obtain a context vector;

the second BiReGU layer is used for extracting global feature information based on the combination of the obtained weight vector and word embedding;

the full connection layer is used for processing the extracted information;

and the CRF layer is used for marking the aspect terms, and extracting the corresponding aspect terms.

Another object of the present invention is to provide a self-attention mechanism-based aspect term extraction method applied to the self-attention mechanism-based aspect term extraction system, the self-attention mechanism-based aspect term extraction method including:

firstly, tagging the part of speech of each word in a sentence by using a part-of-speech tagging tool, and outputting a part-of-speech word vector representation of each word in the sentence; meanwhile, learning the long-term dependencies of the words in terms of content and part of speech;

secondly, assigning each word in the text a different weight through a self-attention mechanism; combining the obtained weight vectors with the word embeddings again to learn a global text feature representation;

finally, considering the correlation between neighboring labels, carrying out a global selection, and calculating the label of each word; tagging each word in the sentence in IOB2 format, and extracting the words labeled B or I as aspect terms.

Further, the self-attention mechanism based aspect term extraction method comprises the following steps:

step one, converting the text data representation form; obtaining word vectors by utilizing a word embedding layer pre-training model, and determining the part-of-speech vectors by a part-of-speech tagging model;

step two, processing the preceding-context information and following-context information of each word of the sentence with the obtained part-of-speech word vectors through the first BiReGU layer, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

step three, the attention calculation layer assigns different weights according to the hidden state output by the BiReGU layer based on a self-attention mechanism, generates a different weight vector for each word of the sentence, and obtains a context vector by weighted summation;

step four, concatenating the weighted output of the attention calculation layer with the part-of-speech word vector information, and inputting the concatenation result into the second BiReGU layer to obtain global feature information;

and step five, sending the vector output by the second BiReGU layer into a fully connected layer for processing, obtaining a predicted label sequence Y through the added CRF layer, namely the labels of the aspect terms in the corresponding sentence, and extracting the aspect terms.

Further, in step one, the obtaining of the word vectors by using the word embedding layer pre-training model and then determining the part-of-speech vectors by the part-of-speech tagging model includes:

inputting the sentence into the word embedding layer to obtain the part-of-speech word vector representation: an input sentence X is divided into n words and represented as X = X_1 ⊕ X_2 ⊕ ... ⊕ X_n, wherein X_i (1 ≤ i ≤ n) represents the i-th word in X and ⊕ represents a concatenation operation between words;

each word X_i is mapped, through pre-training with the GloVe model, to a word vector taken from the pre-trained word vector set, wherein |V| represents the size of the vocabulary V and d represents the word vector dimension;

the part of speech of X_i is obtained with the part-of-speech tagging tool and represented as a part-of-speech vector, and the part-of-speech word vector representation of each word X_i is obtained by concatenating its word vector with its part-of-speech vector.

Further, in step two, the hidden state calculation formula is as follows:

h_t = [h_t^f ; h_t^b];

wherein h_t indicates the hidden state at time t, h_t^f represents the hidden state output of the forward ReGU, and h_t^b represents the hidden state output of the backward ReGU.

Further, in step three, the calculation formulas of the weight, the weight vector and the context vector are as follows:

h = tanh(W_t h' + W_n h_n);

e_t = V_t tanh(W_a h_t + b_a);

wherein W_t, W_n and W_a each represent a two-dimensional weight matrix, b_a represents a bias vector, α_t represents the attention weight of the output at the t-th position, e_t represents the computed attention score, and h' represents the vector obtained by weighted averaging with the attention weights α_t.

Further, in step five, the CRF layer calculates the probability as follows:

by combining all the technical schemes, the invention has the advantages and positive effects that: the invention uses a double-embedding mechanism and ReGU (residual Gated Unit) as model auxiliary information on the basis of the traditional BilSTM. Meanwhile, considering grammatical relations among words, such as the fact that the aspect words are usually associated with adjectives, and the like, a self-attention mechanism is introduced to mine the dependency relations among the words. And in order to better identify the aspect terms formed by a plurality of words, part-of-speech tagging and modeling are introduced, and the importance and text characteristics of different words in the text sequence are fully considered. And a better effect is obtained. The invention can effectively and accurately extract the terms.

Drawings

Fig. 1 is a schematic diagram of an aspect term extraction system based on a self-attention mechanism according to an embodiment of the present invention.

Fig. 2 is a schematic diagram of a BiLSTM framework provided by an embodiment of the present invention.

Fig. 3 is a schematic diagram of the BA model framework provided in the embodiment of the present invention.

Fig. 4 is a schematic diagram of the two-layer BiReGU-based aspect term extraction model provided by an embodiment of the present invention.

Fig. 5 is a schematic diagram of the IOB2 labeling method provided by an embodiment of the present invention.

Fig. 6 is a schematic diagram of a ReGU model according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

In view of the problems in the prior art, the present invention provides a system, method, medium, and terminal for extracting aspect terms based on the self-attention mechanism, and the present invention is described in detail below with reference to the accompanying drawings.

As shown in fig. 1, an embodiment of the present invention provides a system for extracting aspect terms based on a self-attention mechanism, including:

the word embedding layer is used for tagging the part of speech of each word in the sentence by using a part-of-speech tagging tool and outputting a part-of-speech word vector representation of each word in the sentence;

the first BiReGU layer is used for processing the preceding-context information and following-context information of each word of the sentence, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

the word attention calculation layer is used for assigning different weights to each word of the sentence based on the hidden state obtained by calculation, generating a different weight vector for each word of the sentence, and carrying out weighted summation to obtain a context vector;

the second BiReGU layer is used for extracting global feature information based on the combination of the obtained weight vector and word embedding;

the full connection layer is used for processing the extracted information;

and the CRF layer is used for marking the aspect terms, and extracting the corresponding aspect terms.

The method for extracting aspect terms based on the self-attention mechanism comprises the following steps:

firstly, tagging the part of speech of each word in a sentence by using a part-of-speech tagging tool, and outputting a part-of-speech word vector representation of each word in the sentence; meanwhile, learning the long-term dependencies of the words in terms of content and part of speech;

secondly, assigning each word in the text a different weight through a self-attention mechanism; combining the obtained weight vectors with the word embeddings again to learn a global text feature representation;

finally, considering the correlation between neighboring labels, carrying out a global selection, and calculating the label of each word; tagging each word in the sentence in IOB2 format, and extracting the words labeled B or I as aspect terms.

The flow of the self-attention mechanism-based aspect term extraction method provided by the embodiment of the invention comprises the following steps:

S101, converting the representation form of the text data; obtaining word vectors by utilizing a word embedding layer pre-training model, and determining the part-of-speech vectors by a part-of-speech tagging model;

S102, processing the preceding-context information and following-context information of each word of the sentence with the obtained part-of-speech word vectors through the first BiReGU layer, mining the context information of the input sequence, training deeply to obtain useful text features, and calculating a hidden state;

S103, the attention calculation layer assigns different weights according to the hidden state output by the BiReGU layer based on a self-attention mechanism, generates a different weight vector for each word of the sentence, and obtains a context vector by weighted summation;

S104, concatenating the weighted output of the attention calculation layer with the part-of-speech word vector information, and inputting the concatenation result into the second BiReGU layer to obtain global feature information;

S105, sending the vector output by the second BiReGU layer into a fully connected layer for processing, obtaining a predicted label sequence Y through the added CRF layer, namely the labels of the aspect terms in the corresponding sentence, and extracting the aspect terms.

The method for obtaining the word vectors by utilizing the word embedding layer pre-training model and determining the part-of-speech vectors by the part-of-speech tagging model provided by the embodiment of the invention comprises the following steps:

the sentence is input into the word embedding layer to obtain the part-of-speech word vector representation: an input sentence X is divided into n words and represented as X = X_1 ⊕ X_2 ⊕ ... ⊕ X_n, wherein X_i (1 ≤ i ≤ n) represents the i-th word in X and ⊕ represents a concatenation operation between words;

each word X_i is mapped, through pre-training with the GloVe model, to a word vector taken from the pre-trained word vector set, wherein |V| represents the size of the vocabulary V and d represents the word vector dimension;

the part of speech of X_i is obtained with the part-of-speech tagging tool and represented as a part-of-speech vector, and the part-of-speech word vector representation of each word X_i is obtained by concatenating its word vector with its part-of-speech vector.

The hidden state calculation formula provided by the embodiment of the invention is as follows:

h_t = [h_t^f ; h_t^b];

wherein h_t indicates the hidden state at time t, h_t^f represents the hidden state output of the forward ReGU, and h_t^b represents the hidden state output of the backward ReGU.

The weight, weight vector and context vector calculation formulas provided by the embodiment of the invention are as follows:

h = tanh(W_t h' + W_n h_n);

e_t = V_t tanh(W_a h_t + b_a);

wherein W_t, W_n and W_a each represent a two-dimensional weight matrix, b_a represents a bias vector, α_t represents the attention weight of the output at the t-th position, e_t represents the computed attention score, and h' represents the vector obtained by weighted averaging with the attention weights α_t.

The computed probability of the CRF layer provided by the embodiment of the invention is as follows:

the technical solution of the present invention is further described with reference to the following specific embodiments.

Example 1:

1. Feature extraction model based on the bidirectional long short-term memory network

The feature extraction model BA (BiLSTM-Attention), based on the bidirectional long short-term memory network, is the first baseline model proposed by the invention. The model does not use position vectors; it uses a bidirectional long short-term memory network and an attention mechanism to acquire the important information in a sentence, attends only to the words that have a decisive influence on the classification, and uses the extracted sentence-level feature vectors for relation classification.

At present, in order to obtain sentence-level features, natural language processing tasks usually represent words or phrases as vectors through a model and operate on the word vectors to obtain a vector representation of the sentence. The probability calculated for a general sentence sequence W = {w1, w2, w3} is shown in equation 1:

Traditional sentence vector representations usually adopt an average-vector method, a vector-addition method or a grammar-rule analysis method. These methods have an obvious problem: the obtained sentence vector does not take into account the influence of individual words during feature extraction, and the sentence is strongly affected by the order of the words and the limitations of the rules. Extracting features with a BiLSTM captures semantic dependencies and fully considers the influence of past and future information on the current information. The BA model therefore uses a BiLSTM for feature extraction in the neural network layer.

The BiLSTM model runs LSTMs in two different directions and finally combines and outputs the calculation results of the hidden layers of the two directions. With the knowledge about the LSTM introduced in the previous section, the BiLSTM extends a unidirectional LSTM by adding a second LSTM layer whose information flows in reverse chronological order. The model can therefore use both past and future information, as shown in fig. 2.

The network comprises two sub-networks for the left and right sequential context, corresponding to the forward and backward passes respectively, where element-wise summation is used to combine the forward and backward outputs.

The forward LSTM is calculated as shown in equation 2:

i.e. the probability that a preceding word will influence the prediction of a following word.

The reverse holds in the backward LSTM, where later words affect the prediction of earlier words, as shown in equation 3:

Using h_i^f to represent the output of the forward long short-term memory network at time i and h_i^b to represent the output of the backward long short-term memory network at time i, the output of the BiLSTM at that time is shown in equation 4 below:

the maximum likelihood function equation for bi-directional LSTM is shown in equation 5:

where θ_x, θ_s and θ_LSTM represent in turn the parameters of the word embeddings, the softmax layer and the LSTM, and θ_s and θ_x are shared between the forward and backward passes.

The framework of the BA model is shown in fig. 3.

From the above figure, it can be seen that the BA model structure is mainly composed of five parts. The input layer feeds the model the sentence from which aspect words are to be extracted; after input, the words are mapped to word vectors through the embedding layer. The neural network layer then obtains high-level features from the word vectors with the BiLSTM, producing a deeper semantic vector expression for each word, and the attention layer generates sentence-level feature vectors using the learned weights. The obtained weight vector is multiplied with the hidden states of the neural network layer and summed to obtain the sentence-level features, and finally relation classification is carried out on the sentence-level feature vectors.

The outputs of the BiLSTM over the time sequence are taken as the feature representation and recorded as H. Let H be the matrix of output hidden vectors [h1, h2, ..., hT] generated by the LSTM, where each element is the concatenation of the forward and backward hidden unit outputs, as shown in equation 6:

The model then feeds the obtained hidden state sequence into the attention layer; the attention mechanism describes the dependency between the target output and the original input data, and the sentence vector representation is obtained by summing with the calculated weights. The sentence label Y is then predicted by a softmax classifier. In the experiments, adding the attention layer clearly improves the results. Through this model, feature extraction with the bidirectional embedding mechanism and with the self-attention mechanism is further understood.
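For illustration, a minimal PyTorch sketch of such a BiLSTM-Attention structure (embedding, BiLSTM encoder, attention over the hidden states, softmax classifier) is given below; the layer sizes and names are assumptions, not the exact configuration used in the experiments:

import torch
import torch.nn as nn

class BAModel(nn.Module):
    """Illustrative BiLSTM-Attention baseline: embed, encode, attend, classify."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.att_w = nn.Linear(2 * hidden_dim, 1, bias=False)  # attention scoring vector w
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                      # token_ids: (batch, T)
        H, _ = self.bilstm(self.embed(token_ids))      # H: (batch, T, 2*hidden)
        M = torch.tanh(H)                               # M = tanh(H)
        alpha = torch.softmax(self.att_w(M), dim=1)     # alpha = softmax(w^T M)
        r = (alpha * H).sum(dim=1)                      # weighted sentence vector r
        return self.fc(torch.tanh(r))                   # class scores fed to softmax/CE loss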

2. The self-attention-based double-layer BiReGU model of the invention

2.1 Double-layer BiReGU-based aspect term extraction model

In order to extract deeper features, the model adopts a double-layer BiReGU structure based on an attention mechanism, introducing a double-embedding mechanism and the Residual Gated Unit (ReGU) as assistance on the basis of the traditional BiLSTM model so as to improve the feature extraction capability.

The model uses BiReGU to learn the text feature representation and better capture the long-term dependencies among words; an attention mechanism is then added after the first BiReGU layer, assigning a different weight to each word in the sentence to obtain a new, fused sentence feature representation, which is input into the second BiReGU layer to learn a more global text feature representation, and finally the aspect terms are labeled to finish the aspect term extraction task. The use of the attention mechanism together with the BiReGU model fully considers the importance of different words in the text sequence and the text features, encodes the output sequence better and captures the long-term dependencies among labels. Because of the BiReGU network layer, the model can acquire past and future features; because of the CRF network layer, the model can use sentence-level annotation information. This network structure feeds the context information into the neural network model, so that the long-term dependencies among labels are captured better and the features of the text are acquired better. It also uses an attention mechanism to find the important information in the text, automatically learning the relatively important words of the input text sequence. The model is shown in fig. 4.

The invention adopts IOB2 sequence labeling to define the tags of a sequence and performs the labeling operation on the words in a sentence. Sequence labeling, one of the commonly used techniques in NLP, is often used to label the words of a sentence so as to extract effective information such as places and terms. Sequence annotation mainly comprises raw annotation and joint annotation: the former labels each word individually, while the latter labels all words of an entity with the same tag. For example, the named entity "Tom Bush" in the sentence "Tomorrow, Tom Bush mark a date" should be labeled as one complete "name" entity rather than being labeled word by word as in the former. Generally, most joint annotation problems are converted into raw annotation problems to be solved, and the standard practice is to use IOB2 labeling. In the IOB2 notation, shown in fig. 5, B represents the beginning token of an aspect term, I represents the remaining tokens of the aspect term, and O represents a word that does not belong to any aspect term. The words of a sentence are labeled with the different labels B, I and O, so that the labeling result of each word can be obtained directly from the sequence labeling result.
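For illustration, the following small helper (a sketch, not part of the patented system) shows how aspect terms can be read off an IOB2-labeled sentence:

def extract_aspect_terms(words, tags):
    """Collect the spans labeled B/I in an IOB2 tag sequence as aspect terms."""
    terms, current = [], []
    for word, tag in zip(words, tags):
        if tag == "B":                 # beginning of an aspect term
            if current:
                terms.append(" ".join(current))
            current = [word]
        elif tag == "I" and current:   # continuation of the current aspect term
            current.append(word)
        else:                          # "O" or stray "I": close any open term
            if current:
                terms.append(" ".join(current))
            current = []
    if current:
        terms.append(" ".join(current))
    return terms

# Example: "The picture quality of my Motorola camera phone is amazing"
# tags:     O    B        I      O   O   O       O      O    O   O   ->  ["picture quality"]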

Text data is first expressed in a form that can be handled by a deep learning model, and an input sequence is expressed as W = {x_1, x_2, ..., x_n}, where n is the number of words in the input text sequence. The model adopts a double embedding mechanism: the GloVe word vector embedding G(x_i) and the domain-specific word vector embedding of x_i are concatenated, and the words are vectorized to obtain a word vector matrix E = {e_1, e_2, ..., e_n}, where e_i represents the word vector of the i-th word.

In the previous feature extraction, each layer uses a bidirectional LSTM neural network to process the context and fully mine the context information of the sentence. Here, the ReGU structure is introduced on the basis of the BiLSTM structure: the original LSTM unit is replaced by a ReGU unit, while the two-directional representation is retained. The ReGU controls the flow of input and hidden-state information through two gates, f_t and o_t, where o_t controls how much information from the previous layer is passed into the next layer, so that useful text features can be trained and obtained more deeply. The structure of the ReGU is shown in fig. 6.

It can be seen that at time t the new memory cell c_t is calculated from the previous memory cell c_{t-1} and the input x_t by the following equation 7:

c_t = (1 - f_t) · c_{t-1} + f_t · tanh(w_i x_t) (7)

The new hidden state is calculated as shown in equation 8:

h_t = (1 - o_t) · c_t + o_t · x̂_t (8)

where f_t = σ(w_f · [h_{t-1}, x_t] + b_f) is the forget gate, o_t = σ(w_o · [h_{t-1}, x_t] + b_o) is the residual gate, and x̂_t is either x_t or tanh(w_i x_t), depending on whether the dimension of x_t equals that of c_t.
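A minimal PyTorch sketch of a ReGU cell following equations 7 and 8 as described above is given below; the exact form of the residual path and the parameter names are assumptions made for illustration:

import torch
import torch.nn as nn

class ReGUCell(nn.Module):
    """Sketch of a Residual Gated Unit cell (forget gate f_t, residual gate o_t)."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.forget_gate = nn.Linear(input_size + hidden_size, hidden_size)    # w_f, b_f
        self.residual_gate = nn.Linear(input_size + hidden_size, hidden_size)  # w_o, b_o
        self.input_proj = nn.Linear(input_size, hidden_size, bias=False)       # w_i
        self.match = input_size == hidden_size

    def forward(self, x_t, h_prev, c_prev):
        hx = torch.cat([h_prev, x_t], dim=-1)
        f_t = torch.sigmoid(self.forget_gate(hx))                            # forget gate
        o_t = torch.sigmoid(self.residual_gate(hx))                          # residual gate
        c_t = (1 - f_t) * c_prev + f_t * torch.tanh(self.input_proj(x_t))    # equation 7
        x_hat = x_t if self.match else torch.tanh(self.input_proj(x_t))      # residual path
        h_t = (1 - o_t) * c_t + o_t * x_hat                                  # equation 8 (assumed form)
        return h_t, c_t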

The two-layer BiReGU network model is constructed like the two-layer BiLSTM structure, and the context information of the input sequence is mined by the bidirectional ReGU structure of each layer. After the word vectors W = {h_1, h_2, ..., h_n} are input into the first BiReGU layer, the processing of the input by the forward ReGU and the backward ReGU at time t is shown in formulas 9-10, respectively:

where, at time t, h_t^f is the hidden state output of the forward ReGU and h_t^b is the hidden state output of the backward ReGU. H_t, the hidden state output at time t, is shown in equation 11 below:

H_t = [h_t^f ; h_t^b] (11)

considering that the importance of different words is not considered in the double-layer BiReGU network structure, the importance degree of each word is calculated by the attention calculation layer. The attention mechanism is still calculated by adopting an attention mechanism, and formulas are shown as 12-14:

M=tanh(ht) (12)

α=softmax(wTM) (13)

r=HαT (14)

where w is a trained parameter vector and r represents the sentence representation. Considering that a single-layer BiReGU cannot acquire enough global feature information, a double-layer BiReGU is used, and the output of the word attention calculation layer is used as the input of the second BiReGU layer to acquire more global feature information. To generate the final aspect term labels, the model uses a CRF instead of a softmax classifier as the last layer, which better models the strong dependencies between labels.

The CRF (Conditional Random Field) method calculates the distribution of conditional probabilities from the input sequence vectors. It is used in fields such as part-of-speech tagging, named entity recognition and syntactic analysis. The probability of the CRF is calculated as shown in equation 15:

the maximum conditional likelihood estimation is used in the training process, and is calculated as shown in equation 16:

the final labeling result is generated with the highest conditional probability, as shown in equation 17:

the effectiveness of the ReGU module in the aspect term extraction is effectively proved through model experiments.

2.2 model of self-attention mechanism

The self-attention mechanism (self-Attention) can ignore the distance between words in a sentence, capture the internal structure of the sentence and obtain the dependency relationships between words. Attention mechanisms have been described in chapter II; the model usually trains itself and adjusts its parameters continuously. That is, letting K = V = Q, the input sequence is viewed as a set of key-value pairs (K, V) and queries Q, and the self-attention mechanism automatically calculates the attention weights between each word and the other words in a sentence.

The calculation is shown in equation 18:

Attention(Q, K, V) = softmax(Q · K^T / √d_k) · V (18)

where X ∈ R^n represents the input vector matrix and d_k is the dimension of the matrix, which prevents an overly large inner product X · X^T from pushing the softmax to extreme values. The keys K and the values V correspond one to one; the inner product of each element of Q with each element of K is computed, softmax is then applied, and finally a weighted summation yields a new vector. The invention can thus use the attention mechanism to process long information sequences and, within a given sentence, find the relationships between different words.

In general, in the self-attention mechanism each word performs an attention calculation with all the other words, so that global semantic information is shared among the words. The invention can process an information sequence of any length with the self-attention mechanism, strengthening the relations among the words in the sentence and preventing the weights from decaying when the distance between words is too large.
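The following sketch (assuming PyTorch tensors; in the pure self-attention setting Q, K and V are all taken from the same input X) illustrates the calculation of equation 18:

import math
import torch

def self_attention(X):
    """Scaled dot-product self-attention with Q = K = V = X.
    X: (batch, n, d_k) input vector matrix; returns the re-weighted vectors."""
    d_k = X.size(-1)
    scores = X @ X.transpose(-2, -1) / math.sqrt(d_k)  # inner products, scaled by sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)            # attention weights between all word pairs
    return weights @ X                                  # weighted summation -> new vectors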

2.3 SA-BiReGU network architecture

Based on the foregoing study of BiReGU and the self-attention mechanism, the invention proposes a BiReGU aspect term extraction model based on the self-attention mechanism. At the same time, considering that the input of previous models does not take the part-of-speech information of words into account, and that most aspect terms in a sentence are nouns associated with one or more adjectives, part-of-speech tags are added to the model to help identify aspect terms, so that the input word vectors express richer semantic information. The ReGU module then learns the long-term dependencies of words in terms of content and part of speech. A self-attention mechanism is then used to prevent the loss of long-term context information of the sentence and to give each word in the text a different weight. The obtained weight vectors are combined with the word embeddings again and input into the second BiReGU layer to learn a more global text feature representation. Finally, the result is passed to the CRF layer, which considers the correlations between neighboring labels, makes a global selection and calculates the label of each word. Each word in the sentence is marked in IOB2 format, the words labeled B or I are extracted as aspect terms, and term extraction is completed.

First, the invention represents text data in a form that can be handled by a deep learning model. Word vectors are obtained with a pre-training model, and the part-of-speech vectors are then determined by the part-of-speech tagging model; the embedding layer is applied to obtain the part-of-speech word vector representation. Assuming that the input sentence X contains n words in total, the input sentence is represented as X = X_1 ⊕ X_2 ⊕ ... ⊕ X_n, where X_i (1 ≤ i ≤ n) is the i-th word in X and ⊕ is a concatenation operation between words. Each word X_i is mapped, through pre-training with the GloVe model, to a word vector taken from the pre-trained word vector set, where |V| is the size of the vocabulary V and d is the word vector dimension. The part of speech of X_i is obtained with the part-of-speech tagging tool, and the part-of-speech word vector representation of each word X_i is then obtained by concatenating its word vector with its part-of-speech vector.

Then, the obtained input vectors are processed by the first BiReGU layer, which handles the preceding-context and following-context information of each word of the sentence, fully mines the context information of the input sequence, trains deeply to obtain useful text features, and calculates the hidden state h_t; the formula is as in formula 11 described earlier.

The self-attention mechanism assigns different weights according to the vectors h_t output by the BiReGU layer, generates a different weight vector for each word of the sentence, and performs weighted summation to obtain the context vector. The calculation formulas are shown in formulas 19-22:

h = tanh(W_t h' + W_n h_n) (19)

e_t = V_t tanh(W_a h_t + b_a) (22)

where W_t, W_n and W_a are all two-dimensional weight matrices, b_a is a bias vector, α_t is the attention weight of the output at the t-th position, e_t is the computed attention score used to further extract features, and h' is the vector obtained by weighted averaging the h_t with the attention weights α_t. Considering that a single-layer BiReGU cannot acquire enough global feature information, the output of the attention calculation layer is concatenated with the part-of-speech word vector information used as input earlier, and the result is input into the second BiReGU layer (the formula is as in formula 11 introduced earlier) to obtain more global feature information.
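A sketch of this word-attention step, following formulas 19 and 22 and the textual description of α_t and h', could look like the module below; the softmax normalization and the weighted average are written out explicitly here as assumptions, since their exact formulas are not reproduced above:

import torch
import torch.nn as nn

class WordAttention(nn.Module):
    """Assigns a weight to each BiReGU hidden state and builds a context vector."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_a = nn.Linear(hidden_dim, hidden_dim)         # W_a, b_a
        self.V_t = nn.Linear(hidden_dim, 1, bias=False)       # V_t
        self.W_t = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_n = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, H):                                     # H: (batch, n, hidden_dim)
        e = self.V_t(torch.tanh(self.W_a(H)))                 # e_t = V_t tanh(W_a h_t + b_a)
        alpha = torch.softmax(e, dim=1)                        # attention weight of each position
        h_ctx = (alpha * H).sum(dim=1)                         # h': weighted average of the h_t
        h_n = H[:, -1, :]                                      # last hidden state h_n
        return torch.tanh(self.W_t(h_ctx) + self.W_n(h_n)), alpha  # h = tanh(W_t h' + W_n h_n)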

After feature information is obtained through the BiReGU layer again, the vectors are sent to the full-connection layer to be processed, a predicted label sequence Y is obtained through the added CRF layer, and the calculation probability is as shown in formulas 23-24:

the CRF considers the correlation between neighboring tags and makes a global selection, rather than independently decoding each tag, to maximize the conditional probability of sentence tags given an input sentence, to obtain a tag sequence, i.e., a label corresponding to an aspect term in the sentence, and finally extract a desired aspect term.

The technical effects of the present invention will be described in detail with reference to experiments.

According to the invention, the provided model is tested, and the experimental result is compared and analyzed with other similar methods based on the same data set.

1 Experimental Environment and data set

The experiments of the invention were designed using the Python language and the PyTorch framework. The PyTorch framework mainly accelerates the training of neural network models through the GPU. Compared with TensorFlow, which is well suited to cross-platform or embedded deployment, the PyTorch framework is better suited to the rapid prototyping of small-scale projects and is increasingly popular.

The physical equipment and environment used are shown in table 1 below:

TABLE 1 Experimental Environment

Environment                Configuration
CPU                        Intel Core 7700HQ, turbo frequency 3.8 GHz
Graphics card              4 × NVIDIA GeForce GTX 1080Ti
Memory                     12 GB DDR4 2400 MHz
Operating system           Ubuntu 16.04 LTS
Development environment    PyCharm

The training data sets selected for the experiments of the invention are the Laptop and Restaurant data sets of the SemEval 2014 task and Subtask 2 of the SemEval 2016 Restaurant data set. These data sets mainly collect product reviews and user comments in the laptop and catering fields, and both are tagged with IOB2 labels. The number of sentences and aspect terms contained in the different data sets is as follows:

TABLE 2 Data set statistics

The data is stored in the form of XML tags. The data sets provide classification labels for aspect terms, aspect categories and sentiment polarities. The restaurant data also covers five categories (food, service, price, atmosphere and others) with the four corresponding sentiment polarities negative, neutral, positive and conflict.

2 evaluation index

In the experiments, for better comparison with other aspect term extraction models, the F1 value (F1-score) is used as the evaluation index, and the model is evaluated on the actual data obtained from the test set. The F1 value is an index for evaluating binary classification models, takes values between 0 and 1, and combines the two measures Precision and Recall. These indices can be calculated from the confusion matrix given in table 3 below:

TABLE 3 confusion matrix

                      Positive (Actual)      Negative (Actual)
Positive (Predicted)  True Positive (TP)     False Positive (FP)
Negative (Predicted)  False Negative (FN)    True Negative (TN)

The table contents are defined by equation 25:

the precision rate (P) is the percentage of the number of samples with positive correct prediction results in the samples with positive prediction results, and the recall rate (R) is the percentage of the number of samples with positive correct prediction results in the samples with positive actual results. The calculation formula is shown in the following formula 26:

in summary, a calculation of the value of F1 can be obtained, as shown in equation 3.27:

3 analysis of Experimental parameters and results

Experimental parameter setting

The SA-BiReGU model obtains and initializes its word embedding vectors with a pre-trained GloVe word vector model, and annotates the words with the POS tags generated by the Stanford POS tagger [60]. There are 41 different types of POS tags in the dataset; the experiment used all existing POS tags and let the model select the tags relevant to it during the training phase. Training is carried out with a learning rate of 0.01, dropout is set to 0.2, batch_size is set to 64, and Adam is selected as the optimizer; 10% of the data set is randomly extracted at the end of the experiment for validation, and the F1 value is calculated.

The Adam algorithm is widely used as an adaptive learning-rate algorithm. It converges quickly, has few hyperparameters, and is very suitable for experiments on large data sets. The calculation process of the algorithm is as follows:
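A sketch of one Adam update step in its textbook form, together with the optimizer configuration stated above (learning rate 0.01), is given below; the β and ε values are the common defaults and are assumptions here, not values specified in the text:

import torch

def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates with bias correction, then parameter step."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                  # bias-corrected second moment
    param = param - lr * m_hat / (torch.sqrt(v_hat) + eps)
    return param, m, v

# In practice the built-in optimizer is used with the stated hyperparameters:
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01)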

experimental results and analysis

A brief description of the comparison models follows:

1) CRF-1: extracts aspect terms with only the most basic conditional random field;

2) CRF-2: adds word embedding vectors on top of the basic conditional random field to extract aspect terms;

3) BLSTM-CRF: a bidirectional LSTM that extracts features using pre-trained word vectors;

4) IHS_RD: the winning scheme of the SemEval-2014 laptop domain;

5) Li: extracts aspect terms while considering the influence of the previous time step on the current prediction in sequence labeling;

6) DE-CNN: a CNN model that extracts aspect terms using general-purpose and domain-specific pre-trained word embeddings;

7) BiReGU-CRF: extracts aspect terms using a two-layer ReGU and an attention mechanism;

8) WDEmb: extracts aspect terms by feeding the CRF with aspect words and the weighted position information computed from the context words.

The results obtained by the model of the invention were compared experimentally with other reference models, as shown in table 4 below:

TABLE 4 Comparison of the F1 values of different models

Here "-" indicates that the model was not tested on that data set; the model name of the invention and the best result obtained on each data set are shown in bold.

As can be seen from Table 4, the method provided by the invention achieves a good effect compared with the other methods; its performance is inferior to DE-CNN only on the SemEval-2016 restaurant data set. Compared with the proposed method, DE-CNN uses a double-embedding combination for word embedding and carries out feature extraction through domain word embeddings and general word embeddings. Domain word embeddings help to better mine the relations between domain words in a specific domain, but do not transfer well to other domains; moreover, domain-specific word embeddings require manually annotated domain data, so their range of application is small. The DE-CNN model is therefore mainly advantageous in certain specific fields.

As can also be seen from Table 4, the CRF-1 model performs worse than CRF-2, mainly because the feature extraction ability of CRF-1 is limited, while the addition of word vectors brings a clear benefit to CRF-2. WDEmb further adds context vector features to enrich the CRF input, so its classification effect is better than that of CRF-2. From these three models the effectiveness of word embeddings for aspect term extraction can be seen. The effect of BiReGU-CRF is better than that of BLSTM-CRF, which further shows that BiReGU brings a certain improvement over BLSTM. Meanwhile, on top of the feature extraction model, BLSTM-CRF and WDEmb add a CRF, which better captures the dependency relations of different words in the network, so their effect is better than that of a bidirectional LSTM model based only on pre-trained word vectors. The model provided by the invention performs better than BiReGU-CRF, which shows that through the improvement of word embedding and the attention mechanism the relations among aspect terms can be mined more deeply and higher-level features can be extracted. In short, the model achieves good results mainly because POS tags are introduced, part-of-speech word vector information is added, and the hidden relations among aspect terms are further mined and modeled by the self-attention mechanism; the effectiveness of the method is proved by the experiments.

After the comparison with the other related models, ablation experiments were carried out in order to explore the effectiveness of the introduced modules. Three different models derived from the SA-BiReGU model were evaluated to study the importance of adding the part-of-speech tagging function and the impact of learning word position with the self-attention mechanism, as shown in table 5 below:

TABLE 5 ablation test results

From the results shown in table 5, it can be seen that both the added part-of-speech tagging function and the self-attention mechanism are important for improving the F1 value and the model's ability to recognize aspect terms in a sentence. The experimental results show that the model provided by the invention achieves good effects.

The present invention is based on a self-attention aspect term extraction model. Firstly, the characteristics and existing problems of the current mainstream aspect term extraction models are analyzed; then the self-attention-based aspect term extraction model SA-BiReGU is proposed; finally, comparative experiments prove the effectiveness of the proposed model.

It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD-or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier, for example. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.

The above description is only for the purpose of illustrating the present invention and the appended claims are not to be construed as limiting the scope of the invention, which is intended to cover all modifications, equivalents and improvements that are within the spirit and scope of the invention as defined by the appended claims.
