Mongolian aspect level emotion analysis method based on target template guidance and relation head coding

Document No.: 1846597    Publication date: 2021-11-16

Reading note: This technique, "Mongolian aspect level emotion analysis method based on target template guidance and relation head coding", was designed and created by 苏依拉, 王涵, 程永坤, 张妍彤, 仁庆道尔吉, and 吉亚图 on 2021-07-14. Its main content is as follows: A Mongolian aspect-level emotion analysis method based on target template guidance and relation head coding comprises: extracting a target template from a Chinese phrase structure tree to guide Mongolian-Chinese neural machine translation, and translating a Chinese aspect-level emotion corpus into a Mongolian aspect-level emotion corpus; performing dependency syntax analysis on the Mongolian aspect-level emotion corpus with a Mongolian dependency parser to obtain Mongolian dependency analysis trees; reconstructing each Mongolian dependency analysis tree into an aspect-oriented tree structure; adopting a graph attention neural network model and adding relation heads to obtain a relational graph attention network, which encodes the dependency relations in the reconstructed trees and establishes the connection between aspects and viewpoint words; and training the relational graph attention network to perform aspect-level emotion analysis on Mongolian, yielding a positive or negative emotion polarity. The invention improves the accuracy of Mongolian aspect-level emotion analysis.

1. A Mongolian aspect level emotion analysis method based on target template guidance and relation head coding is characterized by comprising the following steps:

step 1, performing phrase structure analysis on Chinese aspect level emotion corpus by using a Chinese phrase structure analyzer to obtain a Chinese phrase structure tree, extracting a target template from the Chinese phrase structure tree, guiding Mongolian neural machine translation, and translating the Chinese aspect level emotion corpus into Mongolian aspect level emotion corpus;

step 2, carrying out dependency syntax analysis on Mongolian aspect-level emotion linguistic data by using a Mongolian dependency syntax analyzer to obtain a Mongolian dependency analysis tree, wherein the Mongolian dependency analysis tree is composed of a plurality of nodes and dependency relations among the nodes;

step 3, reconstructing the Mongolian dependency analysis tree to obtain the Mongolian dependency analysis tree with an aspect-oriented tree structure;

step 4, adopting a graph attention neural network model and adding relation heads to obtain a relational graph attention network, encoding the dependency relations in the reconstructed Mongolian dependency analysis tree, and establishing the connection between aspects and viewpoint words, so as to improve the accuracy of Mongolian aspect-level emotion analysis;

and step 5, training the relational graph attention network to perform aspect-level emotion analysis on Mongolian and obtain a positive or negative emotion polarity.

2. The method for Mongolian aspect level emotion analysis based on target template guidance and relation head coding as claimed in claim 1, wherein in step 1, a target template, namely the phrase structure of a Chinese sentence, is generated from the Chinese phrase structure tree and used to learn the structure of the target sentence, i.e. the Mongolian sentence; the target template information is fused into the encoder-decoder framework of the Mongolian neural machine translation model, and a translation is generated from the target template together with the source text, i.e. the Chinese text, to obtain the Mongolian aspect level emotion corpus.

3. The Mongolian aspect level emotion analysis method based on target template guidance and relation head coding as claimed in claim 2, wherein the target-template-guided Mongolian neural machine translation is divided into two stages:

the method comprises the following steps that in the first stage, a standard Transformer model is trained to be used for predicting a target template by using a source text and the target template extracted from a Chinese phrase structure tree;

in the second stage, the source text and the target template are encoded using both the target template encoder and the source language encoder, and a final translation is generated by a decoder interacting with both encoders.

4. The Mongolian emotion analysis method based on target template guidance and relation head coding as claimed in claim 3, wherein the decoder and the source language encoder both use a Transformer model. In the second stage, the source language encoder reads the source language sequence X = {x1, x2, …, xi, …, xn}, and the target template encoder reads a template sequence consisting of target language words and non-terminal nodes, T = {t1, t2, …, ti, …, tm}, where xi is the source language token at position i, ti is the template token at position i, n is the length of the source sequence, and m is the length of the template. The source language encoder and the target template encoder map the source sequence X and the template sequence T to hidden-layer vectors respectively, and the decoder then generates the final translated target language sequence Y = {y1, y2, …, yi, …, yk}, where yi is the target language token at position i and k is the length of the target sequence. P(Y|X), the probability that the source sequence X translates into the target sequence Y, is factorized as:

P(Y|X) = P(T|X; θ_{X→T}) · P(Y|X, T; θ_{(X,T)→Y})

where P(T|X; θ_{X→T}) is the probability of obtaining the template sequence T from the source sequence X, θ_{X→T} are the parameters for predicting the target template from X, P(Y|X, T; θ_{(X,T)→Y}) is the probability of translating the source sequence X into the target sequence Y under the guidance of the template sequence T, and θ_{(X,T)→Y} are the parameters of that template-guided translation process.
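The two-stage factorization in claim 4 can be sketched numerically. The sketch below is a toy illustration, not the patent's trained models: the conditional probabilities are hypothetical hand-written tables, and the template/translation labels are placeholders.

```python
# Toy sketch of the two-stage factorization: stage 1 predicts a template T from
# the source X, stage 2 translates X into Y under the guidance of T, so that
#   P(Y|X) = P(T|X; theta_X_to_T) * P(Y|X, T; theta_XT_to_Y)
# All probabilities below are hypothetical placeholders.

# Hypothetical stage-1 model: probability of a template given the source sentence.
p_template_given_source = {
    ("我 喜欢 这家 餐厅", "NP VP NP"): 0.7,
    ("我 喜欢 这家 餐厅", "NP NP VP"): 0.3,
}

# Hypothetical stage-2 model: probability of a translation given source and template.
p_target_given_source_and_template = {
    ("我 喜欢 这家 餐厅", "NP VP NP", "mongolian_translation_a"): 0.8,
    ("我 喜欢 这家 餐厅", "NP NP VP", "mongolian_translation_a"): 0.4,
}

def p_translation(source: str, template: str, target: str) -> float:
    """P(Y|X) along one template path: P(T|X) * P(Y|X,T)."""
    return (p_template_given_source[(source, template)]
            * p_target_given_source_and_template[(source, template, target)])

prob = p_translation("我 喜欢 这家 餐厅", "NP VP NP", "mongolian_translation_a")
```

In the real method both factors are Transformer models sharing a decoder that attends to two encoders; the sketch only shows how the two stages compose into one translation probability.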

5. The method for Mongolian aspect level emotion analysis based on target template guidance and relation head coding as claimed in claim 1, wherein in step 2, the Mongolian dependency syntax parser is obtained by training on a manually labeled training set of Mongolian dependency parse trees.

6. The Mongolian emotion analysis method based on target template guidance and relation head coding as claimed in claim 1, wherein in step 3, the Mongolian dependency analysis tree is reconstructed as follows:

first, the node corresponding to an aspect in the Mongolian dependency analysis tree, namely the aspect node, is taken as the root, where an aspect refers to an entity or an entity attribute in Mongolian and is the object to be subjected to emotion analysis;

second, the nodes directly connected to the aspect node are set as its child nodes, and the original dependency relations are retained on these child nodes;

finally, such a dependency structure is constructed for each aspect node in the Mongolian sentence, yielding the aspect-oriented tree structure, namely the reconstructed Mongolian dependency analysis tree.
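The reconstruction steps above can be sketched on a toy parse. The edge list, relation labels, and the English sentence below are illustrative stand-ins, not output of the patent's Mongolian parser.

```python
# Minimal sketch of the aspect-oriented reconstruction in step 3 (claim 6):
# re-root the dependency parse at the aspect node and keep the original
# dependency relations on the directly connected nodes.

def aspect_oriented_tree(edges, aspect):
    """Re-root a dependency parse at the aspect node.

    edges: list of (head, dependent, relation) triples.
    Returns {child: relation} for every node directly connected to the aspect,
    keeping the original dependency relation.
    """
    children = {}
    for head, dep, rel in edges:
        if head == aspect:
            children[dep] = rel      # aspect governs dep: dep becomes a child
        elif dep == aspect:
            children[head] = rel     # aspect's governor also becomes a child of the root
    return children

# Toy parse of "the noodles are tasty" with "noodles" as the aspect.
edges = [
    ("tasty", "noodles", "nsubj"),
    ("noodles", "the", "det"),
    ("tasty", "are", "cop"),
]
tree = aspect_oriented_tree(edges, "noodles")
```

With "noodles" as the root, the opinion word "tasty" becomes a direct child via its original `nsubj` relation, which is exactly the aspect-to-opinion shortcut the reconstruction is meant to create.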

Morphologically, Mongolian builds words from a root or stem followed by suffixed components that derive new words and mark inflection. Structurally, Mongolian sentences follow a fairly regular word order: the subject usually precedes the predicate, modifiers precede the words they modify, and the predicate follows the object. Constructing an aspect-oriented tree structure in accordance with these linguistic characteristics of Mongolian therefore allows Mongolian to be processed more accurately.

7. The Mongolian emotion analysis method based on target template guidance and relation head coding as claimed in claim 1, wherein in step 4, the relation heads are used to control the information flow from neighboring nodes, and the dependency relations are encoded as follows:

first, the dependency relations are mapped into the vector representation of a relation head, computed as:

h_{rel,i}^{l+1} = ||_{m=1}^{M} Σ_{j∈N(i)} β_{ij}^{lm} W_m^l h_j^l

β_{ij}^{lm} = softmax_j( g_{ij}^{lm} )

g_{ij}^{lm} = σ( relu( r_{ij} W_{m1} + b_{m1} ) W_{m2} + b_{m2} )

where h_{rel,i}^{l+1} is the relation head of node i on layer l+1, ||_{m=1}^{M} denotes the concatenation of the M relation heads, β_{ij}^{lm} is the normalized attention coefficient computed by the m-th relation head on layer l, W_m^l is an input transformation matrix, h_j^l is the representation of node j on layer l, g_{ij}^{lm} is the nonlinear output of the m-th relation head on layer l after the activation functions, σ is the sigmoid activation function, relu is the ReLU activation function, r_{ij} is the embedding of the relation between node i and node j, W_{m1} and W_{m2} are the two input transformation matrices of relation head m, b_{m1} and b_{m2} are the corresponding bias parameters, and N(i) is the set of neighboring nodes of node i;

then, the representations of the neighboring nodes are aggregated with multi-head attention:

h_{att,i}^{l+1} = ||_{k=1}^{K} Σ_{j∈N(i)} α_{ij}^{lk} W_k^l h_j^l

α_{ij}^{lk} = softmax_j( attention(i, j) )

where h_{att,i}^{l+1} is the attention head of node i on layer l+1, ||_{k=1}^{K} denotes the concatenation of the K attention heads, α_{ij}^{lk} is the normalized attention coefficient computed by the k-th attention head on layer l, W_k^l is an input transformation matrix, and attention(i, j) uses dot-product attention;

a relation head is added to the original graph attention neural network to obtain the relational graph attention network, which establishes the connection between aspects and viewpoint words and improves the accuracy of Mongolian aspect-level emotion analysis; the relational graph attention network contains K attention heads and M relation heads, and the final representation of each node is:

x_i^{l+1} = h_{att,i}^{l+1} || h_{rel,i}^{l+1}

h_i^{l+1} = relu( W^{l+1} x_i^{l+1} + b^{l+1} )

where x_i^{l+1}, the value of node i on layer l+1, is obtained by splicing the attention head and the relation head, || denotes the concatenation of the attention head and the relation head of node i on layer l+1, h_i^{l+1} is the final representation of node i on layer l+1, relu is the activation function, W^{l+1} is an input transformation matrix, and b^{l+1} is a bias parameter.
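The node update described in claim 7 can be traced numerically. The sketch below follows the relational-graph-attention form of the equations with one relation head and one attention head; all dimensions and weight matrices are random placeholders, not trained parameters.

```python
import numpy as np

# Hedged numerical sketch of the claim-7 node update: one relation head
# (sigmoid/relu gate over relation embeddings r_ij) plus one dot-product
# attention head, concatenated into the node's new representation.

rng = np.random.default_rng(0)
d, n = 4, 3                     # hidden size, number of nodes (placeholders)
h = rng.normal(size=(n, d))     # node representations h_j on layer l
r = rng.normal(size=(n, n, d))  # relation embeddings r_ij

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One relation head m: g_ij = sigmoid(relu(r_ij W_m1 + b_m1) W_m2 + b_m2),
# beta_ij = softmax_j(g_ij), output = sum_j beta_ij * (W h_j).
W_m1, b_m1 = rng.normal(size=(d, d)), np.zeros(d)
W_m2, b_m2 = rng.normal(size=(d,)), 0.0
W = rng.normal(size=(d, d))

i = 0                           # target node (e.g. the aspect root)
g = np.array([
    1.0 / (1.0 + np.exp(-(np.maximum(r[i, j] @ W_m1 + b_m1, 0.0) @ W_m2 + b_m2)))
    for j in range(n)
])
beta = softmax(g)               # normalized relation-head coefficients
rel_head = beta @ (h @ W)       # relation-head representation of node i

# One attention head with dot-product attention over the same neighborhood.
alpha = softmax(h @ h[i])
att_head = alpha @ (h @ W)

# Final representation: concatenate attention head and relation head.
x_i = np.concatenate([att_head, rel_head])
```

With more heads, each head's output would be concatenated in the same way before the final linear layer and relu.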

8. The Mongolian emotion analysis method based on target template guidance and relation head coding as claimed in claim 7, wherein a Transformer is used in training the relational graph attention network. The word embeddings of the nodes of the reconstructed Mongolian dependency analysis tree are first encoded, and the output hidden state h_i serves as the initial representation of node i; the words corresponding to an aspect, i.e. the aspect words, are then encoded, and the average of their encoded hidden states is used as the initial representation of the corresponding root, h_a^0. After the relational graph attention network is applied to the reconstructed Mongolian dependency analysis tree, the root representation h_a^l is passed through a fully connected softmax layer and mapped to the probabilities p(a) of the different emotion polarities, computed as:

p(a) = softmax( W_p h_a^l + b_p )

where W_p is the input transformation matrix for the probability p(a), h_a^l is the vector representation of node a on layer l, and b_p is a bias;

the loss function L (θ) is the cross entropy loss, which is calculated as:

wherein (S, A) is a sentence-aspect pair consisting of a sentence S and an aspect A,is a set of all sentence-aspect pairs, a represents aspects appearing in sentence S, θ contains all trainable parameters, a is an aspect word, and model parameters of the graph attention network are obtained through training.
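The classifier and loss of claim 8 reduce to a softmax over polarity logits and a negative log-likelihood. The numbers below are illustrative placeholders standing in for W_p h_a^l + b_p.

```python
import math

# Minimal sketch of the claim-8 classifier and loss: the aspect's root
# representation is mapped to polarity probabilities by a softmax layer, and
# training minimizes cross-entropy over sentence-aspect pairs.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

def cross_entropy(probs, gold_index):
    """Negative log-probability of the gold polarity."""
    return -math.log(probs[gold_index])

# Hypothetical logits W_p h_a^l + b_p for polarities [negative, positive].
logits = [0.2, 1.3]
p = softmax(logits)
loss = cross_entropy(p, gold_index=1)   # gold polarity: positive
```

Summing this per-aspect loss over all sentence-aspect pairs gives the L(θ) above.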

Technical Field

The invention belongs to the technical field of artificial intelligence, and particularly relates to a Mongolian emotion analysis method based on target template guidance and relation head coding.

Background

Currently, methods for emotion analysis are mainly classified into two categories, namely methods based on emotion dictionaries and methods based on machine learning, wherein a deep learning method in the machine learning methods is a mainstream method for emotion analysis because the deep learning method can capture more comprehensive and deeper text information. Meanwhile, emotion analysis can be divided into three levels: document level, sentence level, and aspect level. Generally, a comment sentence may contain multiple aspects of opinions, wherein the aspects may be entities or attributes of entities, and emotional tendency of different aspects may be different. Document-level and sentence-level sentiment analysis is a coarse-grained sentiment analysis, generally assuming that an entire document or sentence contains only one topic of opinion, which is clearly not justified in many cases. Aspect-level sentiment analysis, i.e. fine-grained sentiment analysis, aims to judge the sentiment tendency of each aspect of the content under study, so that richer and more valuable information can be mined.

Most conventional approaches are traditional machine learning models based on dictionaries and syntactic features. The performance of these models depends heavily on the quality of hand-made features, which are labor-intensive to produce. Recent research has therefore turned to end-to-end deep neural network models. Existing deep-learning-based aspect level emotion analysis methods, classified by the deep learning technique they adopt, fall into the following five types: recursive neural networks, recurrent neural networks, attention-based recurrent neural networks, convolutional neural networks, and memory networks.

A recursive neural network (RecNN) is a neural network with a tree-like hierarchical structure in which network nodes process input information recursively in their order of connection; this methodology predicts the emotional polarity of an aspect from context and syntactic structure. A recurrent neural network (RNN) is a neural network for processing sequence data; its hidden layer has a feedback mechanism, i.e. the hidden state is influenced not only by the input but also by the previous hidden state. An attention-based recurrent neural network models aspects and sentences simultaneously to explicitly capture the interactions between a given aspect and its context words. Convolutional neural networks (CNN) are good at capturing local patterns and perform well on unstructured data such as images and text; fine-grained emotion analysis generally uses a CNN to extract local and global representations of the text. A memory network uses an attention mechanism with external memory to capture the information in a sentence that is important for a given aspect.

However, progress in Mongolian emotion analysis has been slow: Mongolian emotion corpora are lacking, and the complexity of the language, together with the presence of multiple aspects in a single sentence, causes existing models to confuse the connections between aspect words and emotion polarities.

In addition to the above problems, Mongolian uses suffixed components for word derivation and inflection, combining multiple such components to express multiple grammatical meanings; Mongolian nouns and pronouns have grammatical categories such as number and case, and verbs have grammatical categories such as voice, tense, and mood; the Mongolian verb follows the object, and attributives precede the words they modify. These characteristics of Mongolian also pose great challenges for Mongolian aspect-level emotion analysis.

As a core technology underlying natural language processing, syntactic analysis formally analyzes a sentence or word string according to some grammar system, from the perspective of its constituent components, to obtain a graph structure (usually a tree) showing the grammatical relations among the components. It is an important link between upper-layer applications and bottom-layer technology: on the one hand it is widely used in tasks such as text understanding, semantic disambiguation, backbone extraction, emotion analysis, and machine translation, and on the other hand it can help improve the accuracy and efficiency of bottom-layer tasks. Syntactic analysis has three tasks: judging whether an input string belongs to a given language, eliminating lexical and structural ambiguity in the input sentence, and analyzing the internal components of the input sentence, such as its constituents and context; the second and third tasks are generally the main tasks of syntactic analysis.

The phrase structure tree and the dependency parse tree are the two syntactic forms of syntactic parsing. Fig. 1 is a schematic diagram of a phrase structure tree, obtained by analyzing the phrase structure of a sentence and used to express its syntactic structure: only the leaf nodes correspond to words of the input sentence, the other intermediate nodes are labeled phrase constituents, and the node directly above each leaf node (word) is its part of speech. For example, IP-HLN is a single sentence serving as a headline, NP-SBJ is a noun phrase serving as the subject, NP-PN is a noun phrase that is a proper noun, NP is a noun phrase, and VP is a verb phrase. In Fig. 1, the leaf nodes "Shanghai" and "Pudong" are proper nouns; since the two adjacent proper nouns combine into a noun phrase, the combined label becomes the parent node of the two words. The leaf node "and" is a conjunction; "development" and the common nouns "legal" and "construction" are combined in the same way, their parent nodes are labeled, and the labels are finally merged to obtain the phrase structure tree.
Fig. 2 is a schematic diagram of a dependency analysis tree, obtained by dependency parsing of a sentence and carrying its dependency relations: if one word in the sentence modifies another, the modifying word is called the dependent, the modified word is called the governor, and the grammatical relation between the two is called the dependency relation. The dependency relation is marked on an arc, each word is in a box, and the number below a word is the position of its node; position 0 is the virtual root. The English tag to the right of a word is its part of speech. As a concrete example, "write" serves as the core component and is the root of the dependency analysis tree; "write" governs "program", so a dependency relation is added between the two words; and the degree word modifies "write", so its dependency arc points to "write". In a dependency analysis tree, every word is linked by an arc to some word in the sentence, with which it stands in a dependency relation.
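The node/head/relation representation described above can be written down as a tiny data structure. The sentence and labels below are illustrative English stand-ins, not the figure's actual example.

```python
# Tiny data sketch of a dependency analysis tree: each token has a position
# (0 is the virtual root), a head position, and a dependency relation label.

# "She writes programs": "writes" is the core component, so its head is root 0.
tokens = [
    {"pos": 1, "word": "She",      "head": 2, "rel": "nsubj"},
    {"pos": 2, "word": "writes",   "head": 0, "rel": "root"},
    {"pos": 3, "word": "programs", "head": 2, "rel": "obj"},
]

def head_word(tokens, pos):
    """Return the word governing the token at `pos` (None for the root)."""
    by_pos = {t["pos"]: t["word"] for t in tokens}
    head = next(t["head"] for t in tokens if t["pos"] == pos)
    return by_pos.get(head)

root = next(t["word"] for t in tokens if t["head"] == 0)
```

Every non-root token points to exactly one head, which is the "every word has an arc to some word" property the text describes.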

The existing syntactic analysis methods are mainly divided into three types: rule-based methods, statistical-based methods, and deep learning-based methods.

Rule-based syntactic analysis methods: early dependency-grammar-based analysis methods mainly include constraint-satisfaction-based methods and deterministic analysis strategies. However, even for an input sentence of medium length, analyzing all possible sentence structures with a large-coverage grammar is very difficult, and even when all possible structures are produced, it is hard to disambiguate effectively and select the most likely analysis. Hand-written rules carry a degree of subjectivity, must also account for generalization, and are hard to keep accurate in complex contexts; writing them is itself complex, labor-intensive work, and the rules are closely tied to their domain, which hinders porting a syntactic analysis system to other fields. Rule-based parsing algorithms handle the compilation of programming languages successfully, but natural language processing has always struggled to escape this dilemma, because programming languages are restricted to a strict subclass of context-free grammars, whereas the formal descriptions needed in natural language processing far exceed the expressive power of context-free grammars. Moreover, when people use programming languages, all expressions must comply with the machine's requirements; this is a process of people complying with machines, a mapping from the infinite set of language to a finite set. Natural language processing is the opposite: it requires the machine to track and comply with human language, generalizing from a finite set to the infinite set of language.

Statistics-based syntactic analysis methods: the field of statistical natural language processing has produced a large amount of excellent research, including generative, discriminative, and deterministic dependency analysis methods. The most typical is the Probabilistic Context-Free Grammar (PCFG), which is essentially an evaluation method over the set of candidate trees: it assigns higher scores to correct, reasonable syntax trees and lower scores to unreasonable ones, and thereby uses the scores to disambiguate.
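The PCFG scoring idea can be shown in a few lines. The grammar below is a hypothetical toy, not the patent's grammar: each rule carries a probability, a candidate tree's score is the product of the probabilities of the rules it applies, and disambiguation picks the highest-scoring tree.

```python
# Toy PCFG: rule -> probability. A candidate parse tree is scored by multiplying
# the probabilities of all rule applications in it, which is how a PCFG ranks
# competing analyses of the same sentence.
rules = {
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V", "NP")): 0.6,
    ("VP", ("V",)): 0.4,
}

def tree_score(applications):
    """Product of rule probabilities over a tree's rule applications."""
    score = 1.0
    for lhs, rhs in applications:
        score *= rules[(lhs, rhs)]
    return score

# Two candidate trees for the same sentence; the transitive reading wins.
good = tree_score([("S", ("NP", "VP")), ("VP", ("V", "NP"))])
bad = tree_score([("S", ("NP", "VP")), ("VP", ("V",))])
```

Real PCFG parsers estimate the rule probabilities from a treebank and search the candidate space with dynamic programming (CKY), but the scoring principle is exactly this product.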

Deep-learning-based syntactic analysis methods: in recent years, deep learning has become a research focus for syntactic analysis, with the main efforts concentrated on feature representation. Traditional methods mainly rely on manually defined atomic features and feature combinations, whereas deep learning vectorizes the atomic features (words, parts of speech, and category labels) and extracts features with a multi-layer neural network.

In general languages such as English and Chinese, research on syntactic analysis is relatively mature. With the construction of large-scale treebanks for these languages, syntactic analysis methods based on deep neural networks have been widely applied. In non-general languages such as Mongolian, however, research on syntactic analysis is still relatively scarce; at the present stage, the main methods applied to Mongolian syntactic analysis are still rule-based and statistics-based. In addition, compared with general-language treebanks of tens or even hundreds of thousands of sentences, the construction of Mongolian treebanks still lags behind.

In recent years, academia has paid increasing attention to the construction and development of language resources and language technology for low-resource languages, and many researchers have turned to their natural language processing, yet research on Mongolian parsing remains rare.

Disclosure of Invention

Based on the prior art, the technical problems to be solved by the invention include:

(1) Mongolian emotion corpus deficiency problem

Basic workflow of a neural machine translation model: the corpus is first preprocessed (Chinese word segmentation, BPE coding, etc.); an Encoder encodes the source language sentence into a feature vector of fixed dimension, and a Decoder then decodes the target language sentence from this feature vector, which contains the semantic information of the source sentence. Fig. 3 shows the structure of the encoder-decoder model, read bottom-up as the machine translation process. Conventional neural machine translation models typically translate source language text directly into target language text. In practice, however, a human translator usually first forms a rough idea of the sentence pattern or structure of the target text and only then translates the source text, rather than translating word by word.
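The encode-then-decode workflow above can be sketched schematically. The functions below are toy stand-ins for the real networks (the "feature" is just token counts), only illustrating that the source collapses to a fixed-size representation from which the target is generated.

```python
# Schematic sketch of the encoder-decoder workflow: the encoder compresses the
# source sentence into a fixed-size feature, and the decoder emits target tokens
# from that feature. Both functions are toy placeholders, not real networks.

def encode(source_tokens):
    """Toy 'encoder': a fixed-size feature (token count, total characters)."""
    return (len(source_tokens), sum(len(t) for t in source_tokens))

def decode(feature, vocab):
    """Toy 'decoder': deterministically emits as many tokens as the source had."""
    n_tokens, _ = feature
    return [vocab[i % len(vocab)] for i in range(n_tokens)]

source = ["我", "喜欢", "面条"]          # toy segmented Chinese source
feature = encode(source)                 # fixed-dimension representation
target = decode(feature, vocab=["tok_a", "tok_b", "tok_c"])
```

A real model replaces both functions with Transformer stacks and generates tokens autoregressively, but the data flow (source → fixed feature → target) is the same.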

Usually, in sentence-making exercises, a teacher first teaches some sentence patterns, such as "sb. like doing sth; the same be.." etc., and then has the students practice; these patterns are templates. Most existing neural machine translation models translate word by word from the source text, and the output has a stiff, machine-translated feel. Mongolian aspect-level emotion corpora are currently lacking, existing machine translation methods cannot learn the syntactic structure of a sentence, and emotion can be lost during translation; an emotion corpus that has lost its emotion is meaningless. Since no open-source Mongolian aspect-level emotion corpus exists, the first step of Mongolian emotion analysis is to obtain one.

(2) Problem of emotional polarity for specific aspects

Linking each aspect word with its corresponding viewpoint is the core of the aspect-level emotion analysis task, and various attention mechanisms can achieve this goal with good results. However, due to the complexity of language morphology and grammar, these mechanisms sometimes fail. Take the sentence "The noodles are so tasty, but the vegetables are bad": the word "tasty" is closer to "vegetables" than "bad" is, and in other reviews tasty vegetables may well appear, which makes the two words closely associated. Therefore, when analyzing people's attitude toward the vegetables, an attention mechanism may make errors.

The attention-based LSTM model is shown in Fig. 4. In the first sentence, "like" is a verb expressing a positive emotion toward "recipe"; but when "like" is used as a preposition in the second sentence, the model still assigns it a high weight, resulting in a wrong prediction. Fig. 5 shows a case where two aspects in one sentence have different emotional polarities: for the "chicken" aspect, the LSTM wrongly gives the words "but" and "dried" very high weights, which leads to another prediction error.

(3) Ambiguity problem between aspect word and viewpoint word

"Scientists count whales from space" can be understood as "the scientists, from space, count whales" or as "the scientists count the whales that are from space". This ambiguity arises because the object modified by the prepositional phrase (PP) "from space" is unknown. Besides PP attachment ambiguity, there are ambiguities such as coordination scope ambiguity, verb argument ambiguity, and verb phrase attachment ambiguity. A human can determine which of the many possible meanings a sentence expresses, but this is very difficult for a computer.

In order to solve the above problems, the present invention provides a Mongolian emotion analysis method based on target template guidance and relation head coding. A template extracted from the phrase structure tree serves as the target template to guide the translation process, and the Mongolian emotion corpus is obtained by translating a Chinese emotion corpus, solving the shortage of Mongolian emotion corpora. The connection between aspect words and viewpoint words is established through syntactic analysis, which resolves the ambiguity between them; relation head coding is added to process the information in the syntactic structure, improving the accuracy of Mongolian aspect-level emotion analysis.

In order to achieve the purpose, the invention adopts the technical scheme that:

a Mongolian aspect level emotion analysis method based on target template guidance and relational head coding comprises the following steps:

step 1, performing phrase structure analysis on Chinese aspect level emotion corpus by using a Chinese phrase structure analyzer to obtain a Chinese phrase structure tree, extracting a target template from the Chinese phrase structure tree, guiding Mongolian neural machine translation, and translating the Chinese aspect level emotion corpus into Mongolian aspect level emotion corpus;

step 2, carrying out dependency syntax analysis on Mongolian aspect-level emotion linguistic data by using a Mongolian dependency syntax analyzer to obtain a Mongolian dependency analysis tree, wherein the Mongolian dependency analysis tree is composed of a plurality of nodes and dependency relations among the nodes;

step 3, reconstructing the Mongolian dependency analysis tree to obtain the Mongolian dependency analysis tree with an aspect-oriented tree structure;

step 4, adopting a graph attention neural network model and adding relation heads to obtain a relational graph attention network, encoding the dependency relations in the reconstructed Mongolian dependency analysis tree, and establishing the connection between aspects and viewpoint words, so as to improve the accuracy of Mongolian aspect-level emotion analysis;

and step 5, training the relational graph attention network to perform aspect-level emotion analysis on Mongolian and obtain a positive or negative emotion polarity.

Compared with the prior art, the invention has the beneficial effects that:

1. The template extracted from the phrase structure tree is used as the target template to guide the translation process, preserving the emotion of the source language while the template is extracted. Most existing neural machine translation models translate word by word directly from the source text without considering context and semantics, so Mongolian emotion corpora obtained through ordinary Mongolian-Chinese neural machine translation lack emotion.

2. The invention uses two loss functions, which makes the model easier to train and avoids interference from noise in the received templates; since some low-quality templates would otherwise harm translation quality, a higher BLEU score is obtained. By optimizing both objectives simultaneously, the impact of low-quality templates is reduced and the stability of the model is improved.

3. The invention establishes the relation between the aspect words and the viewpoint words through the dependency syntax analysis, and can solve the ambiguity problem between the aspect words and the viewpoint words.

4. The invention reconstructs the ordinary dependency tree into an aspect-oriented dependency tree, which has the following advantages. First, each aspect has its own dependency tree and is less affected by unrelated nodes and relations; second, if an aspect contains multiple words, the dependencies are aggregated at that aspect; finally, such a unified tree structure not only lets the model focus on the connections between aspects and viewpoint words, but also facilitates batched and parallel operation during training.

5. The dependency analysis tree is encoded by a graph neural network, which can compute different nodes in parallel and can be used directly for inductive learning problems; this improves the accuracy of Mongolian aspect-level emotion analysis.

6. The invention adds relational head coding to process the information in the syntactic structure: the relational heads encode the dependency relations and establish the connection between aspects and viewpoint words, improving the accuracy of Mongolian aspect-level emotion analysis.

7. The invention can analyze the emotion of multiple aspects present in one sentence.

8. The method makes full use of the information in the Mongolian dependency analysis tree by combining the dependency syntax analysis and emotion analysis tasks, so as to improve the accuracy of Mongolian aspect-level emotion analysis.

Drawings

FIG. 1 is a diagram of a phrase structure tree.

FIG. 2 is a diagram of a dependency parse tree.

FIG. 3 illustrates modeling a sequence-to-sequence problem with an encoder-decoder model.

FIG. 4 is an attention-based LSTM model.

FIG. 5 is an illustration of a sentence with different emotional polarities for two aspects.

Fig. 6 is a flow chart of the main body of the present invention.

FIG. 7 is an example of a process for guiding translation using a soft target template.

FIG. 8 is a target language template directed Mongolian Chinese machine translation model.

FIG. 9 is a process for obtaining a Mongolian dependency parse tree using a Mongolian dependency parser.

FIG. 10 is a diagram of an attention neural network with relational heads added.

FIG. 11 is a schematic analysis flow chart according to an embodiment of the present invention.

Detailed Description

The embodiments of the present invention will be described in detail below with reference to the drawings and examples.

As shown in FIG. 6, the invention is a Mongolian aspect-level emotion analysis method based on target template guidance and relational head coding, comprising:

step 1, carrying out phrase structure analysis on the Chinese aspect level emotion corpus by using a Chinese phrase structure analyzer to obtain a Chinese phrase structure tree, extracting a target template from the Chinese phrase structure tree, guiding Mongolian neural machine translation, and translating the Chinese aspect level emotion corpus into Mongolian aspect level emotion corpus.

Because no public Mongolian aspect-level emotion corpus currently exists, obtaining one is the first step of Mongolian emotion analysis, and its emotion must be preserved as much as possible while the corpus is expanded. The method uses the Chinese phrase structure tree to generate a target template, i.e., the phrase structure of the Chinese sentence, thereby learning the structure of the target sentence, i.e., the Mongolian sentence; the target template information is fused into the encoder-decoder framework of Mongolian neural machine translation, and the translation is generated from the target template together with the source text, i.e., the Chinese, to obtain the Mongolian aspect-level emotion corpus.

In particular, to learn the syntactic structure of a target sentence, the present invention employs a phrase structure tree to generate candidate templates. As shown in FIG. 7, taking Chinese and English as an example, the invention first predicts the target template to be used, i.e., the phrase structure: given "I like playing basketball", it is easy to think of the sentence pattern "sb. likes doing sth.", which serves as the target template.

The present invention does not require the generated target-language translation to be based entirely on the template; the template merely provides a reference that assists translation, so the target template is a soft template.

To make more efficient use of templates, the present invention introduces a target template-based neural machine translation model, and FIG. 8 is a target template-guided Mongolian machine translation model that can use the source text and target templates to predict the final translation.

This step can be divided into two stages:

In the first stage, a standard Transformer model is trained to predict target templates, i.e., to generate the target template from the source text, using source texts and the target templates extracted from the Chinese phrase structure tree.

The present invention models P(T|X) using data from the source language S and the templates T so that the template can be predicted from the source language. To construct a source language-template data set, a phrase structure tree is used to parse the target-language text and obtain a tree structure. Nodes beyond a certain depth are then cut, and the cut subtrees are replaced in their original order to obtain the template data. Through these operations, source language-template parallel training data is obtained, and the Transformer model P(T|X) is trained to predict the target template.

The phrase structure tree displays the structure and grammar information of the entire sentence, with the grammar distinguishing terminal and non-terminal nodes. More specifically, the non-terminal nodes belong to a set S of non-terminal symbols, and the terminal nodes belong to a set V of target-language words. The target template is a subtree extracted at a particular depth and is generated from the terminal and non-terminal nodes located on the leaf nodes of that subtree.
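The cut-at-depth operation described above can be sketched as follows. This is a minimal, self-contained illustration, not the patent's implementation: the bracketed-parse reader and the function name `extract_template` are assumptions, and the depth value is illustrative.

```python
def parse(s):
    """Parse a bracketed phrase-structure string into (label, children)
    tuples; terminal nodes (words) are plain strings."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def read():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1
            label = tokens[pos]; pos += 1
            children = []
            while tokens[pos] != ")":
                children.append(read())
            pos += 1
            return (label, children)
        tok = tokens[pos]; pos += 1
        return tok

    return read()

def extract_template(tree, depth=0, max_depth=3):
    """Cut the tree at max_depth: subtrees below the cut are replaced by
    their non-terminal label, terminals above the cut keep their word,
    and left-to-right order is preserved."""
    if isinstance(tree, str):            # terminal node: keep the word
        return [tree]
    label, children = tree
    if depth >= max_depth:               # prune: keep the label only
        return [label]
    out = []
    for c in children:
        out.extend(extract_template(c, depth + 1, max_depth))
    return out

tree = parse("(S (NP (PRP I)) (VP (VBP like) (S (VP (VBG playing) (NP (NN basketball))))))")
print(extract_template(tree))            # ['I', 'like', 'VP']
```

Mixing surviving words with non-terminal labels is what produces soft templates of the "sb. likes doing sth." kind: the shallower the cut depth, the more abstract the template.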

To predict the target template, a Transformer model is trained on the training data of source texts and extracted templates. The Transformer model reads the source text and predicts the target template using beam search; the top-K results of the beam search are then selected as templates.

The selection of the subtree depth is a trade-off. Using the Transformer model requires constructing pseudo-training data (source language text, target template) rather than using templates extracted directly from the phrase structure tree. Given a source text X, a top-ranked target template T is generated by beam search using P(T|X). Finally, triple training data (source language text, target template, target text) is obtained in preparation for the next stage.

In the second stage, the source text and the target template are encoded by the source language encoder and the target template encoder, respectively, and the final translation is generated by a decoder interacting with both encoders.

In the invention, the decoder and both encoders adopt the Transformer model. Given the triple training data (source language text, target template, target text), the source language encoder reads the source sequence X = {x_1, x_2, …, x_i, …, x_n}, and the target template encoder reads the template sequence T = {t_1, t_2, …, t_i, …, t_m}, which consists of target-language words and non-terminal nodes; x_i is the source-language token at position i, t_i is the template token at position i, n is the source length, and m is the template length. The source language encoder and the target template encoder map the sequences X and T to hidden-layer vectors, i.e., encode the two sequences into hidden states, and the decoder then generates the final translated target sequence Y = {y_1, y_2, …, y_i, …, y_k}, where y_i is the target-language token at position i and k is the target length. The translation probability factorizes as:

P(Y|X) = P(T|X; θ_{X→T}) · P(Y|X, T; θ_{(X,T)→Y})

where P(T|X; θ_{X→T}) is the probability of obtaining the template sequence T from the source sequence X, θ_{X→T} are the parameters for obtaining the template sequence T from the source sequence X (its purpose is to predict the target template), P(Y|X, T; θ_{(X,T)→Y}) is the probability of translating the source sequence X into the target sequence Y under the guidance of the template sequence T, and θ_{(X,T)→Y} are the parameters of that guided translation process.

In the present invention, based on the source language encoder hidden states and the target template encoder hidden states, the decoder uses encoder-decoder multi-head attention to combine the source language and template information when generating Y. The decoder uses two sets of attention-mechanism parameters for the different encoders: attending to X and T respectively, it obtains two hidden states, one over the source context and one over the template context. The hidden state containing the source language information and the hidden state containing the template information are fused by a gating unit, as follows:

Z = β·Z_{X,Y} + (1 − β)·Z_{T,Y}

where Z_{X,Y} and Z_{T,Y} are hidden states of the decoder: Z_{X,Y} contains the source language information at decoding time, Z_{T,Y} contains the template information at decoding time, and β is a parameter that controls the degree of combination between the source text and the template. To fuse the source and template information effectively, the parameter β is calculated as follows:

β = σ(W_Y·Z_{X,Y} + U_T·Z_{X,T})

where W_Y and U_T are parameter matrices, σ is the Sigmoid activation function, and Z_{X,T} contains the template information at encoding time.
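The gated fusion above can be sketched numerically as follows. This is an assumed, toy-sized illustration (random vectors, hidden size 8), not the trained model; variable names follow the symbols in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # hidden size (illustrative)
Z_XY = rng.normal(size=d)                # decoder state attending to the source
Z_TY = rng.normal(size=d)                # decoder state attending to the template
Z_XT = rng.normal(size=d)                # template information at encoding time
W_Y = rng.normal(size=(d, d))            # parameter matrices of the gate
U_T = rng.normal(size=(d, d))

# beta = sigmoid(W_Y Z_XY + U_T Z_XT): an element-wise gate in (0, 1)
beta = 1.0 / (1.0 + np.exp(-(W_Y @ Z_XY + U_T @ Z_XT)))

# Z = beta * Z_XY + (1 - beta) * Z_TY: convex combination per dimension
Z = beta * Z_XY + (1.0 - beta) * Z_TY
assert Z.shape == (d,)
```

Because β is squashed into (0, 1), each dimension of Z lies between the corresponding dimensions of Z_{X,Y} and Z_{T,Y}, so the decoder smoothly interpolates between trusting the source and trusting the template.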

To predict the target sequence, the model parameters are updated using maximum likelihood estimation. When training P(Y|X) without the target template encoder, only the following loss function needs to be optimized:

L_1(θ_{X→Y}) = − Σ_{(X,Y)} log P(Y|X; θ_{X→Y})

where P(Y|X; θ_{X→Y}) is the probability that the source language X translates into Y, and θ_{X→Y} are the parameters of the source language encoder and decoder.

When training P(Y|X, T) with the target template encoder, the loss function is calculated as:

L_2(θ_{(X,T)→Y}) = − Σ_{(X,T,Y)} log P(Y|X, T; θ_{(X,T)→Y})

where θ_{(X,T)→Y} are the parameters of the source language encoder, target template encoder and decoder, and P(Y|X, T; θ_{(X,T)→Y}) is the probability of X being translated into Y under the guidance of T.

Optimizing the two loss functions makes the model easier to train, avoids interference from noise in the received template, and yields a higher BLEU score, because some low-quality templates affect translation quality; optimizing both objectives simultaneously reduces the impact of such templates and improves the stability of the model. To balance the two objectives, the translation model is trained iteratively on both at the same time:

L(θ) = α·L_1(θ_{X→Y}) + (1 − α)·L_2(θ_{(X,T)→Y})

where α is a parameter that balances the two loss functions.
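The joint objective can be illustrated with toy numbers. The per-token probabilities below are assumed values, purely for showing the arithmetic of L = α·L_1 + (1 − α)·L_2:

```python
import math

# assumed per-token probabilities for a 3-token target sentence
probs_no_template = [0.7, 0.6, 0.9]      # P(y_t | X)
probs_with_template = [0.8, 0.75, 0.95]  # P(y_t | X, T)

L1 = -sum(math.log(p) for p in probs_no_template)    # loss without template
L2 = -sum(math.log(p) for p in probs_with_template)  # loss with template
alpha = 0.5                                          # balancing parameter
L = alpha * L1 + (1 - alpha) * L2                    # joint objective
```

With α strictly between 0 and 1, a low-quality template that inflates L_2 cannot dominate the update, since L_1 (template-free translation) still contributes its share of the gradient.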

After the Mongolian neural machine translation is guided by the target template, Mongolian aspect-level emotion corpus with emotion is obtained.

Step 2: referring to FIG. 9, dependency syntax analysis is performed on the Mongolian aspect-level emotion corpus by a Mongolian dependency syntax parser to obtain a Mongolian dependency analysis tree, which is composed of nodes and the dependency relations among them. The Mongolian dependency syntax parser is obtained by training on a manually annotated Mongolian dependency analysis tree training set.

Step 3: taking the aspect, the sentence, the dependency analysis tree and the dependency relations as input, the Mongolian dependency analysis tree is reconstructed to obtain a Mongolian dependency analysis tree with an aspect-oriented tree structure. Here an aspect refers to a Mongolian entity or entity attribute, i.e., the object of the emotion analysis.

Morphologically, Mongolian derives new words and changes word forms by attaching additional components to a root or stem; structurally, Mongolian sentences have a fairly regular word order: the subject usually precedes the predicate, modifiers precede what they modify, and the object precedes the predicate. Constructing an aspect-oriented tree structure in accordance with these linguistic characteristics therefore allows Mongolian to be processed more accurately.

Based on this, the method for reconstructing the Mongolian dependency analysis tree in the step is as follows:

First, the aspect node, i.e., the node corresponding to the aspect in the Mongolian dependency analysis tree, is taken as the root;

Second, the nodes directly connected to the aspect node are set as child nodes, and their original dependency relations are retained;

Finally, such a dependency structure is constructed for each aspect node in the Mongolian sentence, yielding the aspect-oriented tree structure, i.e., the reconstructed Mongolian dependency analysis tree. A new "n:con" relation is introduced for the remaining nodes rather than discarding them directly, which enhances robustness, since the dependency syntax parser may produce parsing errors.
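The three reconstruction steps above can be sketched as follows, under the simplifying assumption that the dependency tree is given as (head, dependent, relation) triples over 1-based token indices; the function name `reconstruct` and the toy tree are illustrative, not from the patent.

```python
def reconstruct(edges, aspect):
    """Re-root the dependency tree at the aspect node: tokens directly
    connected to the aspect keep their original relation (edges pointing
    at the aspect are inverted), and all remaining tokens are attached
    with the virtual relation 'n:con' instead of being discarded."""
    tokens = {h for h, d, r in edges} | {d for h, d, r in edges}
    children = {}
    for h, d, rel in edges:
        if h == aspect:
            children[d] = rel            # keep the original dependency
        elif d == aspect:
            children[h] = rel            # invert the edge toward the aspect
    for t in sorted(tokens - {aspect} - set(children)):
        children[t] = "n:con"            # virtual relation for robustness
    return children

# toy tree: token 2 is the root verb, 1 the aspect, 3 an opinion word,
# 4 a modifier two hops away from the aspect
edges = [(2, 1, "nsubj"), (2, 3, "obj"), (3, 4, "amod")]
print(reconstruct(edges, aspect=1))      # {2: 'nsubj', 3: 'n:con', 4: 'n:con'}
```

Note how the opinion word (token 3), originally two hops from the aspect, becomes a direct child of the aspect root, which is precisely what lets the later attention layers connect aspects and viewpoint words in one step.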

Step 4: a graph attention neural network model is adopted and relational heads are added to obtain a relational graph attention network, which encodes the dependency relations in the reconstructed Mongolian dependency analysis tree and establishes the connection between aspects and viewpoint words, so as to improve the accuracy of Mongolian aspect-level emotion analysis.

In this step, the relational heads are used to control the information flow from neighborhood nodes. Referring to FIG. 10, the process of encoding the dependency relations is as follows:

First, the dependency relations are mapped into vector representations and aggregated by the relational heads, computed as:

h^{l+1}_{rel,i} = ||_{m=1}^{M} Σ_{j∈N_i} β^{lm}_{ij} W^{l}_{m} h^{l}_{j}

β^{lm}_{ij} = softmax_j(g^{lm}_{ij}),  g^{lm}_{ij} = σ(relu(r_{ij} W_{m1} + b_{m1}) W_{m2} + b_{m2})

where h^{l+1}_{rel,i} is the relational-head representation of node i at layer l+1, ||_{m=1}^{M} denotes the concatenation of the M relational heads, β^{lm}_{ij} is the normalized attention coefficient calculated by the m-th relational head at layer l, W^{l}_{m} is an input transformation matrix, h^{l}_{j} is the representation of node j at layer l, g^{lm}_{ij} is the nonlinear output of the m-th relational head after the activation functions, σ is the Sigmoid activation function, relu is the ReLU activation function, r_{ij} is the embedding of the relation between node i and node j, W_{m1} and W_{m2} are the input transformation matrices of the two layers of relational head m, b_{m1} and b_{m2} are the corresponding bias parameters, and N_i is the set of neighbor nodes of node i.

Then, the representations of the neighbor nodes are aggregated with multi-head attention:

h^{l+1}_{att,i} = ||_{k=1}^{K} Σ_{j∈N_i} α^{lk}_{ij} W^{l}_{k} h^{l}_{j}

where h^{l+1}_{att,i} is the attention-head representation of node i at layer l+1, ||_{k=1}^{K} denotes the concatenation of the K attention heads, α^{lk}_{ij} is the normalized attention coefficient calculated by the k-th attention head at layer l, W^{l}_{k} is an input transformation matrix, and the score attention(i, j) is computed by dot-product attention;

Adding relational heads to the original graph attention neural network yields the relational graph attention network, which establishes the connection between aspects and viewpoint words and improves the accuracy of Mongolian aspect-level emotion analysis. The relational graph attention network contains K attention heads and M relational heads, and the final representation of each node is:

x^{l+1}_{i} = h^{l+1}_{att,i} || h^{l+1}_{rel,i}

h^{l+1}_{i} = relu(W_{l+1} x^{l+1}_{i} + b_{l+1})

where x^{l+1}_{i}, the value of node i at layer l+1, is obtained by splicing the attention-head and relational-head representations of node i, || represents their concatenation, h^{l+1}_{i} is the final representation of node i at layer l+1, relu is the activation function, W_{l+1} is an input transformation matrix, and b_{l+1} is a bias parameter.
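One layer update of the relational graph attention network can be sketched in numpy as below. This is an assumed toy (random weights, 4 nodes, 2 heads of each kind), not the trained model: K attention heads score neighbors by dot products over node states, M relational heads score neighbors from the relation embeddings r_ij, and the concatenated heads pass through a relu-activated linear map.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(1)
N, d, K, M = 4, 6, 2, 2                      # nodes, hidden size, head counts
h = rng.normal(size=(N, d))                  # node states h^l
r = rng.normal(size=(N, N, d))               # relation embeddings r_ij
Wk = rng.normal(size=(K, d, d))              # per-attention-head transforms
Wm = rng.normal(size=(M, d, d))              # per-relational-head transforms
Wm1 = rng.normal(size=(M, d, d)); bm1 = np.zeros((M, d))
Wm2 = rng.normal(size=(M, d));    bm2 = np.zeros(M)

i, nbrs = 0, [1, 2, 3]                       # update node 0 from its neighbors

# K attention heads: alpha_ij from dot-product scores over node states
att = [softmax(np.array([h[i] @ Wk[k] @ h[j] for j in nbrs]))
       @ np.array([Wk[k] @ h[j] for j in nbrs]) for k in range(K)]

# M relational heads: g_ij = sigmoid(relu(r_ij W_m1 + b_m1) W_m2 + b_m2),
# normalized over neighbors by softmax
rel = [softmax(np.array([sigmoid(np.maximum(r[i, j] @ Wm1[m] + bm1[m], 0)
                                 @ Wm2[m] + bm2[m]) for j in nbrs]))
       @ np.array([Wm[m] @ h[j] for j in nbrs]) for m in range(M)]

# final node update: concatenate all heads, then relu(W_{l+1} x + b_{l+1})
x = np.concatenate(att + rel)                # shape ((K + M) * d,)
W_out = rng.normal(size=(d, (K + M) * d)); b_out = np.zeros(d)
h_new = np.maximum(W_out @ x + b_out, 0)     # h_i^{l+1}
assert h_new.shape == (d,)
```

The key design point is visible in the two list comprehensions: the attention heads weight neighbors by *what they say* (node states), while the relational heads weight them by *how they are connected* (dependency labels), so an "amod" or "n:con" edge can directly steer information from a viewpoint word to the aspect root.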

The invention trains the relational graph attention network with a Transformer. First, the word embeddings of the reconstructed Mongolian dependency analysis tree nodes are encoded to obtain the initial representation, i.e., the output hidden state h^{0}_{i} of node i; then the specific words corresponding to the aspect are encoded, and the average of their hidden states is taken as the initial representation h^{0}_{a} of the corresponding root. After applying the relational graph attention network to the reconstructed Mongolian dependency analysis tree, the root representation passes through a fully connected softmax layer and is mapped to the probabilities p(a) of the different emotion polarities, calculated as follows:

p(a) = softmax(W_p h^{l}_{a} + b_p)

where W_p is the input transformation matrix for the probability p(a), h^{l}_{a} is the vector representation of the root node a at layer l, and b_p is a bias;

The loss function L(θ) is the cross-entropy loss, calculated as:

L(θ) = − Σ_{(S,A)∈D} log p(a)

where (S, A) is a sentence-aspect pair consisting of a sentence S and an aspect A, D is the set of all sentence-aspect pairs, A represents an aspect appearing in the sentence S, θ includes all trainable parameters, and a is the specific word corresponding to the aspect. The model parameters of the relational graph attention network are obtained through this training.
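The classification head and its cross-entropy loss can be illustrated with assumed toy values (random root representation, two polarity classes); this is a sketch of the formulas above, not the trained parameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, n_classes = 6, 2                      # hidden size; positive / negative
h_a = rng.normal(size=d)                 # root representation h^l_a
W_p = rng.normal(size=(n_classes, d))    # input transformation matrix
b_p = np.zeros(n_classes)                # bias

p = softmax(W_p @ h_a + b_p)             # p(a): probabilities over polarities
gold = 0                                 # index of the true polarity
loss = -np.log(p[gold])                  # cross-entropy for one pair (S, A)
```

Summing this per-pair loss over the whole set D of sentence-aspect pairs gives L(θ), which is minimized to obtain the model parameters.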

Step 5: the relational graph attention network is trained so that it can perform aspect-level emotion analysis on Mongolian, yielding a positive or negative emotion polarity.

As shown in FIG. 11, in one embodiment of the method, "today's weather is really good" is the existing Chinese aspect-level emotion corpus, and target-template-guided neural machine translation is performed on it to obtain the Mongolian aspect-level emotion corpus. Dependency syntax analysis is then performed on the Mongolian corpus by the Mongolian dependency syntax parser to obtain the Mongolian dependency analysis tree, and the tree is reconstructed to solve the ambiguity problem between aspect words and viewpoint words. Since the aspect word and the viewpoint word have no direct connection, the virtual relation "n:con" is added; finally, the information in the syntactic structure is encoded and processed by the relational graph attention network, and the emotion output is "positive". On this basis, the Mongolian aspect-level emotion corpus is greatly expanded, so that the accuracy of Mongolian aspect-level emotion analysis can be greatly improved.
