Multiple-choice machine reading comprehension method based on multi-stage maximized attention

Document No. 701004, published 2021-04-13

Reading note: this technique, "A multiple-choice machine reading comprehension method based on multi-stage maximized attention", was designed and created by 颜洪, 黄青松, 刘利军 and 冯旭鹏 on 2020-12-29. Its main content is as follows: The invention relates to a multiple-choice machine reading comprehension method based on multi-stage maximized attention, belonging to the technical field of computer natural language processing. The invention comprises the following steps: first, preliminary encoding of sentences, questions and answer options is completed by a pre-trained language model, while the important relations between words and between sentences are captured by multi-stage maximized attention; then, matching scores between the question and each sentence and between the answer options and each sentence are computed with a bilinear function to determine the evidence sentences; finally, the evidence sentences, the question and the answer options are fused by a hierarchical attention mechanism to obtain the final answer. The method effectively captures the important word-level and sentence-level relations, and its accuracy is about 4% higher than that of traditional multiple-choice reading comprehension methods.

1. A multiple-choice machine reading comprehension method based on multi-stage maximized attention, characterized by comprising the following specific steps:

Step1, collecting articles, questions and answer options as experimental data, preprocessing them, and generating content word vectors, question word vectors and answer option word vectors using a pre-trained language model as the content encoder;

Step2, capturing word-level and sentence-level relations: after the Step1 preprocessing, capturing the important relations between words and between sentences using multi-stage maximized attention at the word level and the sentence level respectively, to obtain a content-dependent sentence feature representation;

Step3, extracting evidence sentences: using the sentence feature representation obtained at Step2, extracting evidence sentences in combination with the question vectors and the answer option vectors to form a new content feature representation from which to infer the answer to the question;

Step4, post-processing: combining the new content feature representation obtained at Step3 with the question vector and answer option vector from Step1 to output a score for each option of the question and determine the final answer.

2. The multiple-choice machine reading comprehension method based on multi-stage maximized attention according to claim 1, wherein the specific steps of Step1 are as follows:

Step1.1, first collecting articles, questions and answer options from the junior-high-school and senior-high-school question sets of a public dataset website;

Step1.2, performing tokenization and sentence-splitting preprocessing on the articles, questions and answer options in the dataset;

Step1.3, training and encoding the preprocessed data in the word-vector training mode of a pre-trained language model to obtain the content word vectors $H_D$, question word vectors $H_Q$ and answer option word vectors $H_A$.

3. The multiple-choice machine reading comprehension method based on multi-stage maximized attention according to claim 1, wherein the specific steps of Step2 are as follows:

Step2.1, after data preprocessing and encoding, computing from the content word vectors, via a bilinear function, a soft attention alignment matrix $M_D$ between any two words, each element of which represents the relation between two words;

Step2.2, feeding the resulting soft attention alignment matrix $M_D$ into the first maximized-attention part of the deep model; to capture the important relation between any two words, the important-relation matrix obtained by column-based max pooling is fed into a Softmax layer to obtain the inter-word attention vector matrix $G_D$;

Step2.3, to weight the relations by their importance under content awareness, applying the attention vector matrix $G_D$ to the content word vectors $H_D$, finally obtaining a content vector $\tilde{H}_D$ that captures the word-level relations;

Step2.4, repeating Step2.1 to Step2.3 k times to obtain a content vector $\tilde{H}_D$ that captures the more important word-level relations;

Step2.5, from the content vector $\tilde{H}_D$ obtained at Step2.4, deriving the sentence feature representation vectors $S_D$ by self-attention over the words in each sentence, and then obtaining, via a bilinear function, a soft attention alignment matrix $M_S$ between any two sentences;

Step2.6, using the calculation procedure of Step2.2 to Step2.4, obtaining the inter-sentence attention vector matrix $G_S$ by column-based max pooling combined with the Softmax layer;

Step2.7, multiplying the inter-sentence attention vector matrix $G_S$ with the sentence feature representation $S_D$ obtained at Step2.5 to obtain the sentence content feature representation $\hat{S}_D$, and repeating Step2.5 to Step2.7 k times to obtain a sentence content feature representation that captures the more important relations.

4. The multiple-choice machine reading comprehension method based on multi-stage maximized attention according to claim 1, wherein the specific steps of Step3 are as follows:

Step3.1, finding the evidence sentences according to the question vector and the answer option vectors, then inferring the answer from the evidence sentences in combination with the question vector and the answer option vectors;

Step3.2, taking the sentence content feature representation obtained at Step2 as the center, computing the matching scores between the question vector and each sentence representation and between the answer option vector and each sentence representation respectively, adding the two matching scores to obtain a sentence score vector, and finally combining the T sentences with the highest scores into a new content feature representation.

5. The multiple-choice machine reading comprehension method based on multi-stage maximized attention according to claim 1, wherein the specific steps of Step4 are as follows:

Step4.1, concatenating the question word vectors $H_Q$ and the answer option word vectors $H_A$ to obtain $H_{QA}$, and deriving a vector $S_{QA}$ from $H_{QA}$ by self-attention over the words in the sentence;

Step4.2, determining the importance of each sentence conditioned on the question and answer options using sentence-level hierarchical attention (SHA), finally obtaining the content vector $H'$ under question-and-option awareness;

Step4.3, performing multi-class classification on the content vector $H'$ obtained at Step4.2 with a softmax function, and, on the basis of the classification model, adopting a cross-entropy loss function to finally obtain the selection result, i.e. the correct answer.

Technical Field

The invention relates to a multiple-choice machine reading comprehension method based on multi-stage maximized attention, and belongs to the technical field of computer natural language processing.

Background

The purpose of Machine Reading Comprehension (MRC) is to teach machines to read passages and answer questions, a clear and long-standing goal of Natural Language Understanding (NLU). MRC tasks can be roughly classified into two types according to their answer style: generative MRC and selective MRC. Generative reading comprehension requires the model to generate answers from paragraphs and questions, as in the SQuAD and BoolQ datasets. Unlike the generative task, in selective reading comprehension the model is given several candidate answers and must select the best one, as in the RACE and Dream datasets. As these datasets have been developed and applied, the task of machine reading comprehension has made significant progress.

The release of various benchmark datasets has generated great interest in machine reading comprehension research. Many neural MRC models have been proposed for these datasets to date, and the most successful models tend to construct interdependent representations of the document and the question with either co-attention or bi-directional attention mechanisms. However, attention over the entire document is very noisy and redundant, and encodes many insignificant dependencies. Therefore, some recent work has focused on selecting sentences with which to answer the question. Due to the lack of evidence labels for supervision, extracting evidence sentences for machine reading comprehension remains a significant challenge. Recently, evidence sentence extraction has mainly been addressed by three kinds of methods: 1) rule-based methods, which generate or refine distant labels using manually established rules and external resources; 2) reinforcement-learning-based methods, which employ Reinforcement Learning (RL) to determine the labels of evidence sentences; 3) neural methods, which use a neural model to calculate the similarity between the question and each sentence.

However, most previous work has focused only on capturing the semantic relationship between the question and the candidate sentence, which directly serves the target of the task. This approach ignores both the relations between words and the relations between sentences, which are also useful when extracting evidence sentences to infer answers. Furthermore, even humans sometimes have difficulty finding evidence sentences when the relationship between the question and its correct answer option is only implicit in the document. Previous work modeled only the relationship between the question and each sentence, ignoring much of the information in the reference document about the relations between words and between sentences, which is likewise useful for extracting evidence sentences to infer answers.

Disclosure of Invention

Compared with traditional machine reading comprehension methods, the method provided by the invention fully considers the important relations among words and among sentences, and can extract evidence sentences and predict answers more comprehensively and accurately.

The technical scheme of the invention is as follows: a multiple-choice machine reading comprehension method based on multi-stage maximized attention, comprising the following specific steps:

Step1, collecting articles, questions and answer options as experimental data, preprocessing them, and generating content word vectors, question word vectors and answer option word vectors using a pre-trained language model as the content encoder;

Step2, capturing word-level and sentence-level relations: after the Step1 preprocessing, capturing the important relations between words and between sentences using multi-stage maximized attention at the word level and the sentence level respectively, to obtain a content-dependent sentence feature representation;

Step3, extracting evidence sentences: using the sentence feature representation obtained at Step2, extracting evidence sentences in combination with the question vectors and the answer option vectors to form a new content feature representation from which to infer the answer to the question;

Step4, post-processing: combining the new content feature representation obtained at Step3 with the question vector and answer option vector from Step1 to output a score for each option of the question and determine the final answer.

As a further scheme of the invention, Step1 comprises the following specific steps:

Step1.1, first collecting articles, questions and answer options from the junior-high-school and senior-high-school question sets of a public dataset website;

Step1.2, performing tokenization and sentence-splitting preprocessing on the articles, questions and answer options in the dataset;

Step1.3, training and encoding the preprocessed data in the word-vector training mode of a pre-trained language model to obtain the content word vectors $H_D$, question word vectors $H_Q$ and answer option word vectors $H_A$ (see the encoding sketch below).
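As an illustration of Step1.3, the following minimal sketch encodes text with a pre-trained BERT network (named later in the description); the HuggingFace transformers package and the bert-base-uncased checkpoint are assumptions of this sketch, not stated in the patent.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode(text: str) -> torch.Tensor:
    """Return one contextual vector per token; these play the role of the
    content, question and option word vectors H_D, H_Q and H_A."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state.squeeze(0)  # (num_tokens, hidden_dim)

H_D = encode("The article text, split into sentences ...")
H_Q = encode("What is the main idea of the passage?")
H_A = encode("Option A: ...")
```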

As a further scheme of the invention, Step2 comprises the following specific steps:

Step2.1, after data preprocessing and encoding, computing from the content word vectors, via a bilinear function, a soft attention alignment matrix $M_D$ between any two words, each element of which represents the relation between two words;

Step2.2, feeding the resulting soft attention alignment matrix $M_D$ into the first maximized-attention part of the deep model; to capture the important relation between any two words, the important-relation matrix obtained by column-based max pooling is fed into a Softmax layer to obtain the inter-word attention vector matrix $G_D$;

Step2.3, to weight the relations by their importance under content awareness, applying the attention vector matrix $G_D$ to the content word vectors $H_D$, finally obtaining a content vector $\tilde{H}_D$ that captures the word-level relations;

Step2.4, repeating Step2.1 to Step2.3 k times to obtain a content vector $\tilde{H}_D$ that captures the more important word-level relations;

Step2.5, from the content vector $\tilde{H}_D$ obtained at Step2.4, deriving the sentence feature representation vectors $S_D$ by self-attention over the words in each sentence, and then obtaining, via a bilinear function, a soft attention alignment matrix $M_S$ between any two sentences;

Step2.6, using the calculation procedure of Step2.2 to Step2.4, obtaining the inter-sentence attention vector matrix $G_S$ by column-based max pooling combined with the Softmax layer;

Step2.7, multiplying the inter-sentence attention vector matrix $G_S$ with the sentence feature representation $S_D$ obtained at Step2.5 to obtain the sentence content feature representation $\hat{S}_D$, and repeating Step2.5 to Step2.7 k times to obtain a sentence content feature representation that captures the more important relations (one such maximized-attention stage is sketched in code below).
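The following sketch shows one maximized-attention stage as described in Step2.1 to Step2.3: a bilinear alignment matrix, column-based max pooling, a Softmax layer, and element-wise application to the inputs. Treating $G$ as a vector after pooling, sharing parameters across the k repetitions, and the dimensions are assumptions of this sketch, not the patent's exact parameterisation; the same stage can be run at the word level (on $H_D$) and at the sentence level (on $S_D$).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaxAttentionStage(nn.Module):
    """One maximized-attention stage (a sketch under assumed dimensions)."""

    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Linear(dim, dim)  # the linear function f in the bilinear score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) word or sentence vectors.
        m = self.f(x) @ self.f(x).T                 # soft alignment matrix M, (n, n)
        # Column-based max pooling keeps each item's strongest relation,
        # then the Softmax layer turns the pooled scores into attention G.
        g = F.softmax(m.max(dim=0).values, dim=-1)  # (n,)
        return g.unsqueeze(-1) * x                  # element-wise application

def multi_stage(x: torch.Tensor, stage: MaxAttentionStage, k: int) -> torch.Tensor:
    """Repeat the stage k times, as in Step2.4 and Step2.7."""
    for _ in range(k):
        x = stage(x)
    return x
```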

As a further scheme of the invention, Step3 comprises the following specific steps:

Step3.1, finding the evidence sentences according to the question vector and the answer option vectors, then inferring the answer from the evidence sentences in combination with the question vector and the answer option vectors;

Step3.2, taking the sentence content feature representation obtained at Step2 as the center, computing the matching scores between the question vector and each sentence representation and between the answer option vector and each sentence representation respectively, adding the two matching scores to obtain a sentence score vector, and finally combining the T sentences with the highest scores into a new content feature representation (see the selection sketch below).
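A minimal sketch of Step3.2, assuming bilinear score layers and a top-T hyperparameter; the layer names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class EvidenceSelector(nn.Module):
    """Score each sentence against the question and the answer option with
    bilinear functions; the T highest-scoring sentences form the new content
    feature representation, kept in document order."""

    def __init__(self, dim: int, top_t: int):
        super().__init__()
        self.match_q = nn.Bilinear(dim, dim, 1)  # question-sentence score
        self.match_a = nn.Bilinear(dim, dim, 1)  # option-sentence score
        self.top_t = top_t

    def forward(self, sentences: torch.Tensor, q: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # sentences: (n, dim); q, a: (dim,)
        n = sentences.size(0)
        q_rep = q.unsqueeze(0).repeat(n, 1)
        a_rep = a.unsqueeze(0).repeat(n, 1)
        # Sentence score vector: sum of the two bilinear matching scores.
        scores = (self.match_q(q_rep, sentences) + self.match_a(a_rep, sentences)).squeeze(-1)
        top = scores.topk(min(self.top_t, n)).indices
        return sentences[top.sort().values]  # T best sentences, document order
```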

As a further scheme of the invention, Step4 comprises the following specific steps:

Step4.1, concatenating the question word vectors $H_Q$ and the answer option word vectors $H_A$ to obtain $H_{QA}$, and deriving a vector $S_{QA}$ from $H_{QA}$ by self-attention over the words in the sentence;

Step4.2, determining the importance of each sentence conditioned on the question and answer options using sentence-level hierarchical attention (SHA), finally obtaining the content vector $H'$ under question-and-option awareness;

Step4.3, performing multi-class classification on the content vector $H'$ obtained at Step4.2 with a softmax function, and, on the basis of the classification model, adopting a cross-entropy loss function to finally obtain the selection result, i.e. the correct answer (a classification sketch follows below).
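A sketch of Step4.2 to Step4.3 under assumed layer shapes; the additive conditioning on $S_{QA}$ and the layer names are assumptions of this illustration, not the patent's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnswerScorer(nn.Module):
    """Sentence-level hierarchical attention over the evidence sentences,
    conditioned on the fused question-option vector S_QA, producing one
    logit per option."""

    def __init__(self, dim: int):
        super().__init__()
        self.att = nn.Linear(dim, 1)  # sentence-level attention scorer
        self.out = nn.Linear(dim, 1)  # one logit per (question, option) pair

    def forward(self, evidence: torch.Tensor, s_qa: torch.Tensor) -> torch.Tensor:
        # evidence: (n, dim) evidence sentence vectors; s_qa: (dim,)
        # gamma: importance of each sentence given the question and option.
        gamma = F.softmax(self.att(torch.tanh(evidence + s_qa)).squeeze(-1), dim=-1)
        h_prime = gamma @ evidence  # content vector H' under question/option awareness
        return self.out(h_prime)    # scalar logit for this option

# Softmax + cross-entropy over the four option logits picks the answer:
# logits = torch.cat([scorer(evidence, s_qa[o]) for o in range(4)])
# loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([gold_index]))
```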

Further, the pre-trained language model is based on a pre-trained BERT network. The bilinear function that computes the soft attention alignment matrix $M_D$ between any two words is given by $M_D = f(U_D)\,f(V_D)^T$, where $U_D$ and $V_D$ are k-dimensional word vectors and $f$ is a linear function. The important-relation matrix obtained by column-based max pooling is $G = \mathrm{softmax}(\max_{col} M_D)$. Applying the attention vector $G$ to the content vector yields the content vector of word-level relations, $\tilde{H}_D = G \odot H_D$, where $\odot$ denotes the element-wise product and $H_D$ is the content word vector. The sentence feature representation obtained by self-attention is $\alpha_i = \mathrm{softmax}(f(\tilde{H}_D^i))$ and $S_i = \sum_j \alpha_{ij}\,\tilde{h}_j^i$, where $\alpha_{ij}$ denotes the $j$-th element of $\alpha_i$, $\tilde{H}_D^i$ denotes the content word vectors of the $i$-th sentence, $\tilde{h}_j^i$ denotes the $j$-th word vector of the $i$-th sentence, $S_i$ denotes the $i$-th element of the sentence feature representation $S_D$, and $f$ is a linear function. After the multi-stage maximized-attention content vector is obtained, the matching scores between the sentence vectors and the question vector are computed to select and retain the T important evidence sentences. The content vector under the question and answer options is then computed by sentence-level attention as $H' = \sum_i \gamma_i\,\hat{S}_i$, where $\gamma_i$ represents the weight that the $i$-th sentence occupies in the content.

In the Softmax layer, on the basis of the classification model, the weights of the loss function are set and a cross-entropy loss function is adopted, namely $L = -\sum_i w_i\,y_i \log p_i$, where $y_i$ is the one-hot label of option $i$, $p_i$ is its predicted probability, and $w_i$ is the set weight.
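As a hedged illustration only (the patent does not give the weight values), such a weighted cross-entropy can be written in PyTorch as:

```python
import torch
import torch.nn.functional as F

weights = torch.tensor([1.0, 1.0, 1.0, 1.0])  # hypothetical per-option weights
logits = torch.randn(8, 4)                    # a batch of 8 questions, 4 options each
gold = torch.randint(0, 4, (8,))              # indices of the correct options
loss = F.cross_entropy(logits, gold, weight=weights)
```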

The invention has the beneficial effects that:

1. In the task of selecting evidence sentences, the invention takes the important relations between words and between sentences into account, and can capture important relations such as those between pronouns and sentences. Word-level multi-stage maximized attention emphasizes the important relations between words and fuses the information from the multi-stage maximized attention output with the initial input by applying a bi-LSTM; sentence-level multi-stage maximized attention captures the more important relations between sentences and obtains a sentence feature representation containing the important inter-sentence relations by applying a bi-LSTM and fusing the multi-stage maximized attention output through a residual connection.

2. It mirrors how humans naturally solve complex multiple-choice reading comprehension: the first step is to find the relevant sentences and grasp the general idea based on the question and answer options; one then infers the answer from these evidence sentences, combining the question and answer options. The deep model is structured to compute a similarity score between each sentence and the question based on a bilinear function. After a similarity score is calculated for each sentence, the T sentences with the highest scores are combined into a new content selection from which to infer the answer.

3. The question word vectors and the answer option word vectors are combined, and a fused feature representation of the question and answer options is obtained through self-attention over the words in the sentence. At the same time, the new content feature representation is combined with sentence-level hierarchical attention to determine the importance of each sentence conditioned on the question and answer options, and the final option output is obtained by judgment over the resulting scores. The experimental results confirm that the answers to reading comprehension questions can be inferred more accurately.

In summary, the multiple-choice machine reading comprehension method based on multi-stage maximized attention first captures the important relations between words and between sentences using multi-stage maximized attention, then extracts evidence sentences and combines them with the question and answer options to form new content, and finally fuses the new content, the question and the answers by sentence-level hierarchical attention and feeds the result into a Softmax classifier to obtain the final option. The final model improves the accuracy of multiple-choice reading comprehension.

Drawings

FIG. 1 is a block diagram of the model, which improves multiple-choice reading comprehension by combining multi-stage maximized attention with a question-aware evidence sentence matching mechanism;

FIG. 2 is a diagram of the multi-stage maximized attention module of the present invention.

Detailed Description

Example 1: as shown in FIGS. 1-2, a multiple-choice machine reading comprehension method based on multi-stage maximized attention comprises the following steps:

Step1, collecting articles, questions and answer options as experimental data, preprocessing them, and generating content word vectors, question word vectors and answer option word vectors using a pre-trained language model as the content encoder;

Step2, capturing word-level and sentence-level relations: after the Step1 preprocessing, capturing the important relations between words and between sentences using multi-stage maximized attention at the word level and the sentence level respectively, to obtain a content-dependent sentence feature representation;

Step3, extracting evidence sentences: using the sentence feature representation obtained at Step2, extracting evidence sentences in combination with the question vectors and the answer option vectors to form a new content feature representation from which to infer the answer to the question;

Step4, post-processing: combining the new content feature representation obtained at Step3 with the question vector and answer option vector from Step1 to output a score for each option of the question and determine the final answer.

As a further scheme of the invention, Step1 comprises the following specific steps:

Step1.1, first collecting articles, questions and answer options from the junior-high-school and senior-high-school question sets of a public dataset website;

Step1.2, performing tokenization and sentence-splitting preprocessing on the articles, questions and answer options in the dataset;

Step1.3, training and encoding the preprocessed data in the word-vector training mode of a pre-trained language model to obtain the content word vectors $H_D$, question word vectors $H_Q$ and answer option word vectors $H_A$.

As a further scheme of the invention, Step2 comprises the following specific steps:

Step2.1, after data preprocessing and encoding, computing from the content word vectors, via a bilinear function, a soft attention alignment matrix $M_D$ between any two words, each element of which represents the relation between two words;

Step2.2, feeding the resulting soft attention alignment matrix $M_D$ into the first maximized-attention part of the deep model; to capture the important relation between any two words, the important-relation matrix obtained by column-based max pooling is fed into a Softmax layer to obtain the inter-word attention vector matrix $G_D$;

Step2.3, to weight the relations by their importance under content awareness, applying the attention vector matrix $G_D$ to the content word vectors $H_D$, finally obtaining a content vector $\tilde{H}_D$ that captures the word-level relations;

Step2.4, repeating Step2.1 to Step2.3 k times to obtain a content vector $\tilde{H}_D$ that captures the more important word-level relations;

Step2.5, from the content vector $\tilde{H}_D$ obtained at Step2.4, deriving the sentence feature representation vectors $S_D$ by self-attention over the words in each sentence, and then obtaining, via a bilinear function, a soft attention alignment matrix $M_S$ between any two sentences;

Step2.6, using the calculation procedure of Step2.2 to Step2.4, obtaining the inter-sentence attention vector matrix $G_S$ by column-based max pooling combined with the Softmax layer;

Step2.7, multiplying the inter-sentence attention vector matrix $G_S$ with the sentence feature representation $S_D$ obtained at Step2.5 to obtain the sentence content feature representation $\hat{S}_D$, and repeating Step2.5 to Step2.7 k times to obtain a sentence content feature representation that captures the more important relations.

As a further scheme of the invention, Step3 comprises the following specific steps:

Step3.1, finding the evidence sentences according to the question vector and the answer option vectors, then inferring the answer from the evidence sentences in combination with the question vector and the answer option vectors;

Step3.2, taking the sentence content feature representation obtained at Step2 as the center, computing the matching scores between the question vector and each sentence representation and between the answer option vector and each sentence representation respectively, adding the two matching scores to obtain a sentence score vector, and finally combining the T sentences with the highest scores into a new content feature representation.

As a further scheme of the invention, Step4 comprises the following specific steps:

Step4.1, concatenating the question word vectors $H_Q$ and the answer option word vectors $H_A$ to obtain $H_{QA}$, and deriving a vector $S_{QA}$ from $H_{QA}$ by self-attention over the words in the sentence;

Step4.2, determining the importance of each sentence conditioned on the question and answer options using sentence-level hierarchical attention (SHA), finally obtaining the content vector $H'$ under question-and-option awareness;

Step4.3, performing multi-class classification on the content vector $H'$ obtained at Step4.2 with a softmax function, and, on the basis of the classification model, adopting a cross-entropy loss function to finally obtain the selection result, i.e. the correct answer.

Further, the pre-trained language model is based on a pre-trained BERT network. The bilinear function that computes the soft attention alignment matrix $M_D$ between any two words is given by $M_D = f(U_D)\,f(V_D)^T$, where $U_D$ and $V_D$ are k-dimensional word vectors and $f$ is a linear function. The important-relation matrix obtained by column-based max pooling is $G = \mathrm{softmax}(\max_{col} M_D)$. Applying the attention vector $G$ to the content vector yields the content vector of word-level relations, $\tilde{H}_D = G \odot H_D$, where $\odot$ denotes the element-wise product and $H_D$ is the content word vector. The sentence feature representation obtained by self-attention is $\alpha_i = \mathrm{softmax}(f(\tilde{H}_D^i))$ and $S_i = \sum_j \alpha_{ij}\,\tilde{h}_j^i$, where $\alpha_{ij}$ denotes the $j$-th element of $\alpha_i$, $\tilde{H}_D^i$ denotes the content word vectors of the $i$-th sentence, $\tilde{h}_j^i$ denotes the $j$-th word vector of the $i$-th sentence, $S_i$ denotes the $i$-th element of the sentence feature representation $S_D$, and $f$ is a linear function. After the multi-stage maximized-attention content vector is obtained, the matching scores between the sentence vectors and the question vector are computed to select and retain the T important evidence sentences. The content vector under the question and answer options is then computed by sentence-level attention as $H' = \sum_i \gamma_i\,\hat{S}_i$, where $\gamma_i$ represents the weight that the $i$-th sentence occupies in the content.

Further, during evidence sentence extraction it was found that extracting only the two sentences most related to the question is in practice sometimes insufficient, because some pronouns and semantics in the sentences cannot be resolved well enough to deduce the answer. Therefore, to infer answers from evidence sentences, more sentences are extracted so that the answer can be inferred with fuller expression and semantics. Compared with traditional methods, the method achieves the best results, with reading comprehension accuracies of 70.2% on the junior-high-school (RACE-M) set and 65.4% on the senior-high-school (RACE-H) set; the improvement is especially pronounced on these junior- and senior-high-school reading comprehension sets.

The results of the invention and of other reading comprehension methods are shown in Table 1, which reports the experimental results on the RACE-M and RACE-H datasets. BERT+MLP denotes the pre-trained language model BERT combined with a multi-layer perceptron, and BERT+HA denotes BERT combined with hierarchical attention. As Table 1 shows, the prediction accuracy of the invention is high, about 4% above the traditional recognition rate.

Table 1 compares the effect of the present invention with that of other reading comprehension methods.

In the Softmax layer, on the basis of the classification model, the weights of the loss function are set and a cross-entropy loss function is adopted, namely $L = -\sum_i w_i\,y_i \log p_i$, where $y_i$ is the one-hot label of option $i$, $p_i$ is its predicted probability, and $w_i$ is the set weight.

In the invention, the overall structure of the model, which combines multi-stage maximized attention with question-aware evidence sentence matching, is shown in FIG. 1; the way multi-stage maximized attention captures the important relations between words and sentences is shown in FIG. 2. For the input question and answer options, after preliminary encoding, evidence sentences are extracted by computing similarity scores between them and the sentences. Taking the evidence sentences extracted in the previous step as the center, the evidence sentences, the question and the answer options are finally fused by a hierarchical attention mechanism to obtain the final answer.

While the present invention has been described in detail with reference to the embodiments shown in the drawings, the invention is not limited to those embodiments, and various changes may be made within the knowledge of those skilled in the art without departing from the spirit of the invention.
