Construction method of a semi-supervised general neural machine translation model

Document No. 1567774, published 2020-01-24.

Note: this technology, "Construction method of a semi-supervised general neural machine translation model," was devised by Chen Weihua (陈巍华) on 2019-08-28. Abstract: The invention provides a method for constructing a semi-supervised general neural machine translation model, comprising the following steps: step (1), determining a plurality of monolingual source corpora, a plurality of monolingual target corpora and a plurality of parallel bilingual corpora as training data; step (2), constructing a first network structure from an encoder module and a classifier module, and training the first network structure on the monolingual source corpora; step (3), constructing a second network structure from a decoder module and a classifier module, and training the second network structure on the monolingual target corpora; step (4), reconstructing a new encoder-decoder framework from the trained first network structure and the trained second network structure, and training it on the parallel bilingual corpora, thereby constructing the general neural machine translation model.

1. A method for constructing a semi-supervised general neural machine translation model, characterized by comprising the following steps:

step (1), determining a plurality of monolingual source corpora, a plurality of monolingual target corpora and a plurality of parallel bilingual corpora as training data;

step (2), constructing a first network structure from an encoder module and a classifier module, and training the first network structure on the plurality of monolingual source corpora;

step (3), constructing a second network structure from a decoder module and a classifier module, and training the second network structure on the plurality of monolingual target corpora;

step (4), reconstructing a new encoder-decoder framework from the trained first network structure and the trained second network structure, and training the new encoder-decoder framework on the plurality of parallel bilingual corpora, thereby constructing the general neural machine translation model.

2. The method for constructing a semi-supervised general neural machine translation model according to claim 1, wherein:

in step (2), constructing the first network structure from an encoder module and a classifier module and training the first network structure on the plurality of monolingual source corpora specifically comprises,

step (201), extracting the encoder module from an original encoder-decoder framework and combining it with the classifier module to construct the first network structure;

step (202), processing the plurality of monolingual source corpora with the sub-word BPE (byte pair encoding) technique to convert them into new monolingual source corpora;

step (203), performing word-prediction training on the first network structure using the new monolingual source corpora.

3. The method for constructing a semi-supervised general neural machine translation model according to claim 2, wherein:

in step (201), extracting the encoder module from the original encoder-decoder framework and combining it with the classifier module to construct the first network structure specifically comprises,

step (2011), determining the separability attribute of the encoder module within the original encoder-decoder framework;

step (2012), if the separability attribute indicates that the encoder module is separable, extracting the encoder module directly from the original encoder-decoder framework; if the separability attribute indicates that the encoder module is not separable, performing functional-module segmentation on the original encoder-decoder framework and then extracting the encoder module from it;

step (2013), connecting the output of the extracted encoder module to the input of the classifier module, thereby constructing the first network structure.

4. The method for constructing a semi-supervised general neural machine translation model according to claim 2, wherein:

in step (202), converting the plurality of monolingual source corpora into new monolingual source corpora specifically comprises,

step (2021), performing first random mask processing on the plurality of monolingual source corpora, randomly masking 10%-15% of the tokens of a corpus and randomly masking 40%-50% of consecutive tokens, so as to obtain a plurality of masked monolingual source corpora;

step (2022), performing first record-and-locate processing on all words and/or phrases in the masked monolingual source corpora, so as to determine the corpus-segment position information corresponding to the masked monolingual source corpora;

step (2023), judging the validity of the corpus-segment position information, and taking the masked monolingual source corpora corresponding to valid corpus-segment position information as the new monolingual source corpora.

5. The method for constructing a semi-supervised general neural machine translation model according to claim 2, wherein:

in step (203), performing word-prediction training on the first network structure using the new monolingual source corpora specifically comprises,

step (2031), performing a first cycle of training on the first network structure with all the masked monolingual source corpora among the new monolingual source corpora, so as to perform first prediction processing on the masked words and/or phrases;

step (2032), extracting at least one first prediction result from the first prediction processing, and performing first word-match judgment on the at least one first prediction result;

step (2033), if the result of the first word-match judgment indicates that the at least one first prediction result matches the corresponding monolingual source corpus before random masking, the first cycle of training is complete; otherwise, the first cycle of training continues until the two match.

6. The method for constructing a semi-supervised general neural machine translation model according to claim 1, wherein:

in step (3), constructing the second network structure from a decoder module and a classifier module and training the second network structure on the plurality of monolingual target corpora specifically comprises,

step (301), extracting the decoder module from an original encoder-decoder framework and combining it with the classifier module to construct the second network structure;

step (302), converting the plurality of monolingual target corpora into new monolingual target corpora;

step (303), performing word-prediction training on the second network structure using the new monolingual target corpora.

7. The method for constructing a semi-supervised general neural machine translation model according to claim 6, wherein:

in step (301), extracting the decoder module from the original encoder-decoder framework and combining it with the classifier module to construct the second network structure specifically comprises,

step (3011), determining the separability attribute of the decoder module within the original encoder-decoder framework;

step (3012), if the separability attribute indicates that the decoder module is separable, extracting the decoder module directly from the original encoder-decoder framework; if the separability attribute indicates that the decoder module is not separable, performing functional-module segmentation on the original encoder-decoder framework and then extracting the decoder module from it;

step (3013), connecting the output of the extracted decoder module to the input of the classifier module, thereby constructing the second network structure.

8. The method for constructing a semi-supervised general neural machine translation model according to claim 6, wherein:

in step (302), converting the plurality of monolingual target corpora into new monolingual target corpora specifically comprises,

step (3021), performing second random mask processing on the plurality of monolingual target corpora, randomly masking 10%-15% of the tokens of a corpus and randomly masking 40%-50% of consecutive tokens, so as to obtain a plurality of masked monolingual target corpora;

step (3022), performing second record-and-locate processing on all words and/or phrases in the masked monolingual target corpora, so as to determine the corpus-segment position information corresponding to the masked monolingual target corpora;

step (3023), judging the validity of the corpus-segment position information, and taking the masked monolingual target corpora corresponding to valid corpus-segment position information as the new monolingual target corpora.

9. The method for constructing a semi-supervised general neural machine translation model according to claim 6, wherein:

in step (303), performing word-prediction training on the second network structure using the new monolingual target corpora specifically comprises,

step (3031), performing a second cycle of training on the second network structure with all the masked monolingual target corpora among the new monolingual target corpora, so as to perform second prediction processing on the masked words and/or phrases;

step (3032), extracting at least one second prediction result from the second prediction processing, and performing second word-match judgment on the at least one second prediction result;

step (3033), if the result of the second word-match judgment indicates that the at least one second prediction result matches the corresponding monolingual target corpus before random masking, the second cycle of training is complete; otherwise, the second cycle of training continues until the two match.

10. The method for constructing a semi-supervised general neural machine translation model according to claim 1, wherein:

in step (4), reconstructing the new encoder-decoder framework from the trained first network structure and the trained second network structure and training the new encoder-decoder framework on the plurality of parallel bilingual corpora to construct the general neural machine translation model specifically comprises,

step (401), acquiring the encoder parameters of the encoder module in the trained first network structure and the decoder parameters of the decoder module in the trained second network structure;

step (402), migrating the encoder parameters and the decoder parameters into an original encoder-decoder framework;

step (403), training the original encoder-decoder framework on the plurality of parallel bilingual corpora in a fine-tuning mode, thereby constructing the general neural machine translation model.

Technical Field

The invention relates to the technical field of neural networks, and in particular to a method for constructing a semi-supervised general neural machine translation model.

Background

At present, neural networks are widely applied in the field of machine translation, and existing general neural machine translation systems all adopt an end-to-end encoder-decoder framework. Under normal circumstances, the machine translation model is trained in a supervised manner on a large number of parallel bilingual corpora; monolingual corpora are used only indirectly, by first forging pseudo-parallel bilingual corpora through data augmentation and then adding them to the training data. In practice, a general neural machine translation system therefore needs a large amount of parallel bilingual corpora for training, and producing parallel bilingual corpora involves a large amount of manual annotation. At the same time, such a system makes no effective use of monolingual corpora during training, even though monolingual corpora are easy to obtain without any manual annotation. In addition, pseudo-parallel bilingual corpora forged through data augmentation are usually noisy, which degrades the machine translation quality. Therefore, a model construction method that can train a neural machine translation model by making full use of monolingual corpora is urgently needed in the prior art.

Disclosure of Invention

Aiming at the defects in the prior art, the invention provides a method for constructing a semi-supervised general neural machine translation model, which comprises the following steps: step (1), determining a plurality of monolingual source corpora, a plurality of monolingual target corpora and a plurality of parallel bilingual corpora as training data; step (2), constructing a first network structure from an encoder module and a classifier module, and training the first network structure on the monolingual source corpora; step (3), constructing a second network structure from a decoder module and a classifier module, and training the second network structure on the monolingual target corpora; step (4), reconstructing a new encoder-decoder framework from the trained first network structure and the trained second network structure, and training it on the parallel bilingual corpora, thereby constructing the general neural machine translation model. Unlike prior-art methods that train a neural machine translation model only on parallel bilingual corpora or on forged pseudo-parallel bilingual corpora, this construction method trains the model directly on a large amount of monolingual corpora and fine-tunes it with a small amount of parallel bilingual corpora. The large amount of tedious manual annotation required to prepare massive parallel bilingual training data is thus avoided, and a translation quality comparable to training on a large amount of parallel bilingual corpora is reached with only a small amount of it, effectively reducing the up-front workload of model training and improving the model's translation accuracy. In addition, the construction method is particularly suitable for low-resource languages, for which parallel bilingual corpora are difficult to obtain while monolingual corpora are comparatively easy to collect.

The invention provides a method for constructing a semi-supervised general neural machine translation model, which is characterized by comprising the following steps:

step (1), determining a plurality of monolingual source corpora, a plurality of monolingual target corpora and a plurality of parallel bilingual corpora as training data;

step (2), constructing a first network structure from an encoder module and a classifier module, and training the first network structure on the plurality of monolingual source corpora;

step (3), constructing a second network structure from a decoder module and a classifier module, and training the second network structure on the plurality of monolingual target corpora;

step (4), reconstructing a new encoder-decoder framework from the trained first network structure and the trained second network structure, and training the new encoder-decoder framework on the plurality of parallel bilingual corpora, thereby constructing the general neural machine translation model;

further, in step (2), constructing the first network structure from an encoder module and a classifier module and training the first network structure on the plurality of monolingual source corpora specifically comprises,

step (201), extracting the encoder module from an original encoder-decoder framework and combining it with the classifier module to construct the first network structure;

step (202), processing the plurality of monolingual source corpora with the sub-word BPE technique to convert them into new monolingual source corpora;

step (203), performing word-prediction training on the first network structure using the new monolingual source corpora;

further, in step (201), extracting the encoder module from the original encoder-decoder framework and combining it with the classifier module to construct the first network structure specifically comprises,

step (2011), determining the separability attribute of the encoder module within the original encoder-decoder framework;

step (2012), if the separability attribute indicates that the encoder module is separable, extracting the encoder module directly from the original encoder-decoder framework; if the separability attribute indicates that the encoder module is not separable, performing functional-module segmentation on the original encoder-decoder framework and then extracting the encoder module from it;

step (2013), connecting the output of the extracted encoder module to the input of the classifier module, thereby constructing the first network structure;

further, in step (202), converting the plurality of monolingual source corpora into new monolingual source corpora specifically comprises,

step (2021), performing first random mask processing on the plurality of monolingual source corpora, randomly masking 10%-15% of the tokens of a corpus and randomly masking 40%-50% of consecutive tokens, so as to obtain a plurality of masked monolingual source corpora;

step (2022), performing first record-and-locate processing on all words and/or phrases in the masked monolingual source corpora, so as to determine the corpus-segment position information corresponding to the masked monolingual source corpora;

step (2023), judging the validity of the corpus-segment position information, and taking the masked monolingual source corpora corresponding to valid corpus-segment position information as the new monolingual source corpora;

further, in step (203), performing word-prediction training on the first network structure using the new monolingual source corpora specifically comprises,

step (2031), performing a first cycle of training on the first network structure with all the masked monolingual source corpora among the new monolingual source corpora, so as to perform first prediction processing on the masked words and/or phrases;

step (2032), extracting at least one first prediction result from the first prediction processing, and performing first word-match judgment on the at least one first prediction result;

step (2033), if the result of the first word-match judgment indicates that the at least one first prediction result matches the corresponding monolingual source corpus before random masking, the first cycle of training is complete; otherwise, the first cycle of training continues until the two match;

further, in step (3), constructing the second network structure from a decoder module and a classifier module and training the second network structure on the plurality of monolingual target corpora specifically comprises,

step (301), extracting the decoder module from an original encoder-decoder framework and combining it with the classifier module to construct the second network structure;

step (302), converting the plurality of monolingual target corpora into new monolingual target corpora;

step (303), performing word-prediction training on the second network structure using the new monolingual target corpora;

further, in step (301), extracting the decoder module from the original encoder-decoder framework and combining it with the classifier module to construct the second network structure specifically comprises,

step (3011), determining the separability attribute of the decoder module within the original encoder-decoder framework;

step (3012), if the separability attribute indicates that the decoder module is separable, extracting the decoder module directly from the original encoder-decoder framework; if the separability attribute indicates that the decoder module is not separable, performing functional-module segmentation on the original encoder-decoder framework and then extracting the decoder module from it;

step (3013), connecting the output of the extracted decoder module to the input of the classifier module, thereby constructing the second network structure;

further, in step (302), converting the plurality of monolingual target corpora into new monolingual target corpora specifically comprises,

step (3021), performing second random mask processing on the plurality of monolingual target corpora, randomly masking 10%-15% of the tokens of a corpus and randomly masking 40%-50% of consecutive tokens, so as to obtain a plurality of masked monolingual target corpora;

step (3022), performing second record-and-locate processing on all words and/or phrases in the masked monolingual target corpora, so as to determine the corpus-segment position information corresponding to the masked monolingual target corpora;

step (3023), judging the validity of the corpus-segment position information, and taking the masked monolingual target corpora corresponding to valid corpus-segment position information as the new monolingual target corpora;

further, in step (303), performing word-prediction training on the second network structure using the new monolingual target corpora specifically comprises,

step (3031), performing a second cycle of training on the second network structure with all the masked monolingual target corpora among the new monolingual target corpora, so as to perform second prediction processing on the masked words and/or phrases;

step (3032), extracting at least one second prediction result from the second prediction processing, and performing second word-match judgment on the at least one second prediction result;

step (3033), if the result of the second word-match judgment indicates that the at least one second prediction result matches the corresponding monolingual target corpus before random masking, the second cycle of training is complete; otherwise, the second cycle of training continues until the two match;

further, in step (4), reconstructing the new encoder-decoder framework from the trained first network structure and the trained second network structure and training the new encoder-decoder framework on the plurality of parallel bilingual corpora to construct the general neural machine translation model specifically comprises,

step (401), acquiring the encoder parameters of the encoder module in the trained first network structure and the decoder parameters of the decoder module in the trained second network structure;

step (402), migrating the encoder parameters and the decoder parameters into an original encoder-decoder framework;

step (403), training the original encoder-decoder framework on the plurality of parallel bilingual corpora in a fine-tuning mode, thereby constructing the general neural machine translation model.

Compared with the prior art, this construction method of a semi-supervised general neural machine translation model differs from prior-art methods that train a neural machine translation model only on parallel bilingual corpora or on forged pseudo-parallel bilingual corpora: it trains the model directly on a large amount of monolingual corpora and fine-tunes it with a small amount of parallel bilingual corpora. The large amount of tedious manual annotation required to prepare massive parallel bilingual training data is thus avoided, and a translation quality comparable to training on a large amount of parallel bilingual corpora is reached with only a small amount of it, effectively reducing the up-front workload of training the neural machine translation model and improving its translation accuracy. In addition, the construction method is particularly suitable for low-resource languages, for which parallel bilingual corpora are difficult to obtain while monolingual corpora are comparatively easy to collect.

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.

The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.

Drawings

In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flow chart of a method for constructing a semi-supervised general neural machine translation model provided by the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.

Fig. 1 is a schematic flow chart of a method for constructing a semi-supervised general neural machine translation model according to an embodiment of the present invention. The construction method of the semi-supervised general neural machine translation model comprises the following steps:

Step (1), determining a plurality of monolingual source corpora, a plurality of monolingual target corpora and a plurality of parallel bilingual corpora as training data.
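By way of a non-limiting illustration, the three kinds of training data could be organized as follows. The file names are hypothetical and merely assume one tokenized sentence per line, with the two parallel files aligned line by line; any comparable corpora would serve.

```python
# Hypothetical file names; each file holds one tokenized sentence per line.
def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

monolingual_source = read_lines("mono.src.txt")   # large, no annotation needed
monolingual_target = read_lines("mono.tgt.txt")   # large, no annotation needed
parallel_bilingual = list(zip(read_lines("parallel.src.txt"),
                              read_lines("parallel.tgt.txt")))  # small, aligned line by line
```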

Step (2), constructing a first network structure from the encoder module and the classifier module, and training the first network structure on the plurality of monolingual source corpora.

Preferably, in step (2), constructing the first network structure from the encoder module and the classifier module and training the first network structure on the plurality of monolingual source corpora specifically comprises,

step (201), extracting the encoder module from an original encoder-decoder framework and combining it with the classifier module to construct the first network structure;

step (202), processing the plurality of monolingual source corpora with the sub-word BPE technique to convert them into new monolingual source corpora (a sketch of this sub-word step is given after these sub-steps);

step (203), performing word-prediction training on the first network structure using the new monolingual source corpora.
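The embodiment does not fix a particular BPE implementation (in practice a library such as subword-nmt is commonly used). As a self-contained illustration of the sub-word idea behind step (202), the following minimal sketch learns BPE merge operations from a toy corpus; the corpus and merge count are illustrative only.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count frequencies of adjacent symbol pairs across the vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every occurrence of the given symbol pair in the vocabulary."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(corpus, num_merges=10):
    """Learn BPE merge operations from a list of sentences."""
    words = Counter(w for line in corpus for w in line.strip().split())
    vocab = {' '.join(word) + ' </w>': freq for word, freq in words.items()}
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges

corpus = ["the quick brown fox", "the quiet quilt"]
print(learn_bpe(corpus, num_merges=5))
```

The learned merges are then applied to split every sentence of the monolingual source corpora into sub-word tokens, shrinking the vocabulary and making rare words representable.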

Preferably, in step (201), extracting the encoder module from the original encoder-decoder framework and combining it with the classifier module to construct the first network structure specifically comprises,

step (2011), determining the separability attribute of the encoder module within the original encoder-decoder framework;

step (2012), if the separability attribute indicates that the encoder module is separable, extracting the encoder module directly from the original encoder-decoder framework; if the separability attribute indicates that the encoder module is not separable, performing functional-module segmentation on the original encoder-decoder framework and then extracting the encoder module from it;

step (2013), connecting the output of the extracted encoder module to the input of the classifier module, thereby constructing the first network structure, as sketched below.
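As one possible embodiment of steps (2011)-(2013), the sketch below assumes a standard Transformer implementation (PyTorch here, purely for illustration; all sizes are arbitrary, and positional encodings are omitted for brevity). In this setting the encoder is already a separable module, so it is taken directly from the framework and its output is connected to a token-level classifier that will predict the masked words.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, D_MODEL = 8000, 512  # illustrative sizes, not mandated by the method

class EncoderClassifier(nn.Module):
    """First network structure: extracted encoder + classifier module."""
    def __init__(self, encoder, vocab_size, d_model):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = encoder                             # module extracted from the framework
        self.classifier = nn.Linear(d_model, vocab_size)   # classifier connected to the encoder output

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.classifier(hidden)                     # (batch, seq_len, vocab_size) logits

# In torch.nn the encoder is a separable module, so it can be taken directly.
enc_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
first_network = EncoderClassifier(encoder, VOCAB_SIZE, D_MODEL)

logits = first_network(torch.randint(0, VOCAB_SIZE, (2, 10)))  # toy batch of token ids
print(logits.shape)  # torch.Size([2, 10, 8000])
```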

Preferably, in step (202), converting the plurality of monolingual source corpora into new monolingual source corpora specifically comprises,

step (2021), performing first random mask processing on the plurality of monolingual source corpora, randomly masking 10%-15% of the tokens of a corpus and randomly masking 40%-50% of consecutive tokens, so as to obtain a plurality of masked monolingual source corpora;

step (2022), performing first record-and-locate processing on all words and/or phrases in the masked monolingual source corpora, so as to determine the corpus-segment position information corresponding to the masked monolingual source corpora;

step (2023), judging the validity of the corpus-segment position information, and taking the masked monolingual source corpora corresponding to valid corpus-segment position information as the new monolingual source corpora; a sketch of the masking and the validity check follows below.
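A minimal sketch of steps (2021)-(2023), assuming whitespace-tokenized sentences. The method fixes only the 10%-15% and 40%-50% ranges, so the concrete rates, span lengths and the single-token/consecutive-span split below are assumptions.

```python
import random

MASK = "<mask>"
TOKEN_RATE = 0.15   # mask 10%-15% of the tokens (upper end chosen here)
SPAN_RATE = 0.5     # roughly 40%-50% of masking done as consecutive spans (assumption)

def mask_sentence(tokens, rng=random):
    """Randomly mask one tokenized sentence and record the masked positions
    (the record-and-locate processing of step (2022))."""
    masked = list(tokens)
    budget = max(1, round(len(tokens) * TOKEN_RATE))
    positions = []
    while budget > 0:
        start = rng.randrange(len(tokens))
        # With probability SPAN_RATE mask a short consecutive span, else a single token.
        length = min(budget, rng.randint(2, 3)) if rng.random() < SPAN_RATE else 1
        for i in range(start, min(start + length, len(tokens))):
            if masked[i] != MASK:
                masked[i] = MASK
                positions.append(i)
                budget -= 1
    return masked, sorted(positions)

def positions_valid(positions, sentence_length):
    """Validity judgment of step (2023): positions must be unique and in range."""
    return (len(set(positions)) == len(positions)
            and all(0 <= p < sentence_length for p in positions))

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, pos = mask_sentence(tokens, random.Random(0))
print(masked, pos, positions_valid(pos, len(tokens)))
```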

Preferably, in step (203), performing word-prediction training on the first network structure using the new monolingual source corpora specifically comprises,

step (2031), performing a first cycle of training on the first network structure with all the masked monolingual source corpora among the new monolingual source corpora, so as to perform first prediction processing on the masked words and/or phrases;

step (2032), extracting at least one first prediction result from the first prediction processing, and performing first word-match judgment on the at least one first prediction result;

step (2033), if the result of the first word-match judgment indicates that the at least one first prediction result matches the corresponding monolingual source corpus before random masking, the first cycle of training is complete; otherwise, the first cycle of training continues until the two match; a sketch of this training loop follows below.
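A condensed sketch of the first cycle of training, steps (2031)-(2033). It assumes a model of the EncoderClassifier kind sketched above and batches that pair each masked sequence with its original; the stop criterion is the word-match judgment of step (2033), with a maximum epoch count added as a practical safeguard against non-convergence.

```python
import torch
import torch.nn as nn

def train_first_cycle(model, batches, mask_id, max_epochs=50, lr=1e-4):
    """Predict masked tokens and stop once every prediction matches the
    corpus before masking (the first word-match judgment)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        all_match = True
        for masked_ids, original_ids in batches:   # LongTensors of shape (batch, seq_len)
            logits = model(masked_ids)             # (batch, seq_len, vocab_size)
            at_mask = masked_ids.eq(mask_id)       # boolean mask of masked positions
            loss = loss_fn(logits[at_mask], original_ids[at_mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # First prediction processing + word-match judgment.
            preds = logits.argmax(dim=-1)
            if not torch.equal(preds[at_mask], original_ids[at_mask]):
                all_match = False
        if all_match:   # step (2033): training completes when the two match
            return model
    return model
```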

Step (3), constructing a second network structure from the decoder module and the classifier module, and training the second network structure on the plurality of monolingual target corpora.

Preferably, in step (3), constructing the second network structure from the decoder module and the classifier module and training the second network structure on the plurality of monolingual target corpora specifically comprises,

step (301), extracting the decoder module from the original encoder-decoder framework and combining it with the classifier module to construct the second network structure;

step (302), converting the plurality of monolingual target corpora into new monolingual target corpora;

step (303), performing word-prediction training on the second network structure using the new monolingual target corpora.

Preferably, in step (301), extracting the decoder module from the original encoder-decoder framework and combining it with the classifier module to construct the second network structure specifically comprises,

step (3011), determining the separability attribute of the decoder module within the original encoder-decoder framework;

step (3012), if the separability attribute indicates that the decoder module is separable, extracting the decoder module directly from the original encoder-decoder framework; if the separability attribute indicates that the decoder module is not separable, performing functional-module segmentation on the original encoder-decoder framework and then extracting the decoder module from it;

step (3013), connecting the output of the extracted decoder module to the input of the classifier module, thereby constructing the second network structure, as sketched below.
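Mirroring the encoder case, a sketch of steps (3011)-(3013) under the same illustrative PyTorch assumptions. One point the embodiment leaves open is what the decoder attends to during monolingual pretraining, since there is no source sentence; the sketch assumes a zero placeholder in place of the encoder output.

```python
import torch
import torch.nn as nn

class DecoderClassifier(nn.Module):
    """Second network structure: extracted decoder + classifier module."""
    def __init__(self, decoder, vocab_size, d_model):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = decoder                             # module extracted from the framework
        self.classifier = nn.Linear(d_model, vocab_size)
        self.d_model = d_model

    def forward(self, token_ids):
        x = self.embed(token_ids)
        # No source sentence during monolingual pretraining: feed a zero
        # placeholder where the encoder output (memory) would normally go.
        memory = torch.zeros(x.size(0), 1, self.d_model)
        return self.classifier(self.decoder(x, memory))

dec_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(dec_layer, num_layers=6)
second_network = DecoderClassifier(decoder, vocab_size=8000, d_model=512)
print(second_network(torch.randint(0, 8000, (2, 10))).shape)  # torch.Size([2, 10, 8000])
```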

Preferably, in step (302), converting the plurality of monolingual target corpora into new monolingual target corpora specifically comprises,

step (3021), performing second random mask processing on the plurality of monolingual target corpora, randomly masking 10%-15% of the tokens of a corpus and randomly masking 40%-50% of consecutive tokens, so as to obtain a plurality of masked monolingual target corpora;

step (3022), performing second record-and-locate processing on all words and/or phrases in the masked monolingual target corpora, so as to determine the corpus-segment position information corresponding to the masked monolingual target corpora;

step (3023), judging the validity of the corpus-segment position information, and taking the masked monolingual target corpora corresponding to valid corpus-segment position information as the new monolingual target corpora; the masking and validity check mirror the sketch given for step (202).

Preferably, in step (303), performing word-prediction training on the second network structure using the new monolingual target corpora specifically comprises,

step (3031), performing a second cycle of training on the second network structure with all the masked monolingual target corpora among the new monolingual target corpora, so as to perform second prediction processing on the masked words and/or phrases;

step (3032), extracting at least one second prediction result from the second prediction processing, and performing second word-match judgment on the at least one second prediction result;

step (3033), if the result of the second word-match judgment indicates that the at least one second prediction result matches the corresponding monolingual target corpus before random masking, the second cycle of training is complete; otherwise, the second cycle of training continues until the two match; the training loop is the same as the sketch given for step (203), applied to the second network structure.

Step (4), reconstructing a new encoder-decoder framework from the trained first network structure and the trained second network structure, and training the new encoder-decoder framework on the plurality of parallel bilingual corpora, thereby constructing the general neural machine translation model.

Preferably, in step (4), reconstructing the new encoder-decoder framework from the trained first network structure and the trained second network structure and training the new encoder-decoder framework on the plurality of parallel bilingual corpora to construct the general neural machine translation model specifically comprises,

step (401), acquiring the encoder parameters of the encoder module in the trained first network structure and the decoder parameters of the decoder module in the trained second network structure;

step (402), migrating the encoder parameters and the decoder parameters into the original encoder-decoder framework;

step (403), training the original encoder-decoder framework on the plurality of parallel bilingual corpora in a fine-tuning mode, thereby constructing the general neural machine translation model; a sketch of the parameter migration and fine-tuning follows below.
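A sketch of steps (401)-(403) under the same illustrative assumptions: the translation model is presumed to expose its encoder and decoder as attributes matching the pretrained modules and to map (source ids, shifted target ids) to vocabulary logits. load_state_dict performs the parameter migration, and a small learning rate realizes the fine-tuning mode.

```python
import torch
import torch.nn as nn

def transfer_and_finetune(first_network, second_network, nmt_model,
                          parallel_batches, epochs=3, lr=5e-5):
    """Migrate the pretrained encoder/decoder parameters into the original
    encoder-decoder framework, then fine-tune on parallel bilingual data."""
    # Steps (401)-(402): acquire and migrate the pretrained parameters.
    nmt_model.encoder.load_state_dict(first_network.encoder.state_dict())
    nmt_model.decoder.load_state_dict(second_network.decoder.state_dict())
    # Step (403): fine-tune with a small learning rate on the parallel corpora.
    optimizer = torch.optim.Adam(nmt_model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for src_ids, tgt_in, tgt_out in parallel_batches:
            logits = nmt_model(src_ids, tgt_in)   # (batch, seq_len, vocab_size)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return nmt_model
```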

It can be seen from the above embodiments that this construction method of a semi-supervised general neural machine translation model differs from prior-art methods that train a neural machine translation model only on parallel bilingual corpora or on forged pseudo-parallel bilingual corpora: it trains the model directly on a large amount of monolingual corpora and fine-tunes it with a small amount of parallel bilingual corpora. The large amount of tedious manual annotation required to prepare massive parallel bilingual training data is thus avoided, and a translation quality comparable to training on a large amount of parallel bilingual corpora is reached with only a small amount of it, effectively reducing the up-front workload of training the neural machine translation model and improving its translation accuracy. In addition, the construction method is particularly suitable for low-resource languages, for which parallel bilingual corpora are difficult to obtain while monolingual corpora are comparatively easy to collect.

It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
