Method and system for replacing terms

Document No.: 1087398    Publication date: 2020-10-20

Reading note: This invention, "Method and system for replacing terms" (一种术语替换方法及系统), was designed and created by Cai Jie (蔡洁) on 2020-06-04. Its main content is as follows: an embodiment of the invention provides a method and system for replacing terms, comprising: for the current term in the reordered term table, performing word segmentation on the current term to obtain a plurality of sub-terms; inputting each sub-term and the target original text into a sparse word alignment model, obtaining the alignment probability of each sub-term with each predicted term translation, and selecting a target term translation; if all target term translations are consecutive, replacing them with the native translation; if the current term is not the last term, taking the next term in the reordered term table as the new current term and repeating the above process until the updated current term is the last one; and taking the finally obtained preset translation as the best translation. In the embodiment of the invention, a plurality of terms and their native translations are stored in a preset term library, and the non-native translations of the terms in the preset translation are then replaced with the native translations to obtain the final best translation.

1. A method for replacing terms, comprising:

for a current term in the reordered term table, performing word segmentation on the current term to obtain a plurality of sub-terms corresponding to the current term, wherein the term table comprises all terms in a target original text and a native translation corresponding to each term, and the reordered term table is obtained by ordering all terms in the term table according to a preset rule;

inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtaining alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting a target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training a plurality of word vector samples serving as training samples and reference word vectors serving as labels;

and if all target term translations corresponding to each sub-term are continuous, replacing all target term translations in the preset translation with the native translation corresponding to the current term, if the current term is not the last term in the reordered term table, taking the next term in the reordered term table as the current term again, repeating the process until the updated current term is the last term in the reordered term table, and taking the finally obtained preset translation as the best translation of the target original text.

2. The term replacement method according to claim 1, wherein the reordered term table is determined by:

and sorting all terms in the term table corresponding to the target original text in descending order of length to obtain the reordered term table.

3. The term replacement method according to claim 1, wherein the inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model and obtaining an alignment probability of each sub-term corresponding to the current term and each predicted term translation specifically includes:

inputting the target original text and each sub-term corresponding to the current term into an input layer and an alignment layer of a sparse word alignment model, and acquiring an alignment relation matrix of each sub-term corresponding to the current term and each predicted vocabulary vector, wherein the predicted vocabulary vector is a vector corresponding to the predicted term translation;

inputting the alignment relation matrix into a softmax function of the sparse word alignment model, and acquiring the alignment probability of each sub-term corresponding to the current term and each prediction term translation.

4. The term replacement method according to claim 1, wherein the sparse word alignment model is obtained by training using a plurality of vocabulary vector samples as training samples and using reference vocabulary vectors as labels, and specifically comprises:

inputting each vocabulary vector sample into the sparse word alignment model to obtain each predicted vocabulary vector;

calculating cross entropy loss between each predicted vocabulary vector and the reference vocabulary vector corresponding to each vocabulary vector sample;

and updating the parameters of the sparse word alignment model through back propagation, repeating the process, performing gradient descent to find a local optimal solution, and acquiring the trained sparse word alignment model.

5. The term replacement method as claimed in claim 3, wherein the input layer includes a first input unit and a second input unit, wherein:

the first input unit consists of N GRU neural networks, each GRU neural network is connected in sequence according to a preset direction, and N represents the number of all segmented words in the target original text;

the second input unit consists of N GRU neural networks, and each GRU neural network is sequentially connected in a direction opposite to the preset direction;

and each GRU neural network in the first input unit is connected with each GRU neural network in the second input unit in a one-to-one correspondence mode.

6. The term replacement method as claimed in claim 5, wherein the alignment layer is located behind the input layer, the alignment layer is composed of N GRU neural networks, each GRU neural network being connected in sequence according to the preset direction;

and each GRU neural network in the second input unit is connected with each GRU neural network in the alignment layer in a one-to-one correspondence mode.

7. The term replacement method according to claim 6, wherein the sparse word alignment model further comprises an output layer, the output layer is located behind the alignment layer, the output layer is composed of M GRU neural networks, each GRU neural network is sequentially connected according to the preset direction, and M represents the number of all segmented words in the predicted translation;

if M is larger than N, the first N GRU neural networks in the output layer are connected with the GRU neural networks in the alignment layer in a one-to-one correspondence;

and if M is smaller than N, each GRU neural network of the output layer is connected with the first M GRU neural networks in the alignment layer in a one-to-one correspondence.

8. The term replacement method according to claim 1, wherein the glossary is obtained by:

and inputting the target original text and a preset term library into a preset term matching model, and acquiring a term table corresponding to the target original text.

9. A term replacement system, comprising:

the word segmentation module is used for performing word segmentation processing on a current term in the reordered term list to obtain a plurality of sub-terms corresponding to the current term, wherein the term list comprises all terms in a target original text and a native translation corresponding to each term, and the reordered term list is obtained by ordering all terms in the term list according to a preset rule;

the alignment module is used for inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, acquiring the alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting the target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training by taking a plurality of word vector samples as training samples and reference word vectors as labels;

and the probability module is used for, if all target term translations corresponding to each sub-term are continuous, replacing all target term translations in the preset translation with the native translation corresponding to the current term; if the current term is not the last term in the reordered term table, re-using the next term in the reordered term table as the current term, repeating the above process until the updated current term is the last term in the reordered term table, and using the finally obtained preset translation as the optimal translation of the target original text.

10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the term replacement method as claimed in any one of claims 1 to 8 when executing the program.

Technical Field

The invention relates to the technical field of computers, in particular to a method and a system for replacing terms.

Background

In the translation process, because some words are polysemous or because different translators have different habits, the same word may be rendered into different translations. In order to translate certain important words consistently, the manager of a translation project fixes the translation of those words; such words are then called "terms".

For example:

Original text (source sentence): Ye Nanxian and Liang'er turned away from here.

Translation: Ye Nanxian and Liang er turned around and left.

Because the novel requires native (localized) renderings of the character names, the translation corresponding to "Ye Nanxian" should be "NathanielYe", and the translation corresponding to "Liang er" should be "Liang'er".

However, implementing term replacement by retraining the translation model is problematic when the translation model belongs to a third party: for example, a translator who uses the Google translation engine cannot apply a modified training strategy to influence Google's model. A post-editing method is therefore required to meet this requirement.

Therefore, a method and system for replacing terms are needed.

Disclosure of Invention

In order to solve the above problems, embodiments of the present invention provide a method and system for replacing terms.

In a first aspect, an embodiment of the present invention provides a method for replacing terms, including:

for a current term in the reordered term table, performing word segmentation on the current term to obtain a plurality of sub-terms corresponding to the current term, wherein the term table comprises all terms in a target original text and a native translation corresponding to each term, and the reordered term table is obtained by ordering all terms in the term table according to a preset rule;

inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtaining alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting a target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training a plurality of word vector samples serving as training samples and reference word vectors serving as labels;

and if all target term translations corresponding to each sub-term are continuous, replacing the target term translation in the preset translation with the native translation corresponding to the current term, if the current term is not the last term in the reordered term table, taking the next term in the reordered term table as the current term again, repeating the process until the updated current term is the last term in the reordered term table, and taking the finally obtained preset translation as the best translation of the target original text.

Preferably, the reordered glossary is specifically determined as follows:

and sorting all terms in the term table corresponding to the target original text in descending order of length to obtain the reordered term table.

Preferably, the inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, and obtaining an alignment probability of each sub-term corresponding to the current term and each predicted term translation specifically includes:

inputting the target original text and each sub-term corresponding to the current term into an input layer and an alignment layer of a sparse word alignment model, and acquiring an alignment relation matrix of each sub-term corresponding to the current term and each predicted vocabulary vector, wherein the predicted vocabulary vector is a vector corresponding to the predicted term translation;

inputting the alignment relation matrix into a softmax function of the sparse word alignment model, and acquiring the alignment probability of each sub-term corresponding to the current term and each prediction term translation.

Preferably, the sparse word alignment model is obtained by training a plurality of word vector samples as training samples and reference word vectors as labels, and specifically includes:

inputting each vocabulary vector sample into the sparse word alignment model to obtain each predicted vocabulary vector;

calculating cross entropy loss between each predicted vocabulary vector and the reference vocabulary vector corresponding to each vocabulary vector sample;

and updating the parameters of the sparse word alignment model through back propagation, repeating the process, performing gradient descent to find a local optimal solution, and acquiring the trained sparse word alignment model.

Preferably, the input layer comprises a first input unit and a second input unit, wherein:

the first input unit consists of N GRU neural networks, each GRU neural network is connected in sequence according to a preset direction, and N represents the number of all segmented words in the target original text;

the second input unit consists of N GRU neural networks, and each GRU neural network is sequentially connected in a direction opposite to the preset direction;

and each GRU neural network in the first input unit is connected with each GRU neural network in the second input unit in a one-to-one correspondence mode.

Preferably, the alignment layer is located behind the input layer, the alignment layer is composed of N GRU neural networks, and each GRU neural network is sequentially connected according to the preset direction;

and each GRU neural network in the second input unit is connected with each GRU neural network in the alignment layer in a one-to-one correspondence mode.

Preferably, the sparse word alignment model further includes an output layer, the output layer is located behind the alignment layer, the output layer is composed of M GRU neural networks, each GRU neural network is connected in sequence according to the preset direction, and M represents the number of all segmented words in the predicted translation;

if M is larger than N, the first N GRU neural networks in the output layer are connected with the GRU neural networks in the alignment layer in a one-to-one correspondence;

and if M is smaller than N, each GRU neural network of the output layer is connected with the first M GRU neural networks in the alignment layer in a one-to-one correspondence.

Preferably, the glossary is obtained by:

and inputting the target original text and a preset term library into a preset term matching model, and acquiring a term table corresponding to the target original text.

In a second aspect, an embodiment of the present invention provides a term replacement system, including:

the word segmentation module is used for performing word segmentation processing on a current term in the reordered term list to obtain a plurality of sub-terms corresponding to the current term, wherein the term list comprises all terms in a target original text and a native translation corresponding to each term, and the reordered term list is obtained by ordering all terms in the term list according to a preset rule;

the alignment module is used for inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, acquiring the alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting the target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training by taking a plurality of word vector samples as training samples and reference word vectors as labels;

and the probability module is used for, if all target term translations corresponding to each sub-term are continuous, replacing all target term translations in the preset translation with the native translation corresponding to the current term; if the current term is not the last term in the reordered term table, re-using the next term in the reordered term table as the current term, repeating the above process until the updated current term is the last term in the reordered term table, and using the finally obtained preset translation as the optimal translation of the target original text.

In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the term replacement method provided in the first aspect of the present invention.

In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of a term replacement method provided in the first aspect of the present invention.

According to the method and the system for replacing the terms, provided by the embodiment of the invention, a plurality of terms and the native translations corresponding to the terms are stored through the preset term library, and then the non-native translations of the terms in the preset translation are replaced by the native translations, so that the final optimal translation is obtained.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.

Fig. 1 is a flowchart of a term replacement method according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a sparse word alignment model in an embodiment of the present invention;

fig. 3 is a schematic flowchart of a term replacement method according to another embodiment of the present invention;

fig. 4 is a schematic structural diagram of a term replacement system according to an embodiment of the present invention;

fig. 5 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Fig. 1 is a flowchart of a term replacement method according to an embodiment of the present invention, as shown in fig. 1, the method includes:

s1, for the current term in the reordered term list, performing word segmentation on the current term to obtain a plurality of sub-terms corresponding to the current term, wherein the term list comprises all terms in the target original text and the localized translation corresponding to each term, and the reordered term list is obtained by ordering all terms in the term list according to a preset rule;

first, a current term in a reordered term table is obtained, word segmentation processing is performed on the current term, and a plurality of sub-terms corresponding to the current term are obtained, specifically, the term table includes all terms in a target original text and a native translation corresponding to each term.

Take as the target original text the source sentence "Ye Nanxian and Liang'er turn away". The preset translation corresponding to the target original text is "Ye Nanxian and Liang er turned around and left", and the target original text contains the two terms "Ye Nanxian" and "Liang'er", which are ordered in the glossary according to the preset rule. The glossary is [Ye Nanxian, NathanielYe] and [Liang'er, Liang'er], each entry consisting of a term followed by the native translation corresponding to that term.

The term "Ye Nanxian" is used as the current term for explanation: word segmentation of "Ye Nanxian" yields the three sub-terms "Ye", "Nan" and "Xian" (the three characters of the name in the source language).

S2, inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtaining the alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting the target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training by taking a plurality of word vector samples as training samples and reference word vectors as labels;

inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtaining alignment probability of each sub-term and the predicted term translation, and then selecting the target term translation from the predicted term translation according to a certain rule.

For example, the three sub-terms "Ye", "Nan" and "Xian" and the target original text "Ye Nanxian and Liang'er turn away" are input into the sparse word alignment model, and the alignment probability between "Ye" and each of the words "Ye", "Nanxian", "and", "Liang er", "turned", "around", "and", "left" is obtained. These words are the predicted term translations, i.e. the individual words of the translation corresponding to the target original text.

Having obtained the alignment probability between "Ye" and each of the words "Ye", "Nanxian", "and", "Liang er", "turned", "around", "and", "left", the embodiment of the present invention takes as the target term translation of "Ye" the predicted term translation with the highest alignment probability, provided that this probability is greater than a preset probability; experiments show that the target term translation of "Ye" is "Ye". In the embodiment of the invention, the value of the preset probability is 0.4, determined through experimental verification.

According to the same process, the target term translation of "Nan" and the target term translation of "Xian" are both found to be "Nanxian".
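
The selection rule above can be summarized in a minimal sketch; the function name, token list and probability values below are illustrative assumptions, and only the argmax-plus-threshold logic follows the embodiment.

```python
def select_target_translation(predicted_translations, align_probs, threshold=0.4):
    """Pick the predicted term translation with the highest alignment
    probability, provided it exceeds the preset probability threshold."""
    best_idx = max(range(len(align_probs)), key=lambda i: align_probs[i])
    if align_probs[best_idx] > threshold:
        return best_idx, predicted_translations[best_idx]
    return None  # no sufficiently confident alignment for this sub-term

# Example: the sub-term "Ye" against the words of the preset translation
# (the probabilities are made-up illustrative numbers).
tokens = ["Ye", "Nanxian", "and", "Liang er", "turned", "around", "and", "left"]
probs = [0.83, 0.06, 0.01, 0.02, 0.02, 0.02, 0.01, 0.03]
print(select_target_translation(tokens, probs))  # -> (0, 'Ye')
```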

Specifically, in the embodiment of the present invention, the sparse word alignment model is obtained by training with a plurality of vocabulary vector samples as training samples and reference vocabulary vectors as labels, where the reference vocabulary vectors correspond to the standard translations of the vocabulary vector samples.

S3, if all target term translations corresponding to each sub-term are consecutive, replacing the target term translation in the preset translation with the native translation corresponding to the current term, and if the current term is not the last term in the reordered glossary, re-using the next term in the reordered glossary as the current term, repeating the above process until the updated current term is the last term in the reordered glossary, and using the last obtained preset translation as the best translation of the target original text.

Specifically, if the target term translations corresponding to the sub-terms are consecutive, that is, the target term translations "Ye", "Nanxian" and "Nanxian" of the three sub-terms "Ye", "Nan" and "Xian" occupy consecutive positions in the preset translation "Ye Nanxian and Liang er turned around and left" (this continuity check is intended to prevent cross-alignment), then all target term translations in the preset translation are replaced with the native translation corresponding to the current term; that is, "Ye Nanxian" in "Ye Nanxian and Liang er turned around and left" is replaced with "NathanielYe", the native translation of "Ye Nanxian". The method then checks whether "Ye Nanxian" is the last term in the glossary; since it is not, the next term "Liang'er" is taken as the new current term, the above process is repeated, and "Liang er" in the preset translation is replaced with the native translation "Liang'er".

The finally obtained preset translation is then taken as the best translation of the target original text, namely "NathanielYe and Liang'er turned around and left".
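
The continuity check and span replacement walked through above can be sketched as follows; the helper name replace_term and the hand-picked alignment positions are illustrative, not part of the patent.

```python
def replace_term(preset_tokens, aligned_positions, native_translation):
    """If the aligned positions form an unbroken run in the preset
    translation, replace that span with the native translation."""
    positions = sorted(set(aligned_positions))
    if positions != list(range(positions[0], positions[-1] + 1)):
        return preset_tokens  # not consecutive: cross-alignment, leave unchanged
    return (preset_tokens[:positions[0]]
            + [native_translation]
            + preset_tokens[positions[-1] + 1:])

preset = ["Ye", "Nanxian", "and", "Liang er", "turned", "around", "and", "left"]
# sub-terms "Ye", "Nan", "Xian" aligned to positions 0, 1, 1 (picked by hand here)
preset = replace_term(preset, [0, 1, 1], "NathanielYe")
# next term "Liang'er": its translation "Liang er" now sits at position 2
preset = replace_term(preset, [2], "Liang'er")
print(" ".join(preset))  # NathanielYe and Liang'er turned around and left
```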

According to the term replacement method provided by the embodiment of the invention, a plurality of terms and the native translations corresponding to the terms are stored through the preset term library, and then the non-native translations of the terms in the preset translation are replaced by the native translations, so that the final optimal translation is obtained.

On the basis of the foregoing embodiment, preferably, the reordered glossary is specifically determined as follows:

and sorting all terms in the term table corresponding to the target original text in descending order of length to obtain the reordered term table.

Specifically, in the embodiment of the present invention, all terms in the term table are sorted by length in descending order to obtain the reordered term table. Longer terms are replaced first and shorter terms later, so that a shorter term contained within a longer term is not processed repeatedly, which would otherwise confuse the terms.
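
As a minimal illustration of this reordering rule, assuming glossary entries are represented as (term, native translation) pairs:

```python
# Glossary entries as (term, native translation) pairs, reordered so that
# longer terms are handled before shorter ones.
glossary = [("Liang'er", "Liang'er"), ("Ye Nanxian", "NathanielYe")]
reordered = sorted(glossary, key=lambda entry: len(entry[0]), reverse=True)
print(reordered)  # [('Ye Nanxian', 'NathanielYe'), ("Liang'er", "Liang'er")]
```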

On the basis of the foregoing embodiment, preferably, the inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, and obtaining an alignment probability of each sub-term corresponding to the current term and each predicted term translation specifically includes:

inputting the target original text and each sub-term corresponding to the current term into an input layer and an alignment layer of a sparse word alignment model, and acquiring an alignment relation matrix of each sub-term corresponding to the current term and each predicted vocabulary vector, wherein the predicted vocabulary vector is a vector corresponding to the predicted term translation;

inputting the alignment relation matrix into a softmax function of the sparse word alignment model, and acquiring the alignment probability of each sub-term corresponding to the current term and each prediction term translation.

Specifically, the target original text and each sub-term corresponding to the current term are input into the input layer of the sparse word alignment model to obtain an intermediate result, and the intermediate result is input into the alignment layer of the sparse word alignment model to obtain an alignment relation matrix of each sub-term corresponding to the current term and each predicted vocabulary vector. The predicted vocabulary vector is a vector corresponding to the predicted term translation, and the alignment relation matrix is a representation of the alignment between each sub-term and each predicted vocabulary vector.

And then inputting the alignment relation matrix into a softmax function of the sparse word alignment model to obtain the alignment probability of each sub-term corresponding to the current term and each predicted term translation.
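
The step from alignment relation matrix to alignment probabilities amounts to a row-wise softmax; a small sketch with made-up scores is shown below.

```python
import numpy as np

def alignment_probabilities(relation_matrix):
    """Row-wise softmax: one probability distribution over the words of the
    preset translation for each sub-term."""
    scores = np.asarray(relation_matrix, dtype=float)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)

# 2 sub-terms x 3 translation words; the scores are illustrative.
relation = [[4.0, 1.0, 0.1], [0.5, 3.5, 0.2]]
print(alignment_probabilities(relation).round(3))
```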

On the basis of the foregoing embodiment, preferably, the sparse word alignment model is obtained by training using a plurality of vocabulary vector samples as training samples and using reference vocabulary vectors as tags, and specifically includes:

inputting each vocabulary vector sample into the sparse word alignment model to obtain each predicted vocabulary vector;

calculating cross entropy loss between each predicted vocabulary vector and the reference vocabulary vector corresponding to each vocabulary vector sample;

and updating the parameters of the sparse word alignment model through back propagation, repeating the process, performing gradient descent to find a local optimal solution, and acquiring the trained sparse word alignment model.

Specifically, the sparse word alignment model is obtained by training a plurality of word vector samples serving as training samples and reference word vectors serving as labels, and the specific process is as follows:

Each vocabulary vector sample is input into the initialized sparse word alignment model to obtain the predicted vocabulary vectors, where the predicted vocabulary vectors are all of the possible vocabulary vectors predicted by the model from the vocabulary vector samples. The cross entropy loss between each predicted vocabulary vector and the corresponding reference vocabulary vector is then calculated: the predicted vocabulary vectors can be regarded as predicted values, the reference vocabulary vectors as standard values, and the loss measures the difference between them. The parameters of the sparse word alignment model are then updated through back propagation, continuously reducing this difference. The process is repeated until a local optimal solution is found through the gradient descent algorithm, yielding the trained sparse word alignment model.
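
A hedged sketch of this training procedure in PyTorch is shown below; the simplified model structure, tensor shapes and hyper-parameters are placeholders rather than the patent's actual configuration, and only the predict / cross-entropy / back-propagate / gradient-descent cycle mirrors the description above.

```python
import torch
import torch.nn as nn

class SparseWordAlignModel(nn.Module):
    """Placeholder for the sparse word alignment model (structure simplified:
    a single GRU plus a linear projection onto the target vocabulary)."""
    def __init__(self, emb_dim=64, hidden=128, vocab_size=1000):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):              # x: (batch, seq_len, emb_dim)
        states, _ = self.gru(x)
        return self.out(states)        # logits over the target vocabulary

model = SparseWordAlignModel()
criterion = nn.CrossEntropyLoss()      # cross-entropy loss of the second step
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent

# Toy batch: 8 vocabulary-vector samples of 5 steps; the labels stand in for
# the reference vocabulary vectors (here: reference vocabulary indices).
samples = torch.randn(8, 5, 64)
labels = torch.randint(0, 1000, (8, 5))

for epoch in range(10):                               # repeat the process
    logits = model(samples)                           # predicted vocabulary vectors
    loss = criterion(logits.reshape(-1, 1000), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                   # back-propagation
    optimizer.step()                                  # gradient-descent update
```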

On the basis of the above embodiment, preferably, the input layer includes a first input unit and a second input unit, wherein:

the first input unit consists of N GRU neural networks, each GRU neural network is connected in sequence according to a preset direction, and N represents the number of all segmented words in the target original text;

the second input unit consists of N GRU neural networks, and each GRU neural network is sequentially connected in a direction opposite to the preset direction;

and each GRU neural network in the first input unit is connected with each GRU neural network in the second input unit in a one-to-one correspondence mode.

Fig. 2 is a schematic structural diagram of a sparse word alignment model in an embodiment of the present invention, and as can be seen from fig. 2, an input layer is composed of a first input unit and a second input unit, the first input unit is composed of N GRU neural networks, and each GRU neural network is sequentially connected according to a preset direction, a direction indicated by a horizontal arrow in the drawing is the preset direction, and each GRU neural network in the first input unit is sequentially connected according to a rightward direction.

The second input unit is sequentially connected by the N GRU neural networks according to a direction opposite to the preset direction, as can be seen from the figure, the preset direction is a right direction, the direction opposite to the preset direction is a left direction, and each GRU neural network in the second input unit is sequentially connected according to the left direction.

And each GRU neural network in the first input unit is also connected with each GRU neural network in the second input unit in a one-to-one correspondence mode.

Specifically, the first input unit takes the vocabulary vector of each word of a sentence and inputs these vocabulary vectors into the neural network in order.

The second input unit feeds the same input once more in reverse order, so that the input order does not unduly influence the alignment result.
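
A rough sketch of such a two-unit input layer is given below, under the assumption that the forward and backward states are combined position by position to realize the one-to-one connection; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class InputLayer(nn.Module):
    """Sketch of the two input units: a forward GRU chain over the N word
    vectors in the preset direction and a backward chain over the same words
    in the opposite direction, combined position by position."""
    def __init__(self, emb_dim=64, hidden=128):
        super().__init__()
        self.forward_unit = nn.GRU(emb_dim, hidden, batch_first=True)
        self.backward_unit = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, x):                              # x: (batch, N, emb_dim)
        fwd, _ = self.forward_unit(x)                  # preset (left-to-right) direction
        bwd, _ = self.backward_unit(torch.flip(x, dims=[1]))  # reverse direction
        bwd = torch.flip(bwd, dims=[1])                # realign position i with position i
        return fwd + bwd                               # one-to-one combination

states = InputLayer()(torch.randn(2, 7, 64))           # N = 7 segmented words
print(states.shape)                                    # torch.Size([2, 7, 128])
```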

On the basis of the above embodiment, preferably, the alignment layer is composed of N GRU neural networks, and each GRU neural network is sequentially connected according to the preset direction;

and each GRU neural network in the second input unit is connected with each GRU neural network in the alignment layer in a one-to-one correspondence mode.

Specifically, the alignment layer is composed of N GRU neural networks, and each GRU neural network is sequentially connected in a rightward direction. And each GRU neural network in the second input unit is connected with each GRU neural network in the alignment layer in a one-to-one correspondence mode.

On the basis of the foregoing embodiment, preferably, the sparse word alignment model further includes an output layer, the output layer is composed of M GRU neural networks, each GRU neural network is connected in sequence according to the preset direction, and M represents the number of all segmented words in the predicted translation;

if M is larger than N, the first N GRU neural networks in the output layer are connected with the GRU neural networks in the alignment layer in a one-to-one correspondence;

and if M is smaller than N, each GRU neural network of the output layer is connected with the first M GRU neural networks in the alignment layer in a one-to-one correspondence.

Specifically, the output layer is composed of M GRU neural networks, each GRU neural network is connected in sequence in the rightward direction, and M represents the number of all segmented words in the predicted translation.

Since M is often not equal to N in practical applications, the GRU neural networks of the alignment layer and those of the output layer are paired in order, so that only the first min(M, N) positions are connected.
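
The sketch below shows one way the alignment layer, the output layer and the min(M, N) pairing could be wired; the dot-product comparison used to form the alignment relation matrix is an assumption, since the text does not specify how the paired states are combined.

```python
import torch
import torch.nn as nn

class AlignAndOutputLayers(nn.Module):
    """Sketch of the alignment layer (N GRU steps over the input-layer states)
    and the output layer (M GRU steps over the predicted-translation states);
    when M != N, only the first min(M, N) positions are paired one-to-one."""
    def __init__(self, hidden=128):
        super().__init__()
        self.align = nn.GRU(hidden, hidden, batch_first=True)
        self.output = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, input_states, translation_states):
        align_states, _ = self.align(input_states)        # (batch, N, hidden)
        out_states, _ = self.output(translation_states)   # (batch, M, hidden)
        k = min(align_states.size(1), out_states.size(1)) # pair the first k positions
        # alignment relation matrix between the paired positions (dot product)
        relation = torch.bmm(align_states[:, :k], out_states[:, :k].transpose(1, 2))
        return relation.softmax(dim=-1)                   # alignment probabilities

N, M = 7, 9
probs = AlignAndOutputLayers()(torch.randn(2, N, 128), torch.randn(2, M, 128))
print(probs.shape)  # torch.Size([2, 7, 7]): only the first min(M, N) positions pair up
```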

Based on this sparse word alignment model, extensive experiments verify that the method provided by the embodiment of the invention aligns sparse words better than the prior art.

On the basis of the above embodiments, preferably, the glossary is obtained by:

and inputting the target original text and a preset term library into a preset term matching model, and acquiring a term table corresponding to the target original text.

Specifically, the target original text and the preset term library are input into a preset term matching model, which finds the terms contained in the target original text. The preset term matching model is a trained neural network model, and the preset term library comprises a plurality of terms that require a native (localized) translation, such as [Ye Nanxian, NathanielYe], [Liang'er, Liang'er], [Kunlun City, Kunlun City], [exercise, maneuver], and the like.
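
The patent performs this matching with a trained neural term matching model; the substring lookup below is only a simple stand-in that shows what the resulting glossary looks like for the example entries above.

```python
def build_glossary(target_text, term_library):
    """Return (term, native translation) pairs for the library terms that
    actually occur in the target original text."""
    return [(term, native) for term, native in term_library if term in target_text]

term_library = [("Ye Nanxian", "NathanielYe"), ("Liang'er", "Liang'er"),
                ("Kunlun City", "Kunlun City"), ("exercise", "maneuver")]
text = "Ye Nanxian and Liang'er turned around and left here."
print(build_glossary(text, term_library))
```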

Fig. 3 is a schematic flowchart of a term replacement method according to another embodiment of the present invention, as shown in fig. 3, a target original text and a preset term library are input into a preset term matching model, and a term table after matching is obtained.

The sub-terms corresponding to the matched terms and the target original text are then input into the sparse word alignment model to obtain the words of the translation that correspond to each term in the original text, and those words of the translation are replaced with the native translation of the corresponding term in the glossary.

Fig. 4 is a schematic structural diagram of a term replacement system according to an embodiment of the present invention, and as shown in fig. 4, the system includes: a segmentation module 401, an alignment module 402 and a probability module 403. Wherein:

the word segmentation module 401 is configured to perform word segmentation on a current term in a reordered term table to obtain a plurality of sub-terms corresponding to the current term, where the term table includes all terms in a target original text and a native translation corresponding to each term, and the reordered term table is obtained by ordering all terms in the term table according to a preset rule;

the alignment module 402 is configured to input each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtain an alignment probability between each sub-term corresponding to the current term and each predicted term translation, and select a target term translation from all the predicted term translations, where the sparse word alignment model is obtained by training using a plurality of word vector samples as training samples and using a reference word vector as a tag;

the probability module 403 is configured to replace the target term translation in the preset translation with the native translation corresponding to the current term if all target term translations corresponding to each sub-term are consecutive, and if the current term is not the last term in the reordered term table, re-use the next term in the reordered term table as the current term, and repeat the above process until the updated current term is the last term in the reordered term table, and use the finally obtained preset translation as the best translation of the target original text.

The system embodiment provided in the embodiments of the present invention is intended to implement the above method embodiments; for the specific process and details, reference is made to the above method embodiments, which are not described herein again.

Fig. 5 is a schematic entity structure diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 5, the electronic device may include: a processor (processor)501, a communication Interface (Communications Interface)502, a memory (memory)503, and a bus 504, wherein the processor 501, the communication Interface 502, and the memory 503 are configured to communicate with each other via the bus 504. The communication interface 502 may be used for information transfer of an electronic device. The processor 501 may call logic instructions in the memory 503 to perform a method comprising:

for a current term in the reordered term table, performing word segmentation on the current term to obtain a plurality of sub-terms corresponding to the current term, wherein the term table comprises all terms in a target original text and a native translation corresponding to each term, and the reordered term table is obtained by ordering all terms in the term table according to a preset rule;

inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtaining alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting a target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training a plurality of word vector samples serving as training samples and reference word vectors serving as labels;

and if all target term translations corresponding to each sub-term are continuous, replacing all target term translations in the preset translation with the native translation corresponding to the current term, if the current term is not the last term in the reordered term table, taking the next term in the reordered term table as the current term again, repeating the process until the updated current term is the last term in the reordered term table, and taking the finally obtained preset translation as the best translation of the target original text.

In addition, the logic instructions in the memory 503 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-described method embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the term replacement method provided in the foregoing embodiments, the method including, for example:

for a current term in the reordered term table, performing word segmentation on the current term to obtain a plurality of sub-terms corresponding to the current term, wherein the term table comprises all terms in a target original text and a native translation corresponding to each term, and the reordered term table is obtained by ordering all terms in the term table according to a preset rule;

inputting each sub-term corresponding to the current term and the target original text into a sparse word alignment model, obtaining alignment probability of each sub-term corresponding to the current term and each predicted term translation, and selecting a target term translation from all the predicted term translations, wherein the sparse word alignment model is obtained by training a plurality of word vector samples serving as training samples and reference word vectors serving as labels;

and if all target term translations corresponding to each sub-term are continuous, replacing all target term translations in the preset translation with the native translation corresponding to the current term, if the current term is not the last term in the reordered term table, taking the next term in the reordered term table as the current term again, repeating the process until the updated current term is the last term in the reordered term table, and taking the finally obtained preset translation as the best translation of the target original text.

The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.

Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
