Method and device for generating story, computer equipment and medium

Document No.: 1127740  Publication date: 2020-10-02

Reading note: The technical solution "Method and device for generating story, computer equipment and medium" was designed and created by 席亚东, 毛晓曦, 李乐, 林磊, 江琳, 陈彦江, 杨淑涵, 曾歌鸽, 李智, 范长杰 and 胡志鹏 on 2020-06-28. Its main content is as follows: the application provides a method, an apparatus, a computer device and a medium for story generation, the method comprising: acquiring a first story paragraph input by a user; inputting both the first story paragraph and the character names appearing in the first story paragraph into a trained sentence generation model to obtain a target second story paragraph, where the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph; and feeding the target second story paragraph back to the user. By also feeding the character names appearing in the first story paragraph into the trained sentence generation model, the application increases the probability that those character names appear in the target second story paragraph.

1. A method of story generation, comprising:

acquiring a first story paragraph input by a user;

inputting the first story paragraph and the character names appearing in the first story paragraph into a trained sentence generation model to obtain a target second story paragraph; the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph;

and feeding back the target second story paragraph to the user.

2. The method of claim 1, wherein the name of the character appearing in the first story paragraph is user-entered; or the character name appearing in the first story paragraph is obtained by segmenting the first story paragraph through a segmentation model.

3. The method of claim 1, wherein inputting the first story paragraph and the names of characters appearing in the first story paragraph into a trained sentence generation model to obtain a target second story paragraph comprises:

inputting the first story paragraph and the character names appearing in the first story paragraph into the trained sentence generation model multiple times to obtain multiple candidate second story paragraphs;

and selecting the target second story paragraph from the multiple candidate second story paragraphs according to the logical coherence of each candidate second story paragraph with the first story paragraph.

4. The method of claim 3, wherein selecting the target second story paragraph from a plurality of candidate second story paragraphs based on a logical coherence of each of the candidate second story paragraphs with the first story paragraph comprises:

for each candidate second story paragraph, determining the logical progression efficiency of the candidate second story paragraph according to the logical coherence between adjacent short sentences within the candidate second story paragraph;

and selecting the target second story paragraph from the multiple candidate second story paragraphs according to both the logical coherence of each candidate second story paragraph with the first story paragraph and the logical progression efficiency of each candidate second story paragraph.

5. The method of claim 1, wherein said feeding back said target second story paragraph to a user comprises:

judging whether the target second story paragraph is the last paragraph;

if the target second story paragraph is not the last paragraph, taking the target second story paragraph as a current story paragraph;

inputting the current story paragraph and the character names appearing in the current story paragraph into the trained sentence generation model multiple times to obtain multiple candidate next story paragraphs;

selecting a target next story paragraph from the multiple candidate next story paragraphs according to the logical coherence of each candidate next story paragraph with the current story paragraph; the character names appearing in the target next story paragraph are the same as at least some of the character names appearing in the current story paragraph;

and feeding back the current story paragraph and the target next story paragraph to the user.

6. The method of claim 5, further comprising, prior to feeding back the current story paragraph and the target next story paragraph to a user:

judging whether the target next story paragraph is the last paragraph;

if the target next story paragraph is not the last paragraph, taking the target next story paragraph as the current story paragraph, and executing the step of inputting the current story paragraph and the character names appearing in the current story paragraph into the trained sentence generation model multiple times to obtain multiple candidate next story paragraphs;

and if the target next story paragraph is the last paragraph, executing the step of feeding back the current story paragraph and the target next story paragraph to the user.

7. The method of claim 5, wherein selecting a target next story paragraph from a plurality of the candidate next story paragraphs based on a logical coherence of each of the candidate next story paragraphs with the current story paragraph comprises:

respectively calculating the logical coherence and the logical progression efficiency of each candidate next story paragraph; the logical coherence is the logical coherence of the candidate next story paragraph with the current story paragraph;

selecting the target next story paragraph from a plurality of the candidate next story paragraphs according to the logical coherence and the logical progression efficiency of each candidate next story paragraph.

8. The method of claim 7, further comprising:

respectively calculating the content repetition degree of each candidate next story paragraph; the content repetition degree includes either or both of a character name repetition degree and a text repetition degree; the character name repetition degree is determined according to the number of character names that appear in the candidate next story paragraph but do not appear in the current story paragraph; the text repetition degree is determined according to the degree of repetition among the different short sentences appearing in the candidate next story paragraph;

selecting the target next story paragraph from the multiple candidate next story paragraphs according to the content repetition degree, the logical coherence, and the logical progression efficiency of each candidate next story paragraph.

9. The method of claim 8, wherein selecting the target next story paragraph from the multiple candidate next story paragraphs according to the content repetition degree, the logical coherence, and the logical progression efficiency of each candidate next story paragraph comprises:

removing, from the candidate next story paragraphs, those whose content repetition degree is greater than a preset value;

and selecting the target next story paragraph from the remaining candidate next story paragraphs, i.e. those whose content repetition degree is not greater than the preset value, according to the logical coherence and the logical progression efficiency.

10. The method of claim 8, wherein selecting the target next story paragraph from the multiple candidate next story paragraphs according to the content repetition degree, the logical coherence, and the logical progression efficiency of each candidate next story paragraph comprises:

acquiring a writing requirement input by a user; the writing requirement includes any one or more of the following: a logical coherence attribute of the story paragraph, a logical progression efficiency attribute of the story paragraph, and a content repetition degree attribute of the story paragraph;

and selecting the target next story paragraph from the candidate next story paragraphs according to the logical coherence corresponding to the logical coherence attribute of the story paragraph, the logical progression efficiency corresponding to the logical progression efficiency attribute of the story paragraph, and the content repetition degree corresponding to the content repetition degree attribute of the story paragraph.

11. The method of claim 7, wherein the logical coherence is calculated by:

vectorizing each short sentence in the candidate next story paragraph and the last short sentence in the current story paragraph to obtain a short sentence vector of each short sentence in the candidate next story paragraph and a short sentence vector of the last short sentence in the current story paragraph;

calculating the cosine similarity between each short sentence in the candidate next story paragraph and the last short sentence of the current story paragraph according to the short sentence vector of each short sentence in the candidate next story paragraph and the short sentence vector of the last short sentence of the current story paragraph;

and determining the logical coherence of the candidate next story paragraph with the current story paragraph as the mean of the cosine similarities between each short sentence in the candidate next story paragraph and the last short sentence of the current story paragraph.

12. The method of claim 7, wherein the logical progression efficiency is calculated by:

vectorizing each short sentence in the candidate next story paragraph to obtain a short sentence vector of each short sentence in the candidate next story paragraph;

for each short sentence in the candidate next story paragraph, calculating the cosine similarity between that short sentence and the following short sentence according to their short sentence vectors;

and determining the logical progression efficiency of the candidate next story paragraph according to the cosine similarity between each pair of adjacent short sentences in the candidate next story paragraph.

13. The method of claim 5, wherein inputting the names of characters appearing in the current story paragraph and the current story paragraph into a trained sentence generation model a plurality of times to obtain a plurality of candidate next story paragraphs comprises:

inputting the current story paragraph and the character names appearing in the current story paragraph into a trained sentence generation model to obtain a first short sentence of the candidate next story paragraph, and taking the first short sentence as the current short sentence;

inputting the current short sentence into the trained sentence generation model to obtain a next short sentence in the candidate next story paragraph;

judging whether the next short sentence is the last short sentence or not;

if the next short sentence is not the last short sentence, taking the next short sentence as the current short sentence, and returning to the step of inputting the current short sentence into the trained sentence generation model to obtain the next short sentence in the candidate next story paragraph;

and if the next short sentence is the last short sentence, forming the candidate next story paragraph from the first short sentence and each output next short sentence.

14. The method of claim 13, wherein inputting the current story paragraph and the names of characters appearing in the current story paragraph into a trained sentence generation model to obtain the first short sentence of the candidate next story paragraph comprises:

inputting the current story paragraph and the character names appearing in the current story paragraph into a trained sentence generation model to obtain a first word of a first short sentence in a candidate next story paragraph, and taking the first word as a current word;

inputting the current word into the trained sentence generation model to obtain a next word of a first short sentence in the candidate next story paragraph;

judging whether the next word is the last word;

if the next word is not the last word, taking the next word as the current word, and returning to the step of inputting the current word into the trained sentence generation model to obtain the next word of the first short sentence in the candidate next story paragraph;

and if the next word is the last word, forming the first short sentence from the first word and each output next word.

15. The method of claim 1, wherein the sentence generation model is trained by:

obtaining a training sample; the training sample is the front content of the reference story;

inputting the front content and the character name of the reference story into a sentence generation model to be trained to obtain a first output result; the character names are obtained by segmenting the reference stories through a segmentation model;

comparing the first output result with the back content of the reference story to generate a first loss function;

and adjusting the sentence generation model to be trained according to the first loss function.

16. The method of claim 15, wherein inputting both the front content of the reference story and the character names into the sentence generation model to be trained to obtain a first output result comprises:

inputting the front content of the reference story, the character name of the front content of the reference story and the short sentence type of each short sentence in the front content of the reference story into a sentence generation model to be trained to obtain a first output result; the character names are obtained by segmenting the reference stories through a segmentation model.

17. A story generation apparatus, comprising:

the acquisition module is used for acquiring a first story paragraph input by a user;

the generation module is used for inputting the first story paragraph and the character names appearing in the first story paragraph into a trained sentence generation model to obtain a target second story paragraph; the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph;

and the feedback module is used for feeding back the target second story paragraph to the user.

18. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of the preceding claims 1-16 are implemented by the processor when executing the computer program.

19. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1 to 16.

Technical Field

The present application relates to the field of story generation, and in particular, to a method, an apparatus, a computer device, and a medium for story generation.

Background

With the development of technology, electronic devices are increasingly woven into people's lives; reading is no longer limited to paper books, and a great deal of it now happens on electronic devices. Many people pass their leisure time reading.

Generally, most reading material is still written manually, which is slow and inefficient. To improve writing efficiency, deep learning models are increasingly used for writing, but the content they produce is relatively random and has low logical coherence.

Disclosure of Invention

In view of the above, an object of the present application is to provide a method, an apparatus, a computer device and a medium for story generation, which are used to solve the problem in the prior art that the story line of a story segment generated by using a model is not compact.

In a first aspect, an embodiment of the present application provides a method for story generation, including:

acquiring a first story paragraph input by a user;

inputting the first story paragraph and the character names appearing in the first story paragraph into a trained sentence generation model to obtain a target second story paragraph; the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph;

and feeding back the target second story paragraph to the user.

Optionally, the names of characters appearing in the first story paragraph are user-input; or the character name appearing in the first story paragraph is obtained by segmenting the first story paragraph through a segmentation model.

Optionally, the inputting both the first story paragraph and the character name appearing in the first story paragraph into the trained sentence generation model to obtain a target second story paragraph includes:

inputting the first story paragraph and the character names appearing in the first story paragraph into the trained sentence generation model multiple times to obtain multiple candidate second story paragraphs;

and selecting the target second story paragraph from the multiple candidate second story paragraphs according to the logical coherence of each candidate second story paragraph with the first story paragraph.
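The sample-then-select procedure above can be sketched in Python. Here `sample_fn` and `coherence_fn` are hypothetical stand-ins for one stochastic call to the trained sentence generation model and for the coherence scorer described later; neither name comes from the application.

```python
def pick_target_paragraph(sample_fn, coherence_fn, first_paragraph,
                          names, n_candidates=5):
    """Sample several candidate second story paragraphs and keep the one
    most logically coherent with the first story paragraph."""
    # Each call to sample_fn is one pass through the (stochastic)
    # sentence generation model with the same paragraph and names.
    candidates = [sample_fn(first_paragraph, names)
                  for _ in range(n_candidates)]
    # Keep the candidate that scores highest against the first paragraph.
    return max(candidates, key=lambda c: coherence_fn(c, first_paragraph))
```

Because the model is sampled rather than decoded greedily, the candidates differ, and the coherence score acts as a reranker over them.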

Optionally, the selecting the target second story paragraph from the candidate second story paragraphs according to the logical coherence between each candidate second story paragraph and the first story paragraph includes:

for each candidate second story paragraph, determining the logical progression efficiency of the candidate second story paragraph according to the logical coherence between adjacent short sentences within the candidate second story paragraph;

and selecting the target second story paragraph from the multiple candidate second story paragraphs according to both the logical coherence of each candidate second story paragraph with the first story paragraph and the logical progression efficiency of each candidate second story paragraph.

Optionally, the feeding back the target second story paragraph to the user includes:

judging whether the target second story paragraph is the last paragraph;

if the target second story paragraph is not the last paragraph, taking the target second story paragraph as a current story paragraph;

inputting the current story paragraph and the character names appearing in the current story paragraph into the trained sentence generation model multiple times to obtain multiple candidate next story paragraphs;

selecting a target next story paragraph from the multiple candidate next story paragraphs according to the logical coherence of each candidate next story paragraph with the current story paragraph; the character names appearing in the target next story paragraph are the same as at least some of the character names appearing in the current story paragraph;

and feeding back the current story paragraph and the target next story paragraph to the user.

Optionally, before feeding back the current story paragraph and the target next story paragraph to the user, the method further includes:

judging whether the target next story paragraph is the last paragraph;

if the target next story paragraph is not the last paragraph, taking the target next story paragraph as the current story paragraph, and executing the step of inputting the current story paragraph and the character names appearing in the current story paragraph into the trained sentence generation model multiple times to obtain multiple candidate next story paragraphs;

and if the target next story paragraph is the last paragraph, executing the step of feeding back the current story paragraph and the target next story paragraph to the user.
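The paragraph-by-paragraph loop described above can be sketched as follows. `extract_names`, `next_paragraph_fn`, and `is_last_fn` are hypothetical stand-ins for the word segmentation model, the sample-and-select step, and the last-paragraph check; the `max_paragraphs` cap is an added safety bound, not part of the application.

```python
def generate_story(first_paragraph, extract_names, next_paragraph_fn,
                   is_last_fn, max_paragraphs=20):
    """Repeatedly treat the newest target paragraph as the current
    paragraph and generate the next one until the last paragraph."""
    story = [first_paragraph]
    current = first_paragraph
    while not is_last_fn(current) and len(story) < max_paragraphs:
        # Feed the current paragraph and its character names back in.
        current = next_paragraph_fn(current, extract_names(current))
        story.append(current)
    return story
```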

Optionally, the selecting a target next story paragraph from a plurality of candidate next story paragraphs according to the logical coherence between each candidate next story paragraph and the current story paragraph includes:

respectively calculating the logical coherence and the logical progression efficiency of each candidate next story paragraph; the logical coherence is the logical coherence of the candidate next story paragraph with the current story paragraph;

selecting the target next story paragraph from a plurality of the candidate next story paragraphs according to the logical coherence and the logical progression efficiency of each candidate next story paragraph.

Optionally, the method further includes:

respectively calculating the content repetition degree of each candidate next story paragraph; the content repetition degree includes either or both of a character name repetition degree and a text repetition degree; the character name repetition degree is determined according to the number of character names that appear in the candidate next story paragraph but do not appear in the current story paragraph; the text repetition degree is determined according to the degree of repetition among the different short sentences appearing in the candidate next story paragraph;

selecting the target next story paragraph from the multiple candidate next story paragraphs according to the content repetition degree, the logical coherence, and the logical progression efficiency of each candidate next story paragraph.
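The two repetition measures can be sketched directly from the definitions above. The counting rules (a set difference for new names, duplicate short sentences for repeated text) are minimal interpretations; the application does not fix exact formulas.

```python
def name_repetition_degree(candidate_names, current_names):
    """Number of character names in the candidate paragraph that never
    appear in the current paragraph (new, unexpected names)."""
    return len(set(candidate_names) - set(current_names))

def text_repetition_degree(clauses):
    """Fraction of short sentences that duplicate an earlier short
    sentence within the same candidate paragraph."""
    seen, duplicates = set(), 0
    for clause in clauses:
        if clause in seen:
            duplicates += 1
        seen.add(clause)
    return duplicates / len(clauses) if clauses else 0.0
```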

Optionally, selecting the target next story paragraph from the multiple candidate next story paragraphs according to the content repetition degree, the logical coherence, and the logical progression efficiency of each candidate next story paragraph includes:

removing, from the candidate next story paragraphs, those whose content repetition degree is greater than a preset value;

and selecting the target next story paragraph from the remaining candidate next story paragraphs, i.e. those whose content repetition degree is not greater than the preset value, according to the logical coherence and the logical progression efficiency.
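The filter-then-rank step above can be sketched as follows. The per-candidate score fields and the equal-weight sum used to combine coherence and progression efficiency are assumptions; the application does not fix a combination rule.

```python
def select_next_paragraph(candidates, preset_value):
    """candidates: list of dicts with 'text', 'repetition', 'coherence'
    and 'progression' fields (illustrative names, not from the patent)."""
    # Discard candidates whose content repetition exceeds the preset value.
    kept = [c for c in candidates if c["repetition"] <= preset_value]
    # Rank the remainder by a combined coherence + progression score.
    return max(kept, key=lambda c: c["coherence"] + c["progression"])
```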

Optionally, selecting the target next story paragraph from the multiple candidate next story paragraphs according to the content repetition degree, the logical coherence, and the logical progression efficiency of each candidate next story paragraph includes:

acquiring a writing requirement input by a user; the writing requirement includes any one or more of the following: a logical coherence attribute of the story paragraph, a logical progression efficiency attribute of the story paragraph, and a content repetition degree attribute of the story paragraph;

and selecting the target next story paragraph from the candidate next story paragraphs according to the logical coherence corresponding to the logical coherence attribute of the story paragraph, the logical progression efficiency corresponding to the logical progression efficiency attribute of the story paragraph, and the content repetition degree corresponding to the content repetition degree attribute of the story paragraph.

Optionally, the logical coherence is calculated by:

vectorizing each short sentence in the candidate next story paragraph and the last short sentence in the current story paragraph to obtain a short sentence vector of each short sentence in the candidate next story paragraph and a short sentence vector of the last short sentence in the current story paragraph;

calculating the cosine similarity between each short sentence in the candidate next story paragraph and the last short sentence of the current story paragraph according to the short sentence vector of each short sentence in the candidate next story paragraph and the short sentence vector of the last short sentence of the current story paragraph;

and determining the logical coherence of the candidate next story paragraph with the current story paragraph as the mean of the cosine similarities between each short sentence in the candidate next story paragraph and the last short sentence of the current story paragraph.
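A minimal sketch of this coherence score, assuming a toy bag-of-words vectorization (the application leaves the vectorization method open; a sentence embedding model could be substituted):

```python
import math
from collections import Counter

def _vec(clause):
    # Toy bag-of-words vector; a real system might use sentence embeddings.
    return Counter(clause.split())

def _cos(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def logical_coherence(candidate_clauses, last_current_clause):
    """Mean cosine similarity between each short sentence of the candidate
    paragraph and the last short sentence of the current paragraph."""
    last_vec = _vec(last_current_clause)
    sims = [_cos(_vec(c), last_vec) for c in candidate_clauses]
    return sum(sims) / len(sims) if sims else 0.0
```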

Optionally, the logic progression efficiency is calculated by:

vectorizing each short sentence in the candidate next story paragraph to obtain a short sentence vector of each short sentence in the candidate next story paragraph;

for each short sentence in the candidate next story paragraph, calculating the cosine similarity between that short sentence and the following short sentence according to their short sentence vectors;

and determining the logical progression efficiency of the candidate next story paragraph according to the cosine similarity between each pair of adjacent short sentences in the candidate next story paragraph.
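Under the same toy bag-of-words vectorization, the progression-efficiency score reduces to averaging the cosine similarity of adjacent short sentences. Whether a higher or lower mean counts as "more efficient" progression is left open by the application, so this sketch simply reports the mean.

```python
import math
from collections import Counter

def _vec(clause):
    # Toy bag-of-words vector; a real system might use sentence embeddings.
    return Counter(clause.split())

def _cos(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def logical_progression_efficiency(clauses):
    """Mean cosine similarity between each pair of adjacent short
    sentences in the candidate paragraph."""
    sims = [_cos(_vec(a), _vec(b)) for a, b in zip(clauses, clauses[1:])]
    return sum(sims) / len(sims) if sims else 0.0
```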

Optionally, inputting the current story paragraph and the character names appearing in the current story paragraph into the trained sentence generation model multiple times to obtain multiple candidate next story paragraphs includes:

inputting the current story paragraph and the character names appearing in the current story paragraph into a trained sentence generation model to obtain a first short sentence of the candidate next story paragraph, and taking the first short sentence as the current short sentence;

inputting the current short sentence into the trained sentence generation model to obtain a next short sentence in the candidate next story paragraph;

judging whether the next short sentence is the last short sentence or not;

if the next short sentence is not the last short sentence, taking the next short sentence as the current short sentence, and returning to the step of inputting the current short sentence into the trained sentence generation model to obtain the next short sentence in the candidate next story paragraph;

and if the next short sentence is the last short sentence, forming the candidate next story paragraph from the first short sentence and each output next short sentence.
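The clause-by-clause loop can be sketched with two hypothetical callables standing in for the trained model's two call patterns: `first_clause_fn` takes the paragraph and character names, `next_clause_fn` takes only the previous short sentence, and each returns the generated short sentence plus a flag marking the last one. These interfaces are assumptions for illustration.

```python
def generate_candidate_paragraph(first_clause_fn, next_clause_fn,
                                 current_paragraph, names):
    """Build one candidate next story paragraph short sentence by
    short sentence."""
    clause, is_last = first_clause_fn(current_paragraph, names)
    clauses = [clause]
    while not is_last:
        # Feed the newest short sentence back in to get the next one.
        clause, is_last = next_clause_fn(clause)
        clauses.append(clause)
    return clauses
```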

Optionally, the inputting the current story paragraph and the names of characters appearing in the current story paragraph into a sentence generation model after training to obtain a first short sentence of the candidate next story paragraph includes:

inputting the current story paragraph and the character names appearing in the current story paragraph into a trained sentence generation model to obtain a first word of a first short sentence in a candidate next story paragraph, and taking the first word as a current word;

inputting the current word into the trained sentence generation model to obtain a next word of a first short sentence in the candidate next story paragraph;

judging whether the next word is the last word;

if the next word is not the last word, taking the next word as the current word, and returning to the step of inputting the current word into the trained sentence generation model to obtain the next word of the first short sentence in the candidate next story paragraph;

and if the next word is the last word, forming the first short sentence from the first word and each output next word.
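The word-level loop mirrors the clause-level one. `model_step` is a hypothetical single-step interface returning the next word together with a last-word flag; the application only says the model determines when the last word is reached, so this signature is an assumption.

```python
def generate_first_clause(model_step, current_paragraph, names):
    """Build the first short sentence of the candidate paragraph word by
    word; model_step returns (next_word, is_last_word)."""
    word, is_last = model_step(current_paragraph, names, prev=None)
    words = [word]
    while not is_last:
        # Feed the newest word back in to get the next one.
        word, is_last = model_step(current_paragraph, names, prev=word)
        words.append(word)
    return " ".join(words)
```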

Optionally, the sentence generation model is obtained by training through the following steps:

obtaining a training sample; the training sample is the front content of the reference story;

inputting the front content and the character name of the reference story into a sentence generation model to be trained to obtain a first output result; the character names are obtained by segmenting the reference stories through a segmentation model;

comparing the first output result with the back content of the reference story to generate a first loss function;

and adjusting the sentence generation model to be trained according to the first loss function.
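The data preparation implied by these training steps can be sketched as follows. The whitespace split point and the lexicon-lookup "segmentation" are toy substitutes: the application uses a trained word segmentation model and does not fix how a reference story is divided into front and back content.

```python
def build_training_sample(reference_story, name_lexicon, split_index):
    """Split a reference story into front content (model input) and back
    content (training target), and extract character names."""
    words = reference_story.split()
    front = " ".join(words[:split_index])
    back = " ".join(words[split_index:])
    # Toy stand-in for the segmentation model: keep words found in a
    # known-name lexicon, preserving order without duplicates.
    names = list(dict.fromkeys(w for w in words if w in name_lexicon))
    return {"input": (front, names), "target": back}
```

The model's output for `input` would then be compared against `target` to form the first loss function used to adjust the model.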

Optionally, inputting both the front content of the reference story and the character names into the sentence generation model to be trained to obtain a first output result includes:

inputting the front content of the reference story, the character name of the front content of the reference story and the short sentence type of each short sentence in the front content of the reference story into a sentence generation model to be trained to obtain a first output result; the character names are obtained by segmenting the reference stories through a segmentation model.

In a second aspect, an embodiment of the present application provides a story generation apparatus, including:

the acquisition module is used for acquiring a first story paragraph input by a user;

the generation module is used for inputting the first story paragraph and the character names appearing in the first story paragraph into a trained sentence generation model to obtain a target second story paragraph; the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph;

and the feedback module is used for feeding back the target second story paragraph to the user.

In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.

In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, performs the steps of the above method.

The method for generating a story provided by the application first acquires a first story paragraph input by a user; then inputs both the first story paragraph and the character names appearing in it into a trained sentence generation model to obtain a target second story paragraph, where the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph; and finally feeds the target second story paragraph back to the user.

In this way, because the first story paragraph and the character names appearing in it are both input into the trained sentence generation model, the character names in the target second story paragraph at least partially match those in the first story paragraph. This raises the probability that names from the first story paragraph reappear in the target second story paragraph, keeps the plot between the two paragraphs compact and logically tight, and avoids the disordered character names produced by prior approaches.

In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.

Fig. 1 is a schematic flowchart of a method for generating a story according to an embodiment of the present disclosure;

fig. 2 is a schematic flowchart of a training method for a sentence generation model according to an embodiment of the present application;

fig. 3 is a schematic structural diagram of a story generation apparatus provided in an embodiment of the present application;

fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed application, but merely represents selected embodiments. All other embodiments that can be derived by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application.

A novel is a literary genre that develops a narrative around one or more characters, and a novel may be composed of multiple stories. Writing a good novel generally requires extensive life experience, so excellent novels are rare. In recent years, with the growing reading demand of the public, a large number of web novels have appeared; however, because novels are written manually, output remains low relative to the number of readers. To improve production efficiency, techniques for generating novels with language models have been developed, but the character names in novels generated this way are random: names that never appeared in the preceding content frequently appear in subsequent paragraphs, so the generated novel breaks the plot or jumps out of the original setting, leading to logical deviations.

In view of the above situation, an embodiment of the present application provides a story generation method, as shown in fig. 1, including the following steps:

S101, acquiring a first story paragraph input by a user;

S102, inputting the first story paragraph and the character names appearing in the first story paragraph into the trained sentence generation model to obtain a target second story paragraph; the names of characters appearing in the target second story paragraph are the same as the names of characters appearing in at least a portion of the first story paragraph;

S103, feeding back the target second story paragraph to the user.

In the above step S101, the first story paragraph is the story preamble content (such as the beginning of a novel) written by the user, and mainly introduces the character names, basic plot, and the like appearing in the novel. The first story paragraph is the basis of the novel, from which subsequent content can be derived. It may be entered by the user in a separate application or in a prompt interface presented during a game (i.e., the scheme may be a standalone novel-generation APP or an additional function in some programs, such as games). Only after the first story paragraph is acquired can steps S102-S103 be performed.
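Steps S101-S103 can be sketched as follows. Here `segment_names` and `generate_paragraph` are illustrative stand-ins for the word segmentation model and the trained sentence generation model; their names and signatures are assumptions for the sketch, not part of the method itself.

```python
def story_generation(first_paragraph, segment_names, generate_paragraph):
    """S101-S103: take the acquired first story paragraph, generate a target
    second paragraph conditioned on it and on its character names, and return
    the result so it can be fed back to the user."""
    names = segment_names(first_paragraph)                         # names in the S101 input
    second_paragraph = generate_paragraph(first_paragraph, names)  # S102
    return second_paragraph                                        # S103: fed back to the user

# Toy stand-ins for the two models (assumptions, not the real models):
names_of = lambda text: [w for w in ("Alice", "Bob") if w in text]
model = lambda para, names: f"{' and '.join(names)} continued their journey."
out = story_generation("Alice met Bob at the harbour.", names_of, model)
# out == "Alice and Bob continued their journey."
```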

In the above step S102, the character names are the names of story characters that may appear in the novel the user wishes to obtain. The character names appearing in the first story paragraph may be input by the user; alternatively, they may be obtained by segmenting the first story paragraph with a word segmentation model.

The word segmentation model may be any suitable technique, such as a general NLP word segmentation algorithm or the jieba word segmentation algorithm. The sentence generation model is a deep learning model, and may be obtained by training a GPT2-large model or a BERT model. Because the GPT2-large model trains effectively and is simple, it is usually adopted to train the sentence generation model. Of course, if the character names appearing in the first story paragraph are input by the user, the system need not process the paragraph with the word segmentation model; conversely, if they are obtained through the word segmentation model, the user need not input them.
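The method does not prescribe how the character names and the first story paragraph are combined into a single model input; one plausible scheme, sketched below under the assumption of a `<sep>` separator token and a names-first ordering (both illustrative choices, not mandated by the method), is to prepend the names to the paragraph:

```python
def build_model_input(character_names, first_paragraph, sep="<sep>"):
    """Combine the character names and the first story paragraph into one
    input sequence for the sentence generation model.  The separator token
    and the names-first ordering are illustrative assumptions."""
    return " ".join(character_names) + f" {sep} " + first_paragraph

prompt = build_model_input(["Alice", "Bob"], "Alice met Bob at the harbour.")
# prompt == "Alice Bob <sep> Alice met Bob at the harbour."
```

Conditioning the model on an explicit name list in this way is what raises the probability that those names, rather than novel ones, appear in the generated continuation.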

In order to further improve the logical continuity of the novel, in addition to requiring that the character names appearing in the target second story paragraph are the same as at least some of the character names appearing in the first story paragraph, it should also be required that no character name appearing in the target second story paragraph is a name that has not appeared in the first story paragraph.
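This requirement can be enforced as a post-hoc check over candidate paragraphs. The sketch below assumes the model produces several candidates and that a `detect_names` callable (here a trivial roster lookup, an assumption for illustration) identifies character names in text; a candidate is accepted only if every name it contains already appeared in the first story paragraph.

```python
def select_second_paragraph(candidates, first_names, detect_names):
    """Return the first candidate whose detected character names form a
    non-empty subset of the names in the first story paragraph."""
    first = set(first_names)
    for cand in candidates:
        found = set(detect_names(cand))
        if found and found <= first:
            return cand
    return None

# Usage with a trivial roster-based detector (illustrative assumption):
roster = {"Alice", "Bob", "Carol"}
detect = lambda text: {n for n in roster if n in text}
pick = select_second_paragraph(
    ["Carol arrived late.", "Alice waved to Bob."], {"Alice", "Bob"}, detect)
# pick == "Alice waved to Bob."  ("Carol arrived late." is rejected: Carol
# never appeared in the first story paragraph)
```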
