Method and device for training style conversion model, electronic equipment and storage medium

Document No.: 68862    Publication date: 2021-10-01

Reading note: This technology, "Method and device for training style conversion model, electronic equipment and storage medium", was designed and created by 黄焱晖, 张记袁, 蔡远俊, 彭卫华 and 徐伟建 on 2021-07-13. Its main content is as follows: The disclosure provides a method and an apparatus for training a style conversion model, an electronic device, and a storage medium, relating to the field of artificial intelligence, and in particular to the fields of natural language processing and deep learning. The specific implementation scheme is as follows: obtaining a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style; constructing an initial training model, wherein the training model includes: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer; training the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer; and generating a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model. Sample texts with the same content but different styles are not required for training, which reduces the training cost of the style conversion model.

1. A method for training a style conversion model, comprising:

obtaining a training corpus, wherein the training corpus comprises: sample text in an original style and sample text in a target style;

constructing an initial training model, wherein the training model comprises: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer;

training the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer;

and generating a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

2. The method of claim 1, wherein training the training model comprises:

constructing a loss function according to the sample text of the original style, the predicted text of the original style output by the decoding layer and the classification result output by the classification layer;

and adjusting coefficients of each layer in the training model according to the value of the loss function, so as to perform the training.

3. The method of claim 2, wherein constructing the loss function according to the sample text of the original style, the predicted text of the original style output by the decoding layer, and the classification result output by the classification layer comprises:

constructing a first sub-loss function according to the sample text in the original style and the predicted text in the original style;

constructing a second sub-loss function according to the classification result and a target probability, wherein the classification result represents the probability that the style of an intermediate vector output by the encoding layer is the target style;

and constructing the loss function according to the first sub-loss function and the second sub-loss function.

4. The method of claim 1, wherein training the training model with the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer and the target-style sample text as the input feature of the classification layer comprises:

determining a sample text vector of the original style according to the sample text of the original style;

determining a sample text vector of the target style according to the sample text of the target style;

and training the training model by taking the original-style sample text vector as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text vector as the input feature of the classification layer.

5. The method of claim 4, wherein generating the style conversion model from the original-style text to the target-style text according to the encoding layer in the trained training model comprises:

generating the style conversion model according to the encoding layer in the trained training model and a preset vector prediction layer.

6. The method of claim 1, wherein the contents of the original style sample text and the target style sample text are the same or different.

7. An apparatus for training a style conversion model, comprising:

an obtaining module configured to obtain a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style;

a building module configured to build an initial training model, wherein the training model comprises: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer;

a training module configured to train the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer;

and a generating module configured to generate a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

8. The apparatus of claim 7, wherein the training module is specifically configured to,

constructing a loss function according to the sample text of the original style, the predicted text of the original style output by the decoding layer and the classification result output by the classification layer;

and adjusting coefficients of each layer in the training model according to the value of the loss function, so as to perform the training.

9. The apparatus of claim 8, wherein the training module is specifically configured to,

constructing a first sub-loss function according to the sample text in the original style and the predicted text in the original style;

constructing a second sub-loss function according to the classification result and a target probability, wherein the classification result represents the probability that the style of an intermediate vector output by the encoding layer is the target style;

and constructing the loss function according to the first sub-loss function and the second sub-loss function.

10. The apparatus of claim 7, wherein the training module is specifically configured to,

determining a sample text vector of the original style according to the sample text of the original style;

determining a sample text vector of the target style according to the sample text of the target style;

and training the training model by taking the original-style sample text vector as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text vector as the input feature of the classification layer.

11. The apparatus of claim 10, wherein the generating module is specifically configured to,

generating the style conversion model according to the encoding layer in the trained training model and a preset vector prediction layer.

12. The apparatus of claim 7, wherein the contents of the original style sample text and the target style sample text are the same or different.

13. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.

14. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.

15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.

Technical Field

The present disclosure relates to the field of artificial intelligence technologies, specifically to the fields of natural language processing and deep learning, and more particularly to a method and an apparatus for training a style conversion model, an electronic device, and a storage medium.

Background

At present, a method for training a style conversion model includes: obtaining a large number of sample text pairs, where each sample text pair includes an original-style sample text and a target-style sample text with the same content; and training an initial recurrent neural network model with the large number of sample text pairs to obtain a style conversion model. Such sample text pairs are difficult to obtain, so the training cost of the style conversion model is high.

Disclosure of Invention

The disclosure provides a method and an apparatus for training a style conversion model, an electronic device, and a storage medium.

According to an aspect of the present disclosure, there is provided a method for training a style conversion model, including: obtaining a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style; constructing an initial training model, wherein the training model includes: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer; training the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer; and generating a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

According to another aspect of the present disclosure, there is provided an apparatus for training a style conversion model, including: an obtaining module configured to obtain a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style; a building module configured to build an initial training model, wherein the training model includes: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer; a training module configured to train the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer; and a generating module configured to generate a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

According to still another aspect of the present disclosure, there is provided an electronic device including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of training a style conversion model according to an aspect of the disclosure.

According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method of training a style conversion model set forth in the above-described aspect of the present disclosure.

According to yet another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method for training a style conversion model set forth in the above aspect of the present disclosure.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;

FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;

FIG. 3 is a schematic diagram of a training model;

FIG. 4 is a schematic diagram according to a third embodiment of the present disclosure;

FIG. 5 is a block diagram of an example electronic device that can be used to implement embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

In the related art, a method for training a style conversion model includes: obtaining a large number of sample text pairs, where each sample text pair includes an original-style sample text and a target-style sample text with the same content; and training an initial recurrent neural network model with the large number of sample text pairs to obtain a style conversion model. Such sample text pairs are difficult to obtain, so the training cost of the style conversion model is high.

In order to solve the above problems, the present disclosure provides a method and an apparatus for training a style conversion model, an electronic device, and a storage medium.

FIG. 1 is a schematic diagram of a first embodiment of the present disclosure. It should be noted that the method for training a style conversion model according to this embodiment may be performed by the apparatus for training a style conversion model according to the embodiments of the present disclosure, and the apparatus may be configured in an electronic device, so that the electronic device can perform the function of training the style conversion model.

The electronic device may be any device having computing capability, for example, a personal computer (PC), a mobile terminal, or a server. The mobile terminal may be a hardware device having an operating system, a touch screen, and/or a display screen, such as an in-vehicle device, a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.

As shown in FIG. 1, the method for training the style conversion model may include the following steps:

Step 101: obtaining a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style.

In the embodiment of the present disclosure, the original style and the target style may specifically be emotional styles, which represent different emotions, for example, sad, positive, optimistic, or negative. Text whose emotional style is positive is, for example, "the sound quality of this mobile phone is amazing"; text whose emotional style is negative is, for example, "the battery life of this camera is too short".

In the embodiment of the present disclosure, the contents of the original-style sample text and the target-style sample text may be the same or different. For example, if the original style is positive and the target style is negative, the original-style sample text may be "the sound quality of this mobile phone is amazing", and the target-style sample text may be "the battery life of this camera is too short".

Therefore, the present disclosure does not require that the original-style sample text and the target-style sample text have the same content; their contents may differ. Since text with a given emotional style is easier to obtain, the cost of obtaining the training corpus is reduced, which in turn reduces the training cost of the style conversion model.

Step 102: constructing an initial training model, wherein the training model includes: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer.

In the embodiment of the present disclosure, the output of the encoding layer serves as the input of the classification layer and the input of the decoding layer, respectively. The encoding layer encodes the input content, the decoding layer decodes the output of the encoding layer, and the classification layer classifies the output of the encoding layer. The decoding layer corresponds to the encoding layer and decodes the output of the encoding layer to recover the input of the encoding layer.
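
For illustration, the following is a minimal PyTorch sketch of such a training model, assuming LSTM encoding and decoding layers and a sigmoid classification layer. The class name, dimensions, and the handling of the classifier input are illustrative assumptions, not the reference implementation of this disclosure.

import torch
import torch.nn as nn

class StyleTrainingModel(nn.Module):
    # Hypothetical sketch: an encoding layer whose output feeds both a
    # decoding layer and a classification layer.
    def __init__(self, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Encoding layer: encodes original-style sample text vectors
        # into intermediate vectors.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Decoding layer: decodes intermediate vectors back toward the
        # original-style text vectors (reconstruction).
        self.decoder = nn.LSTM(hidden_dim, embed_dim, batch_first=True)
        # Classification layer (sigmoid variant): probability that the
        # style of a vector is the target style. During training it would
        # also be fed target-style sample vectors, assumed projected to
        # hidden_dim; that path is omitted here for brevity.
        self.classifier = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, original_vecs):
        # original_vecs: (batch, seq_len, embed_dim)
        intermediate, _ = self.encoder(original_vecs)
        predicted_vecs, _ = self.decoder(intermediate)
        style_prob = self.classifier(intermediate[:, -1, :])  # (batch, 1)
        return intermediate, predicted_vecs, style_prob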

Step 103: training the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer.

In the embodiment of the present disclosure, the training apparatus for the style conversion model performs step 103 by, for example: determining an original-style sample text vector according to the original-style sample text; determining a target-style sample text vector according to the target-style sample text; and training the training model by taking the original-style sample text vector as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text vector as the input feature of the classification layer.

Thus, in one example, the input of the encoding layer may be the original-style sample text, the input of the classification layer may include the target-style sample text, and the output of the decoding layer may be the original-style predicted text. Correspondingly, the encoding layer performs vector conversion and encoding on the original-style sample text to obtain an intermediate vector; the decoding layer decodes the intermediate vector and performs vector conversion to obtain the original-style predicted text; and the classification layer performs style classification based on the intermediate vector and the target-style sample text, determining the probability that the style of the text corresponding to the intermediate vector is the target style. The training model is trained by combining the output of the classification layer and the output of the decoding layer.

In another example, the input of the encoding layer may be the original-style sample text vector, the input of the classification layer may include the target-style sample text vector, and the output of the decoding layer may be the original-style predicted text vector. Correspondingly, the encoding layer encodes the original-style sample text vector to obtain an intermediate vector; the decoding layer decodes the intermediate vector to obtain the original-style predicted text vector; and the classification layer performs style classification based on the intermediate vector and the target-style sample text vector, determining the probability that the style of the text corresponding to the intermediate vector is the target style. The training model is trained by combining the output of the classification layer and the output of the decoding layer.

In the embodiment of the present disclosure, the original-style sample text vector may be a vector obtained by inputting the original-style sample text into a word vector model, and the target-style sample text vector may be a vector obtained by inputting the target-style sample text into the word vector model. The word vector model may be, for example, a word2vec model.
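
As an illustration only, a word2vec model could be trained and applied with the gensim library as sketched below; the toy corpus and the tokenization are assumptions, not part of this disclosure.

from gensim.models import Word2Vec
import numpy as np

# Toy tokenized corpus; a real corpus and tokenizer are assumed.
corpus = [
    ["the", "sound", "quality", "of", "this", "phone", "is", "amazing"],
    ["the", "battery", "life", "of", "this", "camera", "is", "too", "short"],
]
w2v = Word2Vec(sentences=corpus, vector_size=128, min_count=1, epochs=10)

def text_to_vectors(tokens, model):
    # Stack per-token word vectors into a (seq_len, vector_size) array;
    # out-of-vocabulary tokens are simply skipped in this sketch.
    return np.stack([model.wv[t] for t in tokens if t in model.wv])

sample_text_vector = text_to_vectors(corpus[0], w2v)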

Step 104: generating a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

In one example of the embodiment of the present disclosure, the input of the encoding layer may be the original-style sample text, the input of the classification layer may include the target-style sample text, and the output of the decoding layer may be the original-style predicted text. Correspondingly, the training apparatus for the style conversion model may generate the style conversion model according to the encoding layer in the trained training model and a preset vector prediction layer. That is, the style conversion model includes: the encoding layer and a vector prediction layer connected to the encoding layer, where the vector prediction layer performs vector conversion on the intermediate vector output by the encoding layer to obtain the target-style predicted text.
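
Continuing the sketch above, the trained encoding layer could be combined with a preset vector prediction layer as follows; the linear projection used as the vector prediction layer here is a hypothetical choice.

import torch.nn as nn

class StyleConversionModel(nn.Module):
    # Hypothetical assembly: trained encoding layer plus a preset vector
    # prediction layer mapping intermediate vectors back to word-vector
    # space as target-style predicted text vectors.
    def __init__(self, trained_encoder, hidden_dim=256, embed_dim=128):
        super().__init__()
        self.encoder = trained_encoder                 # reused from training
        self.vector_predictor = nn.Linear(hidden_dim, embed_dim)

    def forward(self, original_vecs):
        intermediate, _ = self.encoder(original_vecs)
        return self.vector_predictor(intermediate)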

In another example, the input of the encoding layer may be the original-style sample text vector, the input of the classification layer may include the target-style sample text vector, and the output of the decoding layer may be the original-style predicted text vector. Correspondingly, the training apparatus for the style conversion model may directly use the encoding layer in the trained training model as the style conversion model from original-style text to target-style text.

According to the method for training a style conversion model of the embodiment of the present disclosure, a training corpus is obtained, where the training corpus includes sample text in an original style and sample text in a target style; an initial training model is constructed, where the training model includes an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer; the training model is trained by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer; and a style conversion model from original-style text to target-style text is generated according to the encoding layer in the trained training model. In this way, the initial training model can be trained on sample texts with different contents and different styles to obtain a trained style conversion model, without obtaining sample texts having the same content but different styles, which reduces the training cost of the style conversion model.

In order to further improve the accuracy of the style conversion model, as shown in FIG. 2, which is a schematic diagram according to a second embodiment of the present disclosure, a loss function may be constructed according to the original-style sample text, the original-style predicted text output by the decoding layer, and the classification result output by the classification layer, so as to adjust the coefficients of the training model. The embodiment shown in FIG. 2 may include the following steps:

Step 201: obtaining a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style.

Step 202: constructing an initial training model, wherein the training model includes: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer.

Step 203: constructing a first sub-loss function according to the original-style sample text and the original-style predicted text.

In the embodiment of the present disclosure, the first sub-loss function may be, for example, the negative of the cosine similarity between the original-style sample text vector and the original-style predicted text vector. The more similar two vectors are, the larger their cosine similarity; therefore, the negative of the cosine similarity is used as the first sub-loss function, so that the first sub-loss function becomes smaller as the sample text vector and the predicted text vector become more similar.
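
A minimal sketch of this first sub-loss in PyTorch, under the assumption that both texts are represented as vectors of the same shape:

import torch
import torch.nn.functional as F

def first_sub_loss(sample_vec: torch.Tensor, predicted_vec: torch.Tensor) -> torch.Tensor:
    # Cosine similarity lies in [-1, 1]; negating it makes "more similar"
    # mean "lower loss", as described above.
    return -F.cosine_similarity(sample_vec, predicted_vec, dim=-1).mean()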

Step 204: constructing a second sub-loss function according to the classification result and a target probability, wherein the classification result represents the probability that the style of the intermediate vector output by the encoding layer is the target style.

In the embodiment of the present disclosure, the second sub-loss function may be, for example, the difference between the classification result and the target probability, or the absolute value of this difference. For example, taking the classification layer as a sigmoid layer, the output of the sigmoid layer is the probability that the style of the text corresponding to the intermediate vector is the target style; that is, the more similar that style is to the target style, the higher the probability. When the style of the text corresponding to the intermediate vector is exactly the target style, the probability is 1. Therefore, when the classification layer is a sigmoid layer, the target probability may be 1.

For another example, taking the classification layer as a softmax layer, the output of the softmax layer is the probability that the style of the text corresponding to the intermediate vector is the target style, together with the probability that the style of the text corresponding to the input text vector is the target style. When the style of the text corresponding to the intermediate vector is the target style, the softmax layer can hardly distinguish the intermediate vector from the input text vector in style, so the two probabilities are each 0.5. Here, the text corresponding to the input text vector is the target-style sample text. Therefore, when the classification layer is a softmax layer, the target probability may be 0.5.
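
As an illustrative sketch, the second sub-loss in its absolute-difference form could be written as follows, with the target probability set to 1.0 for the sigmoid variant or 0.5 for the softmax variant:

import torch

def second_sub_loss(classification_result: torch.Tensor, target_prob: float) -> torch.Tensor:
    # Absolute difference between the predicted probability and the target
    # probability (1.0 for the sigmoid variant, 0.5 for the softmax variant).
    return (classification_result - target_prob).abs().mean()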

Step 205: constructing the loss function according to the first sub-loss function and the second sub-loss function.

In the embodiment of the present disclosure, the training apparatus for the style conversion model may determine a weight for the first sub-loss function and a weight for the second sub-loss function, and perform a weighted summation of the two sub-loss functions according to these weights to obtain the loss function.
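
In code, this weighted summation amounts to a one-liner; the weights w1 and w2 are hyperparameters assumed here for illustration:

def total_loss(loss1, loss2, w1=1.0, w2=1.0):
    # Weighted sum of the first and second sub-loss functions.
    return w1 * loss1 + w2 * loss2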

Step 206: adjusting coefficients of each layer in the training model according to the value of the loss function, so as to perform the training.

Step 207: generating a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

It should be noted that steps 201, 202, and 207 may be implemented in any of the ways described in the embodiments of the present disclosure; this is not limited by the embodiments of the present disclosure and is not described again.

According to the method for training a style conversion model of the embodiment of the present disclosure, a training corpus is obtained, where the training corpus includes sample text in an original style and sample text in a target style; an initial training model is constructed, where the training model includes an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer; a first sub-loss function is constructed according to the original-style sample text and the original-style predicted text; a second sub-loss function is constructed according to the classification result and the target probability, where the classification result represents the probability that the style of the intermediate vector output by the encoding layer is the target style; the loss function is constructed according to the first sub-loss function and the second sub-loss function; coefficients of each layer in the training model are adjusted according to the value of the loss function to perform the training; and a style conversion model from original-style text to target-style text is generated according to the encoding layer in the trained training model. In this way, the initial training model can be trained on sample texts with different contents and different styles to obtain a trained style conversion model, without obtaining sample texts having the same content but different styles, which reduces the training cost of the style conversion model.

In order that those skilled in the art may more clearly understand the present disclosure, an example is now described.

FIG. 3 is a schematic diagram of the training model. As shown in FIG. 3, the training model may include: an encoding layer, a decoding layer, and a classification layer. The input vector of the encoding layer is the original-style sample text vector, and the output vector of the encoding layer is the intermediate vector; after the intermediate vector is input into the decoding layer, the output vector of the decoding layer is the original-style predicted text vector. The self-supervised loss in FIG. 3 is the negative of the cosine similarity between the original-style sample text vector and the original-style predicted text vector; the classification loss is the difference, or the absolute value of the difference, between the classification result and the target probability, where the classification result represents the probability that the style of the intermediate vector output by the encoding layer is the target style. The training model is trained by combining the self-supervised loss and the classification loss, so that in the trained training model the probability that the style of the text corresponding to the intermediate vector output by the encoding layer is the target style reaches the target probability.

In the embodiment of the present disclosure, the encoding layer and the decoding layer in FIG. 3 may each be, for example, a recurrent neural network layer, such as a long short-term memory (LSTM) network.
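
Putting the pieces together, a single training step might look like the following sketch, reusing the hypothetical StyleTrainingModel, first_sub_loss, second_sub_loss, and total_loss defined above; the batch shape and optimizer settings are likewise assumptions.

import torch

model = StyleTrainingModel(embed_dim=128, hidden_dim=256)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(original_vecs, target_prob=1.0):
    # original_vecs: (batch, seq_len, embed_dim) original-style text vectors.
    intermediate, predicted_vecs, style_prob = model(original_vecs)
    self_supervised = first_sub_loss(original_vecs, predicted_vecs)  # reconstruction
    classification = second_sub_loss(style_prob, target_prob)        # style
    loss = total_loss(self_supervised, classification, w1=1.0, w2=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()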

In order to implement the above embodiments, the embodiments of the present disclosure further provide an apparatus for training a style conversion model.

FIG. 4 is a schematic diagram of a third embodiment of the present disclosure. As shown in FIG. 4, the apparatus 400 for training a style conversion model includes: an obtaining module 410, a building module 420, a training module 430, and a generating module 440.

The obtaining module 410 is configured to obtain a training corpus, wherein the training corpus includes: sample text in an original style and sample text in a target style.

The building module 420 is configured to build an initial training model, wherein the training model includes: an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer.

The training module 430 is configured to train the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer.

The generating module 440 is configured to generate a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model.

As one possible implementation of the embodiment of the present disclosure, the training module 430 is specifically configured to,

constructing a loss function according to the sample text of the original style, the predicted text of the original style output by the decoding layer and the classification result output by the classification layer;

and adjusting coefficients of each layer in the training model according to the value of the loss function, so as to perform the training.

As one possible implementation of the embodiment of the present disclosure, the training module 430 is specifically configured to,

constructing a first sub-loss function according to the sample text in the original style and the predicted text in the original style;

constructing a second sub-loss function according to the classification result and the target probability, wherein the classification result represents the probability that the style of the intermediate vector output by the encoding layer is the target style;

and constructing a loss function according to the first sub-loss function and the second sub-loss function.

As one possible implementation of the embodiment of the present disclosure, the training module 430 is specifically configured to,

determining a sample text vector of the original style according to the sample text of the original style;

determining a sample text vector of a target style according to the sample text of the target style;

and training the training model by taking the original-style sample text vector as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text vector as the input feature of the classification layer.

As one possible implementation of the embodiment of the present disclosure, the generating module 440 is specifically configured to,

generating the style conversion model according to the encoding layer in the trained training model and a preset vector prediction layer.

As a possible implementation manner of the embodiment of the present disclosure, the contents of the original style sample text and the target style sample text are the same or different.

The apparatus for training a style conversion model of the embodiment of the present disclosure obtains a training corpus, where the training corpus includes sample text in an original style and sample text in a target style; constructs an initial training model, where the training model includes an encoding layer, and a classification layer and a decoding layer each connected to the encoding layer; trains the training model by taking the original-style sample text as the input feature of the encoding layer and the output feature of the decoding layer, and taking the target-style sample text as the input feature of the classification layer; and generates a style conversion model from original-style text to target-style text according to the encoding layer in the trained training model. In this way, the initial training model can be trained on sample texts with different contents and different styles to obtain a trained style conversion model, without obtaining sample texts having the same content but different styles, which reduces the training cost of the style conversion model.

In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved comply with the provisions of relevant laws and regulations and do not violate public order and good morals.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

The present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements a method of training a style conversion model as described above.

FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in FIG. 5, the device 500 includes a computing unit 501, which may perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 may also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the methods and processes described above, such as the method for training a style conversion model. For example, in some embodiments, the method for training a style conversion model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the method for training a style conversion model described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured in any other suitable manner (for example, by means of firmware) to perform the method for training a style conversion model.

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
