Model training method and device, electronic equipment and storage medium

Document No.: 830118 | Publication date: 2021-03-30

Note: This technology, "Model training method and device, electronic equipment and storage medium" (模型的训练方法、装置、电子设备及存储介质), was created by Ding Siyu (丁思宇), Wang Shuohuan (王硕寰), and Sun Yu (孙宇) on 2020-12-18. Its main content is as follows: The application discloses a model training method and apparatus, an electronic device, and a storage medium, relating to the field of computer technology, and in particular to artificial intelligence fields such as natural language processing and deep learning. The specific implementation scheme is: obtaining a sample text; performing entity recognition on the sample text to generate a plurality of entities; masking a first number of entities among the plurality of entities and shuffling a second number of entities among the plurality of entities to generate an enhanced sample; and training the model according to the enhanced sample. The training method of the embodiments of the application introduces almost no extra overhead in constructing the enhanced sample, while enhancing the robustness of the model.

1. A method of training a model, comprising:

obtaining a sample text;

performing entity recognition on the sample text to generate a plurality of entities;

masking a first number of entities among the plurality of entities and shuffling a second number of entities among the plurality of entities to generate an enhanced sample; and

training the model according to the enhanced sample.

2. The method of training a model of claim 1, wherein said masking a first number of entities from among said plurality of entities comprises:

randomly selecting the first number of entities from among the plurality of entities;

masking the first number of entities, or masking words within the first number of entities.

3. The method of training a model of claim 1, wherein said shuffling a second number of entities among said plurality of entities comprises:

selecting non-duplicated entities from among the plurality of entities, wherein the non-duplicated entities are different from the first number of entities;

randomly selecting the second number of entities from among the non-duplicated entities and shuffling words within the second number of entities.

4. The method of training a model of claim 1, wherein the second number is less than the first number.

5. The method of training a model of claim 1, wherein the model is a pre-trained model.

6. An apparatus for training a model, comprising:

an obtaining module, configured to obtain a sample text;

a first generation module, configured to perform entity recognition on the sample text to generate a plurality of entities;

a second generation module, configured to mask a first number of entities among the plurality of entities and shuffle a second number of entities among the plurality of entities to generate an enhanced sample; and

a training module, configured to train the model according to the enhanced sample.

7. The model training apparatus of claim 6, wherein the second generation module is specifically configured to:

randomly select the first number of entities from among the plurality of entities;

mask the first number of entities, or mask words within the first number of entities.

8. The model training apparatus of claim 6, wherein the second generation module is specifically configured to:

select non-duplicated entities from among the plurality of entities, wherein the non-duplicated entities are different from the first number of entities;

randomly select the second number of entities from among the non-duplicated entities and shuffle words within the second number of entities.

9. The model training apparatus of claim 6, wherein the second number is smaller than the first number.

10. The model training apparatus of claim 6, wherein the model is a pre-trained model.

11. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of training a model according to any one of claims 1 to 5.

12. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of training a model according to any one of claims 1 to 5.

13. A computer program product comprising a computer program which, when executed by a processor, implements the method of training a model according to any one of claims 1 to 5.

Technical Field

The application relates to the field of computer technology, in particular to artificial intelligence fields such as natural language processing and deep learning, and specifically to a model training method and apparatus, an electronic device, and a storage medium.

Background

In recent years, pre-trained models represented by BERT (Bidirectional Encoder Representations from Transformers) have established the "pre-training + fine-tuning" paradigm and greatly improved the results of various NLP (Natural Language Processing) tasks. Such models adopt deep network structures, use massive amounts of unsupervised text to learn context-dependent representations, and solve a wide range of natural language processing tasks (such as text matching, text generation, sentiment classification, text summarization, question answering, and retrieval) in a universal, unified way.

However, current mainstream semantic representation models are pre-trained by constructing pre-training tasks over massive raw texts.

Disclosure of Invention

The application provides a model training method and device, electronic equipment and a storage medium.

According to an aspect of the present application, there is provided a training method of a model, including:

obtaining a sample text;

performing entity recognition on the sample text to generate a plurality of entities;

masking a first number of entities among the plurality of entities and shuffling a second number of entities among the plurality of entities to generate an enhanced sample; and

training the model according to the enhanced sample.

According to another aspect of the present application, there is provided a training apparatus for a model, including:

an obtaining module, configured to obtain a sample text;

a first generation module, configured to perform entity recognition on the sample text to generate a plurality of entities;

a second generation module, configured to mask a first number of entities among the plurality of entities and shuffle a second number of entities among the plurality of entities to generate an enhanced sample; and

a training module, configured to train the model according to the enhanced sample.

According to another aspect of the present application, there is provided an electronic device including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of training a model according to an embodiment of the above-described aspect.

According to another aspect of the present application, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of training a model according to an embodiment of the above aspect.

According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method of training a model according to an embodiment of the above-described aspect.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:

FIG. 1 is a schematic flowchart of a model training method according to an embodiment of the present application;

FIG. 2 is a schematic diagram of a pre-training sample construction process according to an embodiment of the present application;

FIG. 3 is a schematic flowchart of another model training method according to an embodiment of the present application;

FIG. 4 is a schematic flowchart of another model training method according to an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application; and

FIG. 6 is a block diagram of an electronic device for a model training method according to an embodiment of the present application.

Detailed Description

The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details should be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.

A method, an apparatus, an electronic device, and a storage medium for training a model according to an embodiment of the present application are described below with reference to the drawings.

Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning); it involves technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies include computer vision, speech recognition, natural language processing, deep learning, big data processing, and knowledge graph technologies.

Natural language processing is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science, and mathematics.

Deep learning is a relatively new research direction in the field of machine learning. It learns the intrinsic regularities and representation levels of sample data, and the information obtained in this process greatly helps the interpretation of data such as text, images, and sounds. Its ultimate goal is to give machines human-like abilities of analysis and learning, enabling them to recognize data such as text, images, and sounds. Deep learning is a complex family of machine learning algorithms whose results in speech and image recognition far exceed those of earlier related techniques.

The model training method provided in the embodiments of the present application may be executed by an electronic device, which may be a personal computer (PC), a tablet computer, a palmtop computer, or the like; no limitation is made here.

In the embodiments of the present application, the electronic device may be provided with a processing component, a storage component, and a driving component. Optionally, the driving component and the processing component may be integrated. The storage component may store an operating system, application programs, or other program modules, and the processing component implements the model training method provided in the embodiments of the present application by executing the application programs stored in the storage component.

Fig. 1 is a schematic flowchart of a model training method according to an embodiment of the present disclosure.

The training method of the embodiments of the present application may also be executed by the model training apparatus provided in the embodiments of the present application. The apparatus may be configured in an electronic device to perform entity recognition on an obtained sample text to generate a plurality of entities, mask a first number of the entities, and shuffle a second number of the entities to generate an enhanced sample, and then train the model according to the enhanced sample. In this way, almost no extra overhead is introduced in constructing the enhanced sample, while the robustness of the model is enhanced.

As a possible case, the training method of the model in the embodiments of the present application may also be executed on a server; the server may be a cloud server, so that the training method is executed in the cloud.

As shown in fig. 1, the training method of the model may include:

Step 101: obtain a sample text. It should be noted that the sample text described in this embodiment may be a Chinese text, such as a sentence, a paragraph, or a chapter (for example, a news article), and there may be a plurality of sample texts.

In the embodiments of the present application, there are multiple ways to obtain the sample text. For example, text information produced by a relevant person through speech recognition may be obtained, or content entered by a user through an input method system may be obtained; the input method system may convert the entered content into candidate words according to the user's current input mode for the user to choose from. The user may enter text through various input means, such as a keyboard, a touch pad, or a mouse, and may select any input mode, such as pinyin, wubi, stroke, handwriting, English, or a keypad; no limitation is made here.

As a possible case, the sample text may also be obtained by copy and paste, or transcripts from videos, books, audio, and the like may be collected by software and turned into sample texts.

Step 102: perform entity recognition on the sample text to generate a plurality of entities.

It should be noted that the entities described in this embodiment may be words, idioms, proverbs, lines of poetry, common collocations, and other expressions in the sample text whose characters frequently appear together, for example "not necessarily", "in spring", "at the crossroads", or the poem line "hoeing the field at noon".

Further, if the content of the sample text is "Research shows that the order of Chinese characters does not necessarily affect reading", entities such as "Chinese characters", "order", "not necessarily", and "affect reading" can be identified from the sample text.

In an embodiment of the application, entity recognition may be performed on the sample text by an entity recognition model to generate a plurality of entities.

Specifically, after acquiring the sample text, the electronic device may input the sample text to an entity recognition model, so that entity recognition is performed on the sample text through the entity recognition model to output a plurality of entities.

It should be noted that the entity recognition model described in this embodiment may be trained in advance and pre-stored in a storage space of the electronic device for convenient retrieval and use. The storage space is not limited to a physical storage space, such as a hard disk; it may also be the storage space of a network drive (cloud storage space) connected to the electronic device.

The entity recognition model may be trained and generated by a relevant server. The server may be a cloud server or a computer host, and a communication connection is established between the server and the electronic device that executes the model training method provided in the embodiments of the present application; the communication connection may be at least one of a wireless network connection and a wired network connection. The server can send the trained entity recognition model to the electronic device so that the electronic device can invoke it when needed, thereby greatly reducing the computational load on the electronic device.

In other embodiments of the present application, entity recognition may also be performed on the sample text based on a preset entity recognition algorithm to generate the plurality of entities, where the preset entity recognition algorithm may be calibrated according to the actual situation.
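As a possible illustration of this step, the following Python sketch uses the open-source jieba segmenter as a stand-in for the entity recognition model or algorithm; the function name extract_entities and the rule of keeping only multi-character tokens are assumptions of the sketch, not part of this application.

    import jieba  # open-source Chinese segmenter, standing in for the entity recognition model

    def extract_entities(sample_text):
        """Split the sample text into candidate entities (multi-character tokens)."""
        tokens = jieba.lcut(sample_text)
        # Treat multi-character tokens as candidate entities; single characters
        # and punctuation are left out of the candidate pool.
        return [tok for tok in tokens if len(tok) >= 2]

    # The example sentence used throughout this application:
    # "Research shows that the order of Chinese characters does not necessarily affect reading."
    entities = extract_entities("研究表明,汉字的顺序并不一定能影响阅读")
    print(entities)  # e.g. ['研究', '表明', '汉字', '顺序', '不一定', '影响', '阅读']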

Step 103: mask a first number of entities among the plurality of entities and shuffle a second number of entities among the plurality of entities to generate an enhanced sample. The first number and the second number may be calibrated according to the actual situation.

In the embodiments of the present application, the shuffling mentioned above may refer to simply exchanging the position order of the characters within some Chinese entities.

It should be noted that the human eye takes in 2-3 Chinese characters at a time, and due to the principle of inattentional blindness, a person guided by a specific task may ignore changes in some objects. For example, when reading a version of the sentence "Research shows that the order of Chinese characters does not necessarily affect reading" in which the characters within some of its words have been swapped, the scrambling does not affect one's semantic understanding of the sentence. That is, by simply exchanging the position order of the characters within some Chinese entities, a text with a certain amount of noise can be generated without affecting the semantics the text is meant to express.

Specifically, after obtaining the plurality of entities, the electronic device may mask a first number of entities among the plurality of entities and shuffle a second number of entities among the plurality of entities to generate an enhanced sample. For example, referring to fig. 2, the original text (i.e., the sample text) is "Research shows that the order of Chinese characters does not necessarily affect reading". After a first number of entities are masked, the text becomes "Research shows that the order of [Mask][Mask] does not necessarily affect reading". Then, on the basis of the masked text, a second number of entities are shuffled by swapping the character order within them (for example, within "not necessarily" and "affect reading"), which yields the enhanced sample, as sketched below.
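The sketch assumes character-level [Mask] placeholders and (start, end) entity spans over the text; the helper name build_enhanced_sample and the concrete span indices are illustrative assumptions, not part of this application.

    import random

    MASK = "[Mask]"

    def build_enhanced_sample(text, entity_spans, first_number, second_number):
        """Mask `first_number` entities, then shuffle the character order inside
        `second_number` of the remaining, non-overlapping entities."""
        masked = random.sample(entity_spans, first_number)
        remaining = [s for s in entity_spans if s not in masked]
        shuffled = random.sample(remaining, second_number)

        chars = list(text)
        for start, end in masked:
            chars[start:end] = [MASK] * (end - start)  # e.g. 汉字 -> [Mask][Mask]
        for start, end in shuffled:
            segment = chars[start:end]
            random.shuffle(segment)  # e.g. 不一定 -> 不定一
            chars[start:end] = segment
        return "".join(chars)

    text = "研究表明,汉字的顺序并不一定能影响阅读"
    spans = [(5, 7), (8, 10), (11, 14), (15, 17), (17, 19)]  # 汉字 顺序 不一定 影响 阅读
    print(build_enhanced_sample(text, spans, first_number=1, second_number=2))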

Step 104: train the model according to the enhanced sample. The model may be a pre-trained model.

In the embodiments of the present application, the pre-trained model may be trained according to both the sample text and the enhanced sample, which increases the diversity of the training samples and enhances the robustness of the pre-trained model.

In the embodiments of the present application, a sample text is obtained, entity recognition is performed on the sample text to generate a plurality of entities, a first number of the entities are then masked and a second number of the entities are shuffled to generate an enhanced sample, and finally the model is trained according to the enhanced sample. Thereby, almost no extra overhead is introduced in constructing the enhanced sample, while the robustness of the model is enhanced.
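This application does not fix a concrete training loop. Purely as a hedged sketch, the snippet below feeds one enhanced sample into a standard masked-language-model objective using the Hugging Face transformers library; the model name bert-base-chinese, the [MASK] spelling, and the use of the original text as labels are assumptions of the sketch, not the claimed method.

    import torch
    from transformers import BertTokenizerFast, BertForMaskedLM

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    model = BertForMaskedLM.from_pretrained("bert-base-chinese")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    original = "研究表明,汉字的顺序并不一定能影响阅读"          # sample text (labels)
    enhanced = "研究表明,[MASK][MASK]的顺序并不定一能影阅响读"  # assumed enhanced sample (input)

    # bert-base-chinese tokenizes Chinese per character and recognizes [MASK]
    # as a special token, so input and label sequences have the same length here.
    inputs = tokenizer(enhanced, return_tensors="pt")
    labels = tokenizer(original, return_tensors="pt")["input_ids"]

    outputs = model(**inputs, labels=labels)  # cross-entropy over all token positions
    outputs.loss.backward()
    optimizer.step()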

To illustrate the above embodiment, in an embodiment of the present application, as shown in fig. 3, masking a first number of entities among a plurality of entities may include:

Step 301: randomly select the first number of entities from among the plurality of entities.

Specifically, after obtaining the plurality of entities, the electronic device may randomly select a first number of entities from among them. For example, if the sample text is "Research shows that the order of Chinese characters does not necessarily affect reading", the randomly selected entities may be "Chinese characters", "not necessarily", and so on.

Step 302: mask the first number of entities, or mask words within the first number of entities.

Specifically, after the electronic device randomly selects a first number of entities from among the plurality of entities, it may mask those entities as a whole. For example, if the sample text is "Research shows that the order of Chinese characters does not necessarily affect reading" and the randomly selected entity is "Chinese characters", then "Chinese characters" may be masked, and the masked text reads "Research shows that the order of [Mask] does not necessarily affect reading". Alternatively, words within the first number of entities may be masked: with the same sample text and the selected entity "Chinese characters", the word "characters" may be masked, and the masked text reads "Research shows that the order of Chinese [Mask] does not necessarily affect reading". This increases the diversity of the training samples while introducing little extra overhead in constructing the enhanced samples.
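The two granularities just described (masking a whole entity versus masking a single word inside it) can be expressed as one small helper; the 50/50 choice between the two modes is an assumption for illustration.

    import random

    def mask_entity(chars, start, end, mask_token="[Mask]"):
        """Mask the entity spanning chars[start:end], either whole or one word inside it."""
        if random.random() < 0.5:
            # Entity-level mask, e.g. 汉字 -> [Mask][Mask]
            chars[start:end] = [mask_token] * (end - start)
        else:
            # Word-level mask inside the entity, e.g. 汉字 -> 汉[Mask]
            pos = random.randrange(start, end)
            chars[pos] = mask_token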

As a possible case, after the sample text is obtained, a certain number of words in the sample text may also be randomly selected for masking.

Further, in an embodiment of the present application, as shown in fig. 4, shuffling the second number of entities among the plurality of entities may include:

step 401, selecting a non-repeated entity from a plurality of entities, wherein the non-repeated entity is different from the first number of entities, for example, the sample text is "study indicates that the order of Chinese characters does not necessarily affect reading", and if the first number of entities is "Chinese characters", the non-repeated entity may not include "Chinese characters", that is, the mask has not been used.

Step 402: randomly select the second number of entities from among the non-duplicated entities, and shuffle the words within the second number of entities.

Specifically, after masking the first number of entities, the electronic device may select entities different from the first number of entities, randomly select a second number of entities from among them, and shuffle the words within the second number of entities. For example, if the masked text is "Research shows that the order of [Mask] does not necessarily affect reading", the selected second number of entities may be "not necessarily" and "affect reading", and the character order within those two entities is then swapped to produce the shuffled text. Therefore, because the order within some entities of the same sentence is shuffled differently each time, a plurality of different training texts (i.e., sample texts) can be generated during training, which increases the diversity of the training samples and enhances the robustness of the model, while introducing no additional pre-training task and not increasing the pre-training cost of the model.
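Factored out as a standalone helper (the names are assumptions), steps 401-402 might look like the following.

    import random

    def shuffle_entities(chars, all_spans, masked_spans, second_number):
        """Shuffle characters inside entities that were not masked."""
        # Step 401: keep only non-duplicated entities, i.e. those not masked.
        candidates = [s for s in all_spans if s not in masked_spans]
        # Step 402: randomly pick `second_number` of them and shuffle their characters.
        for start, end in random.sample(candidates, second_number):
            segment = chars[start:end]
            random.shuffle(segment)  # e.g. 不一定 -> 不定一
            chars[start:end] = segment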

In one embodiment of the present application, the second number is smaller than the first number.

It should be noted that the length of a sample text is limited, so the number of words or entities corrupted by masking and shuffling is also limited (for example, about 15% of the entity vocabulary). Because the number of masked words or entities must be guaranteed first (for which there are established standards in the industry) before shuffling is performed, the shuffling proportion needs to be small. In this way the original semantics and context of the sample text are not damaged too much, which is beneficial for training the model.
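For instance, a corruption budget consistent with the proportions above could be split as follows; the 15% total and the one-quarter shuffle share are assumptions for illustration, not fixed by this application.

    def corruption_budget(num_entities, total_ratio=0.15, shuffle_share=0.25):
        """Split a corruption budget between masking and shuffling so that
        second_number stays below first_number."""
        budget = max(1, round(num_entities * total_ratio))
        second_number = int(budget * shuffle_share)   # the smaller share: shuffling
        first_number = budget - second_number          # the larger share: masking
        return first_number, second_number

    print(corruption_budget(40))  # -> (5, 1): mask 5 entities, shuffle 1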

Fig. 5 is a schematic structural diagram of a training apparatus for a model according to an embodiment of the present application.

The model training apparatus of the embodiments of the present application may be configured in an electronic device to perform entity recognition on an obtained sample text to generate a plurality of entities, mask a first number of the entities, and shuffle a second number of the entities to generate an enhanced sample, and then train the model according to the enhanced sample. In this way, almost no extra overhead is introduced in constructing the enhanced sample, while the robustness of the model is enhanced.

As shown in fig. 5, the model training apparatus 500 may include: an obtaining module 510, a first generation module 520, a second generation module 530, and a training module 540.

The obtaining module 510 is configured to obtain a sample text. It should be noted that the sample text described in this embodiment may be a Chinese text, such as a sentence, a paragraph, or a chapter (for example, a news article), and there may be a plurality of sample texts.

In this embodiment, there are multiple ways for the obtaining module 510 to obtain the sample text. For example, it may obtain text information produced by a relevant person through speech recognition, or content entered by a user through an input method system; the input method system may convert the entered content into candidate words according to the user's current input mode for the user to choose from. The user may enter text through various input means, such as a keyboard, a touch pad, or a mouse, and may select any input mode, such as pinyin, wubi, stroke, handwriting, English, or a keypad; no limitation is made here.

As a possible case, the obtaining module 510 may also obtain the sample text by copy and paste, or collect transcripts from videos, books, audio, and the like by software and turn them into sample texts.

The first generation module 520 is configured to perform entity recognition on the sample text to generate a plurality of entities.

It should be noted that the entities described in this embodiment may be words, idioms, proverbs, lines of poetry, common collocations, and other expressions in the sample text whose characters frequently appear together, for example "not necessarily", "in spring", "at the crossroads", or the poem line "hoeing the field at noon".

Further, assuming the content of the sample text is "Research shows that the order of Chinese characters does not necessarily affect reading", the first generation module 520 may identify entities such as "Chinese characters", "order", "not necessarily", and "affect reading" from the sample text.

In an embodiment of the present application, the first generation module 520 may perform entity recognition on the sample text through an entity recognition model to generate a plurality of entities.

Specifically, after the obtaining module 510 obtains the sample text, the first generation module 520 may input the sample text to the entity recognition model, so as to perform entity recognition on the sample text through the entity recognition model and output a plurality of entities.

It should be noted that the entity recognition model described in this embodiment may be trained in advance and pre-stored in a storage space of the electronic device for the first generation module 520 to invoke when needed. The storage space is not limited to a physical storage space, such as a hard disk; it may also be the storage space of a network drive (cloud storage space) connected to the electronic device.

The entity recognition model may be trained and generated by a relevant server. The server may be a cloud server or a computer host, and a communication connection is established between the server and the electronic device that executes the model training method provided in the embodiments of the present application; the communication connection may be at least one of a wireless network connection and a wired network connection. The server can send the trained entity recognition model to the electronic device so that the electronic device can invoke it when needed, thereby greatly reducing the computational load on the electronic device.

In other embodiments of the present application, the first generation module 520 may also perform entity recognition on the sample text based on a preset entity recognition algorithm to generate the plurality of entities, where the preset entity recognition algorithm may be calibrated according to the actual situation.

The second generation module 530 is configured to mask a first number of entities among the plurality of entities and shuffle a second number of entities among the plurality of entities to generate the enhanced sample. The first number and the second number may be calibrated according to the actual situation.

In the embodiments of the present application, the shuffling mentioned above may refer to simply exchanging the position order of the characters within some Chinese entities.

It should be noted that the human eye takes in 2-3 Chinese characters at a time, and due to the principle of inattentional blindness, a person guided by a specific task may ignore changes in some objects. For example, when reading a version of the sentence "Research shows that the order of Chinese characters does not necessarily affect reading" in which the characters within some of its words have been swapped, the scrambling does not affect one's semantic understanding of the sentence. That is, by simply exchanging the position order of the characters within some Chinese entities, a text with a certain amount of noise can be generated without affecting the semantics the text is meant to express.

Specifically, after the first generation module 520 obtains the plurality of entities, the second generation module 530 may mask a first number of entities among the plurality of entities and shuffle a second number of entities among the plurality of entities to generate the enhanced sample. For example, referring to fig. 2, the original text (i.e., the sample text) is "Research shows that the order of Chinese characters does not necessarily affect reading". After a first number of entities are masked, the text becomes "Research shows that the order of [Mask][Mask] does not necessarily affect reading". Then, on the basis of the masked text, a second number of entities are shuffled by swapping the character order within them (for example, within "not necessarily" and "affect reading"), which yields the enhanced sample.

The training module 540 is configured to train the model according to the enhanced sample.

In the embodiments of the present application, the training module 540 may also train the model according to both the sample text and the enhanced sample, which increases the diversity of the training samples and enhances the robustness of the model.

In the embodiments of the present application, a sample text is obtained through the obtaining module, entity recognition is performed on the sample text through the first generation module to generate a plurality of entities, a first number of the entities are then masked and a second number of the entities are shuffled through the second generation module to generate an enhanced sample, and finally the model is trained through the training module according to the enhanced sample. Thereby, almost no extra overhead is introduced in constructing the enhanced sample, while the robustness of the model is enhanced.

In an embodiment of the present application, the second generation module 530 is specifically configured to randomly select a first number of entities from among the plurality of entities and mask the first number of entities, or mask words within the first number of entities.

In an embodiment of the present application, the second generation module 530 may be specifically configured to select non-duplicated entities from among the plurality of entities, where the non-duplicated entities are different from the first number of entities, randomly select a second number of entities from among the non-duplicated entities, and shuffle the words within the second number of entities.

In one embodiment of the present application, the second number may be smaller than the first number.

In one embodiment of the present application, the model may be a pre-trained model.

It should be noted that the foregoing explanation on the embodiment of the model training method is also applicable to the model training apparatus of this embodiment, and details are not repeated here.

The model training apparatus provided in the embodiments of the present application first obtains a sample text through the obtaining module, performs entity recognition on the sample text through the first generation module to generate a plurality of entities, then masks a first number of the entities and shuffles a second number of the entities through the second generation module to generate an enhanced sample, and finally trains the model according to the enhanced sample through the training module. Thereby, almost no extra overhead is introduced in constructing the enhanced sample, while the robustness of the model is enhanced.

According to embodiments of the present application, an electronic device, a readable storage medium, and a computer program product are also provided.

FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants (PDAs), cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not intended to limit the implementations of the present application described and/or claimed herein.

As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or a mouse; an output unit 607, such as various types of displays and speakers; a storage unit 608, such as a magnetic disk or an optical disk; and a communication unit 609, such as a network card, a modem, or a wireless communication transceiver. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.

The computing unit 601 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 601 performs the methods and processes described above, such as the model training method. For example, in some embodiments, the model training method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the model training method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the model training method by any other suitable means (e.g., by means of firmware).

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.

The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in the cloud computing service system that remedies the defects of difficult management and weak service scalability in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.

It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is made herein.

The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
