Text generation method and device

Document No.: 810177    Publication date: 2021-03-26

Reading note: This technology, "Text generation method and device", was designed and created by 王璐, 焦阳, 刘杰, 杨羿, 李�一, 朱延峰, 陈晓冬, and 刘林 on 2019-09-26. Its main content is as follows: The application discloses a text generation method and device, an electronic device, and a non-transitory computer-readable storage medium storing computer instructions, and relates to the field of text processing. The specific implementation scheme is: acquiring a plurality of text generation requirement information; encoding the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results, and obtaining a context vector based on the plurality of encoding results; and performing decoding processing based on control signals related to the plurality of text generation requirement information, a topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

1. A text generation method, comprising:

acquiring a plurality of text generation requirement information;

encoding the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results, and obtaining a context vector based on the plurality of encoding results;

and performing decoding processing based on control signals related to the plurality of text generation requirement information, a topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

2. The method according to claim 1, wherein different ones of the plurality of text generation requirement information correspond to different attributes;

before the encoding of the plurality of text generation requirement information using the plurality of encoding modes, the method further comprises:

and selecting encoding modes corresponding to different text generation requirement information based on the different attributes of the different text generation requirement information.

3. The method of claim 1, wherein deriving a context vector based on the plurality of encoding results comprises:

and processing the plurality of encoding results based on an Attention mechanism to obtain a context vector.

4. The method according to claim 1, wherein performing the decoding processing based on the control signals related to the plurality of text generation requirement information, the topic-relevance control manner related to the plurality of text generation requirement information, and the context vector to obtain the target text comprises:

performing decoding processing on the context vector based on the control signals related to the plurality of text generation requirement information, and performing topic control on the decoding processing using the topic-relevance control manner related to the plurality of text generation requirement information, to obtain at least one piece of text information after the decoding processing;

and determining one text as the target text by calculating the overall probability of each text in the at least one text.

5. A text generation apparatus, comprising:

an encoding unit, used for acquiring a plurality of text generation requirement information; encoding the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results; and obtaining a context vector based on the plurality of encoding results;

and a decoding unit, used for performing decoding processing based on the control signals related to the plurality of text generation requirement information, the topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

6. The apparatus according to claim 5, wherein different ones of the plurality of text generation requirement information correspond to different attributes;

and the encoding unit is used for selecting encoding modes corresponding to different text generation requirement information based on the different attributes of the different text generation requirement information.

7. The apparatus of claim 5, wherein the encoding unit is configured to process the plurality of encoding results based on an Attention mechanism to obtain a context vector.

8. The apparatus according to claim 5, wherein the decoding unit is configured to perform decoding processing on the context vector based on the control signals related to the plurality of text generation requirement information, and perform topic control on the decoding processing using the topic-relevance control manner related to the plurality of text generation requirement information, to obtain at least one piece of text information after the decoding processing; and to determine one text as the target text by calculating the overall probability of each text in the at least one text.

9. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.

10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-4.

Technical Field

The application relates to the field of information processing, in particular to the field of text processing, and provides a text generation method and device, an electronic device, and a non-transitory computer-readable storage medium storing computer instructions.

Background

Text Generation (Text Generation) refers to techniques that use computers to automatically generate natural-language text. In recent years, artificial intelligence technology has developed rapidly and made breakthrough progress in the field of natural language processing. Intelligent text generation can quickly and effectively meet market demand by generating content text in batches, at low cost, and automatically. One text generation method is based on neural networks. Although neural-network-based text generation can produce text at low cost and generalizes to some degree across domains, the machine-generated text content is likely to mismatch the preset semantics, and it is difficult to guarantee semantic controllability of the generated text using an Encoder-Decoder (Encoder-Decoder) structure.

Disclosure of Invention

The application provides a text generation method and device, an electronic device and a non-transitory computer readable storage medium storing computer instructions.

The embodiment of the application provides a text generation method, which comprises the following steps:

acquiring a plurality of text generation requirement information;

encoding the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results, and obtaining a context vector based on the plurality of encoding results;

and performing decoding processing based on control signals related to the plurality of text generation requirement information, a topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

Optionally, different text generation requirement information in the plurality of text generation requirement information corresponds to different attributes;

before the encoding of the plurality of text generation requirement information using the plurality of encoding modes, the method further includes:

and selecting encoding modes corresponding to different text generation requirement information based on the different attributes of the different text generation requirement information.

Optionally, the obtaining a context vector based on the plurality of encoding results includes:

and processing the plurality of encoding results based on an Attention mechanism to obtain a context vector.

Optionally, performing the decoding processing based on the control signals related to the plurality of text generation requirement information, the topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain the target text, includes:

performing decoding processing on the context vector based on the control signals related to the plurality of text generation requirement information, and performing topic control on the decoding processing using the topic-relevance control manner related to the plurality of text generation requirement information, to obtain at least one piece of text information after the decoding processing;

and determining one text as the target text by calculating the overall probability of each text in the at least one text.

An embodiment of the present application provides a text generation apparatus, including:

an encoding unit, used for acquiring a plurality of text generation requirement information; encoding the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results; and obtaining a context vector based on the plurality of encoding results;

and a decoding unit, used for performing decoding processing based on the control signals related to the plurality of text generation requirement information, the topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

Optionally, different text generation requirement information in the plurality of text generation requirement information corresponds to different attributes;

and the encoding unit is used for selecting encoding modes corresponding to different text generation requirement information based on the different attributes of the different text generation requirement information.

Optionally, the encoding unit is configured to process the plurality of encoding results based on an Attention mechanism to obtain a context vector.

Optionally, the decoding unit is configured to perform decoding processing on the context vector based on the control signals related to the plurality of text generation requirement information, and perform topic control on the decoding processing using the topic-relevance control manner related to the plurality of text generation requirement information, to obtain at least one piece of text information after the decoding processing; and to determine one text as the target text by calculating the overall probability of each text in the at least one text.

An embodiment of the present application further provides an electronic device, including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding claims.

The present application also provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the foregoing.

One embodiment in the above application has the following advantages or benefits: with this scheme, the requirement information can be encoded using multiple encoding modes, a unified context vector is obtained after encoding, and decoding is performed according to the context vector, the control signals, and the preset topic-relevance control to obtain the target text. Therefore, the semantic controllability of the generated text can be improved, so that the controllable factors can be adjusted according to business requirements, and the generated text content matches the preset semantics and is consistent with the business target, i.e., the requirements of the user.

Other effects of the above-described alternative will be described below with reference to specific embodiments.

Drawings

The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:

FIG. 1 is a first schematic diagram of a text generation method flow of the present application;

FIG. 2 is a first structural diagram of a text generating device of the present application;

FIG. 3 is a structural diagram of a text generating apparatus of the present application;

FIG. 4 is a block diagram illustrating the structure of a text generator according to the present application;

FIG. 5 is a block diagram of an electronic device for implementing a text generation method according to an embodiment of the present application.

Detailed Description

The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

The present application provides a text generation method, as shown in fig. 1, including:

s101: acquiring a plurality of text generation requirement information;

s102: encoding the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results, and obtaining a context vector based on the plurality of encoding results;

s103: performing decoding processing based on control signals related to the plurality of text generation requirement information, a topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

The embodiment can be applied to any device capable of generating texts, such as a server and a terminal device.

In the scheme provided by this embodiment, an Encoder (Encoder) is used to model the input text, producing a fixed-dimension context vector and a hidden state vector for each input word; with the context vector as input, a Decoder (Decoder) then decodes, one step at a time, the hidden state vectors of the input words as processed by the Attention mechanism, to generate the output text. The architecture of Data-to-sequence (Data2Seq) is shown in fig. 2. To ensure that the generated text matches the preset semantics and to make text generation controllable, the method mainly involves the following two aspects:

data (Data) side: the primary point of achieving correct matching of the generated text semantics is to ensure correct understanding of the input text semantics. Typically, the input data is text of a variety of different attribute traits, such as advertiser bid, advertiser landing page text, etc., in the intelligent generation of an advertising creative. The basic single Encoder structure simply splices input data with different attribute characteristics to form a sentence, and semantic deviation is easily introduced at boundary points among different attributes. Therefore, the structure of the multiple Encoders is designed, the modeling of texts with different attributes is separated, and the flexibility of network structure selection is ensured by the structure of the multiple Encoders. According to the text characteristics, a suitable Encoder mode can be selected, such as an Embedding (Embedding) -Pool (Pool) mode, a CNN (convolutional neural network) structural mode, an RNN (recurrent neural network) structural mode and the like. For example, the text with different attributes in FIG. 2 can be x-1, x-2, and x-3 in the figure; and respectively adopting different encoders (Encoders) to carry out encoding processing, and finally obtaining the context vector.

Control (Control) end: the key to ensuring controllability of the generated text is fusing the business target into the model structure, so that the model can perceive the control factors the business requires of the generated text. The basic Encoder-Decoder structure trains the model with the sequence loss as its objective; this objective depends heavily on the training data to learn the business characteristics reflected in that data, and easily deviates from the real business target. Therefore, various control means are designed into the network structure to ensure consistency between the business target and the model target, while allowing the various semantic controls required by the business to take effect directly in the model.

In this embodiment, different text generation requirement information in the plurality of text generation requirement information corresponds to different attributes;

before the encoding of the plurality of text generation requirement information using the plurality of encoding modes, the method further includes:

and selecting encoding modes corresponding to different text generation requirement information based on the different attributes of the different text generation requirement information.

Said deriving a context vector based on the plurality of encoding results comprises:

and processing the plurality of encoding results based on an Attention mechanism to obtain a context vector.

In particular, the multi-Encoder structure avoids the bias in understanding the input text's semantics that arises when texts with different attribute traits are mixed together and affect each other. Each Encoder reads the sentence corresponding to one attribute trait as input and understands it semantically through an Embedding-Pool mode, a CNN structure, an RNN structure, or the like; finally, a unified context vector c is obtained by combining the semantic-understanding vectors of the multiple Encoders through the Attention mechanism.

A suitable Encoder mode can be selected according to the text characteristics of the plurality of text generation requirement information: for example, short sentences can use the Embedding mode; when temporal order needs to be considered, the RNN mode can be used; and the CNN encoding mode can be used when efficiency matters.

In one example, the processing may employ equations of the following form:

h_eij = Encoder_i(x_i1 x_i2 … x_ij)

α_ij = softmax_j(a(h_eij, h_k))

c = Σ_ij α_ij · h_eij

wherein Encoder_i denotes the i-th Encoder; x_i1 x_i2 … x_ij denotes the word sequence input to Encoder_i, x_i1 being the first word of that input; h_eij denotes the hidden-layer state vector obtained by passing word x_ij through Encoder_i; a denotes the attention calculation function, which may take various forms, such as a(x, y) = xᵀy; h_k denotes the query vector used in the attention calculation at the k-th time step, which is a learned parameter; and α_ij denotes the weight of the j-th hidden-layer vector of the i-th Encoder.
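The Attention combination can be sketched as follows, using the dot-product form a(x, y) = xᵀy mentioned above. The fixed query vector and toy dimensions are illustrative assumptions; in the actual model the query h_k is a learned parameter.

```python
import math

def dot(x, y):
    """Dot-product attention score a(x, y) = x^T y."""
    return sum(a * b for a, b in zip(x, y))

def attention_context(hidden_vectors, query):
    """Combine hidden vectors h_eij from all Encoders into one context vector c."""
    scores = [dot(h, query) for h in hidden_vectors]   # a(h_eij, h_k)
    m = max(scores)                                    # shift for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]                # alpha_ij (softmax)
    dim = len(hidden_vectors[0])
    # c = sum_ij alpha_ij * h_eij
    return [sum(w * h[d] for w, h in zip(weights, hidden_vectors))
            for d in range(dim)]
```

Hidden vectors that score higher against the query contribute more to c, which is how the mechanism weights the different Encoders' outputs.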

In this embodiment, the decoding, based on the control signal related to the multiple text generation requirement information, the topic relevance control mode related to the multiple text generation requirement information, and the context vector, to obtain the target text may include:

decoding the context vector based on the control signals related to the plurality of text generation demand information, and performing theme control on the decoding processing by adopting a theme correlation control mode related to the plurality of text generation demand information to obtain at least one text message after the decoding processing; and determining one text as the target text by calculating the integral probability of each text in the at least one text.

That is, the Control (Control) end in the scheme provided by this embodiment mainly includes three parts: soft signal control, topic-relevance control, and fluency control.

Specifically, decoding processing may be performed on the context vector based on the control signals related to the plurality of text generation requirement information, and topic control may be applied to the decoding processing using the topic-relevance control manner related to the plurality of text generation requirement information, to obtain at least one piece of text information after the decoding processing. In other words, decoding proceeds under soft signal control, i.e., according to the control signals related to the plurality of text generation requirement information, and while decoding is performed, topic-relevance control is applied to the text generated by the decoding;

then, through fluency control, the overall probability of each text in the at least one text is calculated, and the target text is determined from the at least one text based on the overall probability of each text.

Specifically, the method comprises the following steps:

soft signal control: the specified control signal takes effect at the input of the Decoder's Long Short-Term Memory (LSTM) network and at the input of the softmax layer that estimates the next word, so that the model perceives the control signal, which influences the finally generated text and makes it match the preset soft signal;

topic-relevance control: realized by introducing a topic classification discriminator D; through back-propagation of the classification cross-entropy loss, the discriminator ultimately influences the text generation model and ensures that the finally generated text matches the preset topic;

fluency control: through an optimized beam search mechanism, the generated text with better fluency is preferred.

See, for example, fig. 2: the context vector C may be input to the generator together with the extracted control signal, and the discriminator may use the topic extracted by the encoder to control the plurality of texts produced by the generator. The essence of the softmax function is to compress (map) an arbitrary K-dimensional real vector into another K-dimensional real vector in which each element takes a value in (0, 1).
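The softmax mapping described above can be written directly; the shift by the maximum is a standard numerical-stability trick and does not change the result.

```python
import math

def softmax(z):
    """Map an arbitrary K-dimensional real vector to a K-dimensional vector
    whose entries lie in (0, 1) and sum to 1."""
    m = max(z)                          # shift for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]
```

In the Decoder, such a vector is interpreted as a probability distribution over the vocabulary, which determines the next generated word.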

Specifically, the control signal may be a signal used in soft signal control, implemented as follows: the control signal takes effect simultaneously at the data end and at the softmax decision end, by inputting a preset control signal (control signal) into the LSTM unit of the Decoder and by influencing the softmax layer that generates the text. The control signal may be set by the user; that is, whatever attribute the user needs to control, the text of that attribute is input as the control signal. The control signal "taking effect" means that it is enabled to function so as to achieve the desired control effect. The data end is an abstraction referring to the input end of the data; the decision end refers to the softmax unit, because softmax ultimately produces the probability of generating each word, which determines which word comes next in the generated sentence.

Topic-relevance control is achieved by introducing a topic classification discriminator D, i.e., by calculating the distance between the content-encoder features of a topic and the features of the sentence generated by the decoder, so as to control the plurality of generated texts to conform to the topic.

h_k = f(h_{k-1}, y_{k-1}, x_control_signal)

o_k = softmax(h_k, c_k, x_control_signal);

wherein x_control_signal refers to the encoder output vector of the control signal (control_signal); h_k represents the hidden vector of the decoder at the k-th time step; c_k represents the context vector at the k-th time step; and o_k represents the output probability of the softmax at the k-th time step.
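The two equations above can be sketched as a single toy decoding step in which the control signal enters both the recurrent update f and the softmax decision. The tanh update and the linear output projection W are illustrative stand-ins for the patent's LSTM and softmax layer, and all dimensions are toy values.

```python
import math

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def decoder_step(h_prev, y_prev, context, x_control, W):
    """One decoding time step with the control signal applied at both
    the recurrent input and the softmax decision."""
    # h_k = f(h_{k-1}, y_{k-1}, x_control_signal): toy tanh recurrence
    h = [math.tanh(a + b + c) for a, b, c in zip(h_prev, y_prev, x_control)]
    # o_k = softmax(h_k, c_k, x_control_signal): the hidden state, context,
    # and control signal jointly feed the word-probability decision
    feats = h + context + x_control
    logits = [sum(w * f for w, f in zip(row, feats)) for row in W]
    return h, softmax(logits)
```

Because x_control enters both places, changing it shifts both the hidden-state trajectory and the word probabilities, which is what lets the preset signal steer the generated text.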

Unlike soft signal control, topic-relevance control limits the finally generated text to be consistent with the preset topic content input to the content encoder, by introducing a new topic classification discriminator D.

L = loss_seq + loss_D

loss_D = sigmoid(y_content_encoder, y_1 y_2 … y_t)

wherein y denotes an output vector; y_content_encoder refers to the output of the content encoder (content_encoder); y_1 y_2 … y_t refers to the output vectors of the sentence being generated, where 1, 2, …, t index each word of the sentence; loss_D represents the loss function of the discriminator D; and loss_seq represents the loss function of the decoder.

Fluency control means determining one text as the target text by calculating the overall probability of each text in the at least one text.

Specifically, by optimizing the beam search mechanism, multiple beams compete simultaneously, and the generated text with better fluency is preferred. Through more accurate understanding of the input Data and the various control mechanisms, the proposed Data2Seq finally ensures semantic controllability in the text generation process. The Decoder generates a plurality of sentences in parallel and selects an optimal one as the target text by calculating the overall probability of each sentence.
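The final selection step can be sketched as scoring each candidate sentence by its overall probability and keeping the best one. Length-normalizing the log probability is an assumed detail (it keeps longer beams from being unfairly penalized); the candidates and per-word probabilities below are illustrative.

```python
import math

def overall_log_prob(word_probs):
    """Length-normalized log probability of a whole candidate sentence."""
    return sum(math.log(p) for p in word_probs) / len(word_probs)

def select_target_text(candidates):
    """Pick the target text among beam-search candidates.

    candidates: list of (sentence, [per-word probabilities]) pairs,
    e.g. produced by parallel beams during decoding.
    """
    return max(candidates, key=lambda c: overall_log_prob(c[1]))[0]
```

A sentence whose words were all assigned high probability by the Decoder scores best, so the most fluent candidate becomes the target text.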

Fig. 3 shows an example. Taking the intelligent generation of an advertising creative as an example, the intelligent generation of a text advertisement generally has input data with various attribute traits, such as the advertiser's bid words, the advertiser's landing-page text, the advertiser's industry information, and the like. The preset business objective is to generate one or more fluent advertising sentences related to the advertiser's bid words and landing page. On the Data side, different Encoders are used to extract text semantics from the advertiser's bid words and the advertiser's landing-page text respectively, and the information extracted by the Encoders is finally combined through the Attention mechanism to obtain the context vector c. On the Control side, the advertiser's industry information is input, as a control signal (Control signal), into the Decoder and the softmax layer; the simultaneous influence at the input end and the decision end ensures that the generated advertisement text is consistent with the advertiser's industry. Meanwhile, the relevance between the generated text and the advertiser's bid words is ensured by feeding the advertiser's bid words and the text generated by the Decoder into the classification discriminator D; and the Decoder generates a plurality of advertisement texts using the beam search mechanism.

As can be seen from the Decoder in fig. 3, the control signal and the context vector produced by the content encoder and the topic encoder jointly drive the decoding processing, and topic control is adopted to ensure the relevance between the generated texts and the topic; for the sentence vector generated by the Decoder, the vector output by the unit at the bottom-right corner of fig. 3 is used as the representation of the whole sentence.

As shown in FIG. 3, the classification discriminator D performs classification supervised learning: if a generated sentence correlates poorly with the attribute input to the content encoder (content_encoder), the loss (loss) loss_D will be larger, which during training drives the model's parameter learning toward better topic relevance.

The beam search mechanism works at the inference stage, selecting an optimal text from the plurality of texts.

In addition, in this embodiment, the hidden (hidden) layer is the hidden vector of the RNN, and the Decoder adopts an RNN structure. In fig. 3, the three vectors Control (Control), context (context), and hidden (hidden) are simply concatenated (concat) together. The three boxes of the Decoder (Decoder) in fig. 3 represent different time steps, and each time step generates one word of the final sentence; the time step is a concept from the RNN structure.

Therefore, with this scheme, the requirement information can be encoded using multiple encoding modes, a unified context vector is obtained after encoding, and decoding is performed according to the context vector, the control signals, and the preset topic-relevance control to obtain the target text. The semantic controllability of the generated text can thus be improved, so that the controllable factors can be adjusted according to business requirements, and the generated text content matches the preset semantics and is consistent with the business target, i.e., the requirements of the user.

An embodiment of the present application provides a text generating apparatus, as shown in fig. 4, including:

an encoding unit 41, configured to acquire a plurality of text generation requirement information; encode the plurality of text generation requirement information using a plurality of encoding modes to obtain a plurality of encoding results; and obtain a context vector based on the plurality of encoding results;

a decoding unit 42, configured to perform decoding processing based on the control signals related to the plurality of text generation requirement information, the topic-relevance control manner related to the plurality of text generation requirement information, and the context vector, to obtain a target text.

Different text generation requirement information in the plurality of text generation requirement information corresponds to different attributes;

the encoding unit 41 is configured to select, based on different attributes of different text generation requirement information, encoding modes corresponding to the different text generation requirement information.

The encoding unit 41 is configured to process the plurality of encoding results based on an Attention mechanism to obtain a context vector.

The decoding unit 42 is configured to perform decoding processing on the context vector based on the control signals related to the plurality of text generation requirement information, and perform topic control on the decoding processing using the topic-relevance control manner related to the plurality of text generation requirement information, to obtain at least one piece of text information after the decoding processing; and to determine one text as the target text by calculating the overall probability of each text in the at least one text.

Therefore, with this apparatus, the requirement information can be encoded using multiple encoding modes, a unified context vector is obtained after encoding, and decoding is performed according to the context vector, the control signals, and the preset topic-relevance control to obtain the target text. The semantic controllability of the generated text can thus be improved, so that the controllable factors can be adjusted according to business requirements, and the generated text content matches the preset semantics and is consistent with the business target, i.e., the requirements of the user.

According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.

Fig. 5 is a block diagram of an electronic device for the text generation method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the present application described and/or claimed herein.

As shown in fig. 5, the electronic device includes: one or more processors 801, a memory 802, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 801 is taken as an example.

The memory 802 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the text generation methods provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the text generation method provided herein.

The memory 802, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the text generation method in the embodiments of the present application. The processor 801 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 802, that is, implements the text generation method in the above-described method embodiments.

The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to the use of the text generation electronic device, and the like. Further, the memory 802 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 802 optionally includes memory located remotely from the processor 801, and such remote memory may be connected to the text generation electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The electronic device of the text generation method may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, as exemplified by the bus connection in fig. 5.

The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the text-generating electronic device, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer, one or more mouse buttons, a track ball, a joystick, or other input device. The output devices 804 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The Display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) Display, and a plasma Display. In some implementations, the display device can be a touch screen.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

According to the technical scheme of the embodiment of the application, the multiple pieces of text generation requirement information are encoded in multiple encoding manners, and decoding is performed under the related control signals and the topic relevance control manner, so that the semantics of the generated text are controllable and the generated content is consistent with the service target.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.

The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
