Text content conversion method and device

Document No.: 1406157 | Publication date: 2020-03-06

Reading note: This technique, "Text content conversion method and device" (文本内容的转换方法及装置), was designed and created by Yang Zhongwei on 2018-08-08. The disclosure relates to a text content conversion method and device. The method includes: identifying the style of first text information using a classifier; determining a presentation scene of the first text information; and, when the style of the first text information does not match the presentation scene, converting the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene. With the disclosed text content conversion method and device, text content can be converted automatically and in large batches; compared with manual rewriting, this greatly improves conversion efficiency, reduces the rewriting error rate, and unifies the style.

1. A method for converting text content, comprising:

identifying the style of first text information using a classifier;

determining a presentation scene of the first text information;

and when the style of the first text information does not match the presentation scene, converting the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene.

2. The method of claim 1, wherein the classifier is generated by training a convolutional neural network model based on text samples.

3. The method of claim 1 or 2, wherein identifying the style of the first text information using the classifier comprises:

acquiring a word vector set of the first text information as input of the classifier;

and determining the style of the first text information based on a one-hot code generated by the classifier from the word vector set.

4. The method of claim 1, wherein determining the presentation scene of the first text information comprises:

determining the presentation scene according to a received request sent by a terminal,

wherein the request is a request by the terminal to acquire data carrying the first text information, and the request carries a scene identifier.

5. The method of claim 1, wherein the machine translation model corresponding to the presentation scene comprises an encoder and a decoder corresponding to the presentation scene,

and wherein converting the first text information into the second text information whose style matches the presentation scene, based on the machine translation model corresponding to the presentation scene, comprises:

encoding the first text information into a high-dimensional vector using the encoder corresponding to the presentation scene;

and decoding the high-dimensional vector into the second text information using the decoder corresponding to the presentation scene.

6. The method according to any one of claims 1 to 5, wherein the first text information is a title of a multimedia resource.

7. The method according to any one of claims 1 to 5, wherein the first text information is a title of a video.

8. The method according to any one of claims 1 to 5, wherein the presentation scene is a home page scene or a feed stream scene.

9. The method according to any one of claims 1 to 5,

wherein, when the first text information is a title, the style includes: a long title style, a short title style, and a headline style.

10. An apparatus for converting text content, comprising:

a classification module configured to identify the style of first text information using a classifier;

a presentation scene determining module configured to determine a presentation scene of the first text information;

and a conversion module configured to, when the style of the first text information does not match the presentation scene, convert the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene.

11. The apparatus of claim 10, wherein the classifier is generated by training a convolutional neural network model based on text samples.

12. The apparatus of claim 10 or 11, wherein the classification module comprises:

an obtaining unit configured to obtain a word vector set of the first text information as an input of the classifier;

and a classification unit configured to determine the style of the first text information based on a one-hot code generated by the classifier from the word vector set.

13. The apparatus of claim 10, wherein the presentation scene determining module comprises:

a determining unit configured to determine the presentation scene according to a received request sent by a terminal,

wherein the request is a request by the terminal to acquire data carrying the first text information, and the request carries a scene identifier.

14. The apparatus of claim 10, wherein the machine translation model corresponding to the presentation scene comprises an encoder and a decoder corresponding to the presentation scene,

and wherein the conversion module comprises:

an encoding unit configured to encode the first text information into a high-dimensional vector using the encoder corresponding to the presentation scene;

and a decoding unit configured to decode the high-dimensional vector into the second text information using the decoder corresponding to the presentation scene.

15. The apparatus according to any one of claims 10 to 14, wherein the first text information is a title of a multimedia resource.

16. The apparatus according to any one of claims 10-14, wherein the first text information is a title of a video.

17. The apparatus according to any one of claims 10 to 14, wherein the presentation scene is a home page scene or a feed stream scene.

18. The apparatus of any one of claims 10-14,

wherein, when the first text information is a title, the style includes: a long title style, a short title style, and a headline style.

19. An apparatus for converting text content, comprising:

a processor;

a memory for storing processor-executable instructions;

wherein the processor is configured to execute the executable instructions to perform the method of any one of claims 1 to 9.

20. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 9.

Technical Field

The present disclosure relates to the field of multimedia technologies, and in particular, to a text content conversion method and apparatus.

Background

A video title is an important component of video data; a good title can greatly improve distribution efficiency and increase a video's click-through rate and traffic. Different scenes and channels require titles of different styles. At present, titles are rewritten manually, which is inefficient and error-prone.

Disclosure of Invention

In view of this, the present disclosure provides a text content conversion method and apparatus that can convert text content automatically and in large batches; compared with manual rewriting, this greatly improves conversion efficiency, reduces the rewriting error rate, and unifies the style.

According to an aspect of the present disclosure, there is provided a method of converting text content, including:

identifying the style of first text information using a classifier;

determining a presentation scene of the first text information;

and when the style of the first text information does not match the presentation scene, converting the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene.

According to another aspect of the present disclosure, there is provided a text content conversion apparatus, including:

a classification module configured to identify the style of first text information using a classifier;

a presentation scene determining module configured to determine a presentation scene of the first text information;

and a conversion module configured to, when the style of the first text information does not match the presentation scene, convert the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene.

According to another aspect of the present disclosure, there is provided a text content conversion apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.

According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.

The style and the presentation scene of the first text information are identified; when the two do not match, a machine translation model corresponding to the presentation scene is used to automatically convert the first text information into second text information whose style matches the presentation scene. With the disclosed text content conversion method and device, text content can be converted automatically and in large batches; compared with manual rewriting, conversion efficiency is greatly improved, the rewriting error rate is reduced, and the style is unified.

Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.

Fig. 1 shows a flowchart of a method of converting text content according to an embodiment of the present disclosure.

Fig. 2 shows a schematic diagram of the generation of a classifier and classification with it, according to an embodiment of the present disclosure.

Fig. 3 shows a flowchart of the method of step S11 according to an embodiment of the present disclosure.

Fig. 4 shows a flowchart of the method of step S13 according to an embodiment of the present disclosure.

Fig. 5 illustrates a schematic diagram of a machine translation model for text content conversion according to an embodiment of the present disclosure.

Fig. 6 shows a block diagram of a text content conversion apparatus according to an embodiment of the present disclosure.

Fig. 7 shows a block diagram of a text content conversion apparatus according to an embodiment of the present disclosure.

Fig. 8 shows a block diagram of a text content conversion apparatus according to an embodiment of the present disclosure.

Detailed Description

Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.

The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.

Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.

Fig. 1 shows a flowchart of a method of converting text content according to an embodiment of the present disclosure. The method can be applied to a server or a server cluster, for example, to a multimedia resource (e.g., video, music, microblog, etc.) delivery server, or a server dedicated to text content conversion.

For example, when the method is applied to a server dedicated to text content conversion, a video delivery server may communicate with that server to convert video titles.

As shown in fig. 1, the method may include:

step S11: identifying the style of first text information using a classifier;

step S12: determining a presentation scene of the first text information;

step S13: when the style of the first text information does not match the presentation scene, converting the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene.

The style and the presentation scene of the first text information are identified; when the two do not match, a machine translation model corresponding to the presentation scene is used to automatically convert the first text information into second text information whose style matches the presentation scene. With this text content conversion method, text content can be converted automatically and in large batches; compared with manual rewriting, conversion efficiency is greatly improved, the rewriting error rate is reduced, and the style is unified.

The first text information and the second text information may be titles of multimedia resources, such as video titles, article titles, and the like. The first text information may also be other content, for example text posted by a user on a microblog, text posted in a WeChat Moments feed, a passage within an article, and so on.

The following description mainly takes the title of a multimedia resource (e.g., a video title) as an example. In one possible implementation, when the method is applied to a video delivery server, the first text information may be obtained by identifying the title field in the multimedia resource, after which a classifier identifies its style. When the method is applied to a server dedicated to text content conversion, the multimedia resource delivery server may send the title of the multimedia resource to that server.

Text information can be divided into different styles according to different classification criteria. For example, by length, text can be divided into long text and short text: content posted by users of platforms such as microblogs and WeChat Moments is mostly within 140 characters and can be classified as short text, while content published on blogs, official accounts, and similar platforms is longer and can be classified as long text. Taking titles as an example, titles can be divided into long titles and short titles, where a short title is usually within 15 characters and a long title usually runs 15 to 30 characters. By linguistic style, text content may be classified as sarcastic, non-sarcastic, humorous, non-humorous, and so on. Again taking titles as an example, the form in which a title is written may be categorized as a headline style, an exclamatory style, and the like. It should be noted that the above examples of styles are merely some examples of the present disclosure and do not limit the present disclosure in any way.

The classifier in step S11 may be generated by training a convolutional neural network (CNN) model on text samples, where the text samples may be text information of different styles collected in advance.

FIG. 2 shows a schematic diagram of generation and classification of a classifier according to an embodiment of the present disclosure. Taking a text sample as an example, titles of different styles, such as sample title 1, sample title 2, and sample title 3 … … sample title N shown in fig. 2, may be collected first. The more the number and the types of the sample titles, the more intelligent the trained classifier is, and the more accurate the identification is. And inputting the sample title into the convolutional neural network model, and training the convolutional neural network model to generate the title style classifier. After inputting a title (first text information example) to the classifier, the classifier may output the genre to which the title belongs.

The presentation scene in step S12 may refer to the scene in which the first text information is presented on a terminal, for example a home page scene or a feed stream scene. The home page scene may refer to the home page of a website or an application. The feed stream scene may refer to a scene in which content is continuously pushed to a user according to the user's subscriptions. These two scenes are merely examples of presentation scenes, and the present disclosure is not limited thereto.

The presentation scene of the first text information may be determined as follows: the presentation scene is determined according to a received request sent by a terminal, where the request is a request by the terminal to acquire data carrying the first text information, and the request carries a scene identifier.

Taking a video application as an example: a user opens the video application on a terminal, and after detecting the user's open instruction, the application usually presents its home page to the user; this presentation belongs to the home page scene. The video application sends a request to the server for the video content to be presented on the home page. The request carries a scene identifier, which may be any information that uniquely identifies the scene, such as a scene ID. Upon receiving the request, the server can determine the content to be presented (the video) and the presentation scene (the home page scene) in which it will be shown. Through this process, the presentation scene in which the first text information is to be displayed can be determined.
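A minimal sketch of this request flow, assuming a dict-like request and a hypothetical scene_id field (the disclosure states only that the request carries a scene identifier; the names below are illustrative):

```python
# Assumed mapping from scene identifiers to presentation scenes.
SCENE_BY_ID = {
    "home": "home_page",
    "feed": "feed_stream",
}

def resolve_scene(request: dict) -> str:
    # The request is the terminal's request for data carrying the first
    # text information; it carries a scene identifier.
    scene_id = request.get("scene_id")
    if scene_id not in SCENE_BY_ID:
        raise ValueError(f"unknown scene identifier: {scene_id!r}")
    return SCENE_BY_ID[scene_id]

# e.g. a video app requesting the content for its home page:
scene = resolve_scene({"scene_id": "home", "content": "video_list"})  # "home_page"
```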

It should be noted that the above process of determining the presentation scene of the first text information is only one example, and the present disclosure is not limited thereto; for example, in a scenario where the server actively pushes content to the terminal, the presentation scene of the push may be determined in advance.

In step S13, the style of the first text information not matching the presentation scene may mean that the style identified by the classifier is inconsistent with the style corresponding to the presentation scene. The text content conversion system on the server may store a correspondence between presentation scenes and styles, such as a scene-to-style lookup table; once the presentation scene is determined, the style corresponding to it can be looked up.

It should be noted that one style may correspond to multiple presentation scenes: for example, both the home page scene and the feed stream scene may correspond to the headline style, or both may correspond to the short title style. Likewise, one presentation scene may correspond to multiple styles; the disclosure is not limited in this respect. Where a presentation scene corresponds to multiple styles, if the style of the first text information is inconsistent with every style corresponding to that scene, it may be determined that the first text information does not match the presentation scene.

Taking the title as an example: suppose the style of a video's title identified by the classifier is a long title, and the presentation scene in which the video is to be shown is the home page scene. Because of screen limitations, video information displayed on the home page imposes a length requirement on the title of no more than 10 characters, so the title style corresponding to the home page scene is a short title. In this case, the style of the first text information does not match the presentation scene.
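The scene-to-style lookup table and the mismatch test might look like the following sketch; the table contents and names are illustrative assumptions (a scene may accept several styles, per the note above):

```python
# Assumed scene-to-style correspondence table.
STYLES_BY_SCENE = {
    "home_page": {"short_title"},        # screen-limited titles
    "feed_stream": {"headline_style"},
}

def needs_conversion(title_style: str, scene: str) -> bool:
    # Mismatch: the identified style is not among any style the scene accepts.
    return title_style not in STYLES_BY_SCENE.get(scene, set())

assert needs_conversion("long_title", "home_page")        # convert
assert not needs_conversion("short_title", "home_page")   # leave as-is
```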

The machine translation model may be any model capable of converting text content, for example a model generated in advance by training a deep artificial neural network on text samples, such as a seq2seq model or a pure attention model.

In one possible implementation, a separate machine translation model may be trained for each presentation scene, and the machine translation model corresponding to a presentation scene is dedicated to converting input first text information into second text information whose style matches that scene.

According to this text content conversion method, text content can be converted automatically and in large batches; compared with manual rewriting, conversion efficiency is greatly improved, the rewriting error rate is reduced, and the style is unified.

By identifying the style in advance, content that does not need conversion is not converted, which further improves conversion efficiency.

When text content such as a title is rewritten manually, fatigue or carelessness may introduce errors such as wrongly written characters. Automatic rewriting by machine avoids such errors.

When text content such as a title is rewritten manually, the resulting style differs with each person's subjective understanding. The style produced by automatic machine rewriting can be kept consistent.

Fig. 3 shows a flowchart of the method of step S11 according to an embodiment of the present disclosure, and as shown in fig. 3, step S11 may include:

step S111: acquiring a word vector set of the first text information as the input of the classifier;

step S112: determining the style of the first text information based on the one-hot code generated by the classifier from the word vector set.

The word vector set of the first text information may be generated by a Word2vec model. For example, the server may input the first text information into a Word2vec model, which maps each word in the first text information to a vector and thereby outputs the word vector set of the first text information. The server takes the word vector set output by the Word2vec model as the input of the classifier.
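A sketch of this step using the gensim implementation of Word2vec; the toy corpus and the vector size are assumptions for illustration:

```python
from gensim.models import Word2Vec

# In practice this would be a large collection of tokenized titles.
corpus = [
    ["two", "girls", "practice", "anti-wolf", "technique"],
    ["short", "title", "sample"],
]
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)

def word_vector_set(tokens):
    # Map each word of the first text information to its vector; words
    # outside the vocabulary are skipped in this simplified sketch.
    return [w2v.wv[t] for t in tokens if t in w2v.wv]

vectors = word_vector_set(["two", "girls", "practice"])  # classifier input
```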

A one-hot code is an encoding of distinct states that uses as many bits as there are states, with exactly one bit set to 1 and all others set to 0; for example, the third of four states may be encoded as 0010.

The classifier can generate the one-hot code corresponding to the word vector set, and determine the style corresponding to that code, i.e., the style of the first text information, from the position of the bit whose value is 1.

It should be noted that, during training, the inputs to the classifier may likewise be the word vector sets of the text samples; through continuous learning, the classifier establishes the correspondence between different word vector sets and styles. Once trained, when classifying the first text information, the classifier performs convolution operations on the input word vector set to obtain the corresponding one-hot code, and thereby determines the style of the first text information from the position of the bit whose value is 1.
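Putting steps S111 and S112 together, the sketch below shows how the style could be read off the classifier output: the argmax of the logits gives the single 1 bit of the one-hot code, and its position selects the style. The label order is an illustrative assumption.

```python
import torch

STYLES = ["long_title", "short_title", "headline_style"]  # assumed order

def style_from_logits(logits: torch.Tensor) -> str:
    # Turn the classifier's logits into a one-hot code: exactly one bit is 1.
    one_hot = torch.zeros_like(logits)
    one_hot[logits.argmax()] = 1.0
    # The position of the 1 bit determines the style.
    return STYLES[int(one_hot.argmax())]

print(style_from_logits(torch.tensor([0.1, 2.3, 0.4])))  # -> short_title
```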

Through the above process, the style of the first text information can be identified automatically, which provides the basis for the subsequent conversion and improves conversion efficiency. When the style of the first text information does not match the presentation scene, the first text information is converted into second text information whose style matches the scene; when the style already matches the presentation scene, no conversion is needed.

Fig. 4 shows a flowchart of the method of step S13 according to an embodiment of the present disclosure. In one possible implementation, the machine translation model corresponding to the presentation scene may include an encoder and a decoder corresponding to the presentation scene.

In this embodiment, as shown in Fig. 4, converting the first text information into second text information whose style matches the presentation scene based on the machine translation model corresponding to the presentation scene in step S13 may include:

step S131: encoding the first text information into a high-dimensional vector using the encoder corresponding to the presentation scene.

After the encoder corresponding to the presentation scene encodes the first text information, one or more high-dimensional vectors may be generated; each such vector may be associated with one or more words in the first text information and may represent the relationships among the words.

After generating the high-dimensional vectors, the encoder can output them to the decoder corresponding to the presentation scene.

Step S132, decoding the multi-dimensional vector into second text information by adopting a decoder corresponding to the display scene.

The decoder corresponding to the presentation scene decodes the high-dimensional vector according to a pre-trained decoding algorithm, generating second text information whose style corresponds to the presentation scene.

The specific encoding schemes of the encoders corresponding to different presentation scenes may differ and may be optimized by training on text samples; the same holds for the decoding schemes of the corresponding decoders. The present disclosure does not limit the specific manner of encoding and decoding.
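For concreteness, here is a minimal GRU-based encoder/decoder sketch in PyTorch for one presentation scene. The sizes, the special tokens, and the greedy decoding loop are illustrative assumptions; a production seq2seq model would add attention, beam search, and trained weights, none of which the disclosure pins down.

```python
import torch
import torch.nn as nn

VOCAB, HID, EMB, BOS, EOS = 20000, 256, 128, 1, 2  # assumed sizes and tokens

class SceneEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src):                      # (batch, src_len) token ids
        # The final hidden state serves as the high-dimensional vector
        # encoding the first text information.
        _, h = self.rnn(self.embed(src))
        return h                                  # (1, batch, HID)

class SceneDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, h, max_len=30):
        # Greedy decoding: start from BOS and feed back the argmax token.
        token = torch.full((h.size(1), 1), BOS, dtype=torch.long)
        result = []
        for _ in range(max_len):
            o, h = self.rnn(self.embed(token), h)
            token = self.out(o[:, -1]).argmax(dim=1, keepdim=True)
            result.append(token)
            if (token == EOS).all():
                break
        return torch.cat(result, dim=1)           # (batch, out_len) token ids

encoder, decoder = SceneEncoder(), SceneDecoder()
src = torch.randint(3, VOCAB, (1, 8))             # placeholder tokenized title
second_text_ids = decoder(encoder(src))           # style-converted token ids
```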

Fig. 5 illustrates a schematic diagram of a machine translation model for text content conversion according to an embodiment of the present disclosure. Taking title conversion as an example, as shown in Fig. 5, the input original title (i.e., the first text information) is "Two girls practice anti-wolf technique", and the presentation scene is a feed stream scene. A feed stream scene usually uses a three-segment headline-style title, while the input original title is a title for an ordinary scene, so it does not match the feed stream scene. The encoder corresponding to the feed stream scene therefore encodes "Two girls practice anti-wolf technique" into several high-dimensional vectors and outputs them to the decoder; the decoder decodes these vectors and generates the output target title (the second text information): "Two girls practice anti-wolf technique, it's too scary!". This completes the automatic rewriting of the title; compared with manual rewriting, conversion efficiency is greatly improved and errors are far less likely.

Application scenario example

Take converting an ordinary title into a headline-style title as an example:

1. Preparing training corpora: first, prepare a batch of parallel corpora pairing ordinary titles with headline-style titles, and prepare a classification corpus containing titles of different styles.

2. Model training: this comprises classifier training and machine translation model training.

The convolutional neural network model is trained on the classification corpus to obtain the classifier.

A seq2seq deep artificial neural network model is trained on the parallel corpora to obtain the machine translation model.

3. Using the trained classifier and seq2seq model, the styles of input original titles can be identified and converted in batches, turning ordinary titles into headline-style titles.

Specifically, the original title (an ordinary title) of a multimedia resource is obtained, and the presentation scene into which the resource is to be delivered is determined. The original title is first turned into a word vector set by the Word2vec model and used as the input of the classifier; the classifier identifies the style of the input original title as an ordinary title; whether the ordinary title matches the presentation scene is then judged; and if not, the original title is converted, based on the machine translation model corresponding to the presentation scene, into a title whose style matches the scene. The specific conversion encodes the original title into a high-dimensional vector with the encoder and then decodes that vector into text with the decoder, yielding a title whose style matches the presentation scene. A sketch of this end-to-end pipeline follows.
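The sketch below wires the hypothetical helpers from the earlier sketches into a batch pipeline; none of these names are APIs defined by the disclosure:

```python
def convert_titles(titles, scene, classify_style, needs_conversion, translators):
    # titles: original titles; scene: target presentation scene;
    # classify_style / needs_conversion / translators: the classifier,
    # mismatch test, and per-scene machine translation models sketched above.
    converted = []
    for title in titles:
        style = classify_style(title)            # 1. identify the style
        if needs_conversion(style, scene):       # 2. compare with the scene
            title = translators[scene](title)    # 3. scene-specific rewrite
        converted.append(title)
    return converted

# e.g. rewriting ordinary titles into headline-style titles for a feed stream:
# new_titles = convert_titles(original_titles, "feed_stream",
#                             classify_style, needs_conversion, translators)
```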

Examples of automatic scene-based conversion in a business setting:

example 1, example of delivering video to a home scene. The video information displayed on the home page, because of screen limitations, has a word count requirement for the title that cannot exceed 10 words. At this time, the delivery system transmits the scene identifier (of the display scene) back to the title style conversion system (an example of a text content conversion system and a server specially used for text content conversion), and the title style conversion system selects a corresponding machine translation model according to the scene identifier, automatically converts the title of the video to be delivered into a short title suitable for the first page scene, and transmits the short title to the delivery system.

Example 2: delivering video to a feed stream scene. To catch the eye in a feed stream scene, a three-segment headline-style title is generally used. Here the delivery system passes the scene identifier (of the presentation scene) back to the title style conversion system; the conversion system selects the corresponding machine translation model according to the scene identifier, automatically converts the title of the video to be delivered into a headline-style title, and sends it to the delivery system.

Fig. 6 shows a block diagram of a text content conversion apparatus according to an embodiment of the present disclosure. The apparatus may be applied to a server or a server cluster, for example a video delivery server, or a server dedicated to text content conversion.

As shown in Fig. 6, the text content conversion apparatus may include:

a classification module 61, configured to identify the style of first text information using a classifier;

a presentation scene determining module 62, configured to determine a presentation scene of the first text information;

and a conversion module 63, configured to, when the style of the first text information does not match the presentation scene, convert the first text information into second text information whose style matches the presentation scene, based on a machine translation model corresponding to the presentation scene.

The style and the presentation scene of the first text information are identified; when the two do not match, a machine translation model corresponding to the presentation scene is used to automatically convert the first text information into second text information whose style matches the presentation scene. With the disclosed text content conversion apparatus, text content can be converted automatically and in large batches; compared with manual rewriting, conversion efficiency is greatly improved, the rewriting error rate is reduced, and the style is unified.

Fig. 7 shows a block diagram of a text content conversion apparatus according to an embodiment of the present disclosure. As shown in Fig. 7, in one possible implementation, the classification module 61 may include:

an obtaining unit 611, configured to obtain a word vector set of the first text information as an input of the classifier;

and a classification unit 612, configured to determine the style of the first text information based on the one-hot code generated by the classifier from the word vector set.

In one possible implementation, the presentation scene determining module 62 may include:

a determining unit 621, configured to determine the presentation scene according to a received request sent by a terminal,

wherein the request is a request by the terminal to acquire data carrying the first text information, and the request carries a scene identifier.

In one possible implementation, the machine translation model corresponding to the presentation scene includes an encoder and a decoder corresponding to the presentation scene,

the conversion module 63 may include:

an encoding unit 631, configured to encode the first text information into a high-dimensional vector using the encoder corresponding to the presentation scene;

and a decoding unit 632, configured to decode the high-dimensional vector into the second text information using the decoder corresponding to the presentation scene.

In one possible implementation, the classifier is generated by training a convolutional neural network model based on text samples.

In a possible implementation manner, the first text information is a title of a multimedia resource.

In one possible implementation, the first text information is a title of a video.

In a possible implementation manner, the presentation scene is a home page scene or a feed stream scene.

In one possible implementation, when the first text information is a title, the style includes: a long title style, a short title style, and a headline style.

Fig. 8 is a block diagram illustrating an apparatus 1900 for conversion of textual content according to an example embodiment. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 8, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.

The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.

In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.

The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.

The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.

The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can execute the computer-readable program instructions by utilizing state information of the instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.

Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
