Cover image generation method, device, equipment, storage medium and program product

Document No.: 1846116   Publication date: 2021-11-16

Reading note: This technique, "Cover image generation method, device, equipment, storage medium and program product", was designed and created by 徐传任 on 2021-08-20. The present disclosure provides a cover image generation method, apparatus, device, storage medium, and program product, which relate to the technical field of image processing, and in particular, to the technical field of video. The specific implementation scheme is as follows: acquiring a plurality of key video clips from a video; performing a cover image generation operation on each of the plurality of key video clips to obtain a plurality of cover images; scoring the plurality of cover images, and determining the highest-scoring target cover image among the plurality of cover images; determining the target cover image as the cover image of the video. The method and the device can improve the propagation effect of the cover image for the video.

1. A cover image generation method, comprising:

acquiring a plurality of key video clips from a video;

performing a cover image generation operation on each of the plurality of key video clips to obtain a plurality of cover images;

scoring the plurality of cover images, and determining the highest-scoring target cover image among the plurality of cover images;

determining the target cover image as the cover image of the video.

2. The method of claim 1, wherein the scoring the plurality of cover images and determining the highest-scoring target cover image among the plurality of cover images comprises:

inputting the plurality of cover images into a neural network model for scoring to obtain a scoring screening result, wherein the scoring screening result is used for indicating the highest-scoring target cover image among the plurality of cover images, and the neural network model is a pre-trained model for scoring cover images.

3. The method of claim 2, wherein the inputting the plurality of cover images into a neural network model for scoring to obtain a scoring screening result comprises:

inputting the plurality of cover images into the neural network model, and scoring the plurality of cover images in a propagation effect dimension and/or a content quality dimension through the neural network model to obtain the scoring screening result.

4. The method according to any one of claims 1 to 3, wherein the acquiring a plurality of key video clips from a video comprises:

selecting, from the video, a plurality of video clips satisfying preset conditions, wherein the preset conditions comprise at least one of the following: comment data meeting a preset comment condition, and bullet-screen data meeting a preset bullet-screen condition;

identifying the video clips to obtain identification results of the video clips;

performing an editing operation on some or all of the plurality of video clips according to the identification results of the plurality of video clips, to obtain the plurality of key video clips.

5. The method of claim 4, wherein the some or all of the video clips comprise a target video clip, and the editing operation performed on the target video clip comprises at least one of:

adding content to the target video clip, wherein the adding operation comprises adding, to the target video clip, video content in the video that is associated with the target video clip;

deleting content from the target video clip.

6. The method of claim 5, wherein the identifying the plurality of video clips to obtain identification results of the plurality of video clips comprises at least one of:

performing scene recognition on the plurality of video clips to obtain scene recognition results of the plurality of video clips;

performing content identification on the plurality of video clips to obtain content identification results of the plurality of video clips;

and performing language identification on the plurality of video clips to obtain language identification results of the plurality of video clips.

7. The method of any of claims 1-6, wherein the cover image generation operation includes at least one of:

text editing, image editing, and audio editing.

8. A cover image generation apparatus comprising:

the acquisition module is used for acquiring a plurality of key video clips from a video;

the generating module is used for respectively performing a cover image generation operation on the plurality of key video clips to obtain a plurality of cover images;

the scoring module is used for scoring the cover images and determining a highest-scoring target cover image in the cover images;

a determination module to determine the target cover image as a cover image of the video.

9. The apparatus of claim 8, wherein the scoring module is configured to input the plurality of cover images into a neural network model for scoring to obtain a scoring screening result, wherein the scoring screening result indicates the highest-scoring target cover image among the plurality of cover images, and the neural network model is a pre-trained model for scoring cover images.

10. The apparatus of claim 9, wherein the scoring module is configured to input the plurality of cover images into a neural network model, and score the plurality of cover images in a propagation effect dimension and/or a content quality dimension through the neural network model to obtain a scoring screening result.

11. The apparatus of any of claims 8 to 9, wherein the obtaining module comprises:

a selecting unit, configured to select, from the video, a plurality of video clips satisfying preset conditions, where the preset conditions include at least one of the following: comment data meeting a preset comment condition, and bullet-screen data meeting a preset bullet-screen condition;

the identification unit is used for identifying the video clips to obtain identification results of the video clips;

and the editing unit is used for executing editing operation on part or all of the video clips according to the identification results of the video clips to obtain the key video clips.

12. The apparatus of claim 11, wherein the some or all of the video clips comprise a target video clip, and the editing operation performed on the target video clip comprises at least one of:

adding content to the target video clip, wherein the adding operation comprises adding, to the target video clip, video content in the video that is associated with the target video clip;

deleting content from the target video clip.

13. The apparatus of claim 12, wherein the identification unit is configured to perform at least one of:

performing scene recognition on the plurality of video clips to obtain scene recognition results of the plurality of video clips;

performing content identification on the plurality of video clips to obtain content identification results of the plurality of video clips;

and performing language identification on the plurality of video clips to obtain language identification results of the plurality of video clips.

14. The apparatus of any of claims 8 to 13, wherein the cover image generation operation comprises at least one of:

text editing, image editing, and audio editing.

15. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.

16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.

17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Technical Field

The present disclosure relates to the field of image processing technology, and in particular, to the field of video technology.

Background

Each video is provided with a cover image. Currently, a video's cover image is mainly a photo selected by the video creator, for example, the video creator uses his or her own photograph as the cover image of the video.

Disclosure of Invention

The present disclosure provides a cover image generation method, apparatus, device, storage medium, and program product.

According to an aspect of the present disclosure, there is provided a cover image generation method including:

acquiring a plurality of key video clips from a video;

performing a cover image generation operation on each of the plurality of key video clips to obtain a plurality of cover images;

scoring the plurality of cover images, and determining the highest-scoring target cover image among the plurality of cover images;

determining the target cover image as the cover image of the video.

According to another aspect of the present disclosure, there is provided a cover image generating apparatus including:

the acquisition module is used for acquiring a plurality of key video clips from a video;

the generating module is used for respectively performing a cover image generation operation on the plurality of key video clips to obtain a plurality of cover images;

the scoring module is used for scoring the cover images and determining a highest-scoring target cover image in the cover images;

a determination module to determine the target cover image as a cover image of the video.

According to another aspect of the present disclosure, there is provided an electronic device including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a cover image generation method provided by the present disclosure.

According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a cover image generation method provided by the present disclosure.

According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the cover image generation method provided by the present disclosure.

According to the present disclosure, a plurality of key video clips are acquired from a video, the plurality of key video clips are used to generate a plurality of cover images, and the highest-scoring target cover image among the plurality of cover images is determined as the cover image of the video, so that the propagation effect of the cover image for the video can be improved.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a flow chart of a cover image generation method provided by the present disclosure;

FIG. 2 is a flow chart of another cover image generation method provided by the present disclosure;

FIG. 3 is a block diagram of a cover image generation apparatus provided by the present disclosure;

FIG. 4 is a block diagram of another cover image generation apparatus provided by the present disclosure;

FIG. 5 is a block diagram of an electronic device for implementing a cover image generation method according to an embodiment of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

Referring to fig. 1, fig. 1 is a flowchart of a cover image generation method provided by the present disclosure, as shown in fig. 1, including the following steps:

step S101, acquiring a plurality of key video clips from a video.

The video may be one or more videos. For example, the video may be all or part of a single video, or all or part of multiple single videos of the same type.

The key video clips may be acquired from the video according to comments, bullet screens, play counts, and the like.

Step S102, performing a cover image generation operation on each of the plurality of key video clips to obtain a plurality of cover images.

The performing of the cover image generation operation on the plurality of key video clips may be generating a plurality of corresponding cover images according to the video contents in the plurality of key video clips.

The cover images may be moving images or static images.

Step S103, scoring the plurality of cover images and determining the highest-scoring target cover image among the plurality of cover images.

The scoring of the cover images may be performed in dimensions such as propagation effect and content quality.

The scoring may compute a numeric score for each cover image, or a score grade, and the like.

In the present disclosure, the higher the score of a cover image, the better its expected effect for the video, for example, the better its propagation effect or its content quality.

Step S104, determining the target cover image as the cover image of the video.

Determining the target cover image as the cover image of the video may mean establishing an association between the target cover image and the video. For example, the selected target cover image is stored at a server, associated with the video, and subsequently delivered to viewers of the video.

According to the present disclosure, a plurality of key video clips are acquired from a video, the plurality of key video clips are used to generate a plurality of cover images, and the highest-scoring target cover image among the plurality of cover images is determined as the cover image of the video, so that the propagation effect of the cover image for the video can be improved.
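Steps S102 to S104 above can be sketched in Python as follows, taking the key video clips from step S101 as input. The helper functions `generate_cover` and `score_cover` are illustrative placeholders, not part of the disclosure; a real system would implement the generation operation and the trained scoring model behind them.

```python
from typing import List, Tuple

def generate_cover(clip: str) -> str:
    # Placeholder for the cover image generation operation (S102).
    return f"cover_of_{clip}"

def score_cover(cover: str) -> float:
    # Placeholder scorer; a real system would use the trained model (S103).
    return float(len(cover))

def pick_video_cover(key_clips: List[str]) -> Tuple[str, float]:
    """Generate one cover per key clip, score the covers, and return the
    highest-scoring one as the video's cover (S102-S104)."""
    covers = [generate_cover(clip) for clip in key_clips]
    scored = [(cover, score_cover(cover)) for cover in covers]
    return max(scored, key=lambda pair: pair[1])
```

The structure mirrors the flow of FIG. 1: generation, scoring, and selection are independent stages, so each placeholder can be swapped out without touching the others.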

It should be noted that the cover image generation method provided by the present disclosure may be applied to a server, so that the cover image of the video is generated by the server without manual configuration by the video producer, saving the producer time and labor. In some scenarios, the cover image generation method may also be applied to electronic devices such as mobile phones and computers, and the disclosure is not limited thereto.

As an alternative embodiment, the scoring the cover images and determining the highest scoring target cover image in the cover images includes:

inputting the plurality of cover images into a neural network model for scoring to obtain a scoring screening result, wherein the scoring screening result is used for indicating the highest-scoring target cover image among the plurality of cover images, and the neural network model is a pre-trained model for scoring cover images.

The Neural network model may include, but is not limited to, a Convolutional Neural Network (CNN) model, a Long Short-Term Memory network (LSTM) model, and the like.

The neural network model is a pre-trained model for scoring cover images. The input of the neural network model may be a single cover image and its output the scoring result of that cover image, with the scoring screening result obtained from the scoring results of all the cover images; alternatively, the input may be the plurality of cover images and the output the scoring screening result itself, i.e., the scoring screening result is output directly by the neural network model. The neural network model may score based on dimensions such as cover image integrity and facial expression.

In this embodiment, since the scoring is performed by the neural network model, the accuracy of scoring the cover image can be improved.

It should be noted that the present disclosure is not limited to scoring by a neural network model; for example, in some scenarios or embodiments, the cover images may be scored by an image quality assessment algorithm.

Optionally, the inputting the cover images into a neural network model for scoring to obtain a scoring screening result includes:

inputting the plurality of cover images into the neural network model, and scoring the plurality of cover images in a propagation effect dimension and/or a content quality dimension through the neural network model to obtain the scoring screening result.

The above-mentioned propagation effect can be understood as a promotional effect on the video.

The neural network model can be obtained by training a plurality of cover image training samples and propagation effect scoring training samples and/or content quality scoring training samples corresponding to the cover image training samples.

The neural network model can predict the scoring result of each cover image in the propagation effect dimension and/or the content quality dimension. When both the propagation effect dimension and the content quality dimension are predicted, weights can be configured for the two dimensions in advance, and the final score of a cover image is computed from the two dimension scores and their corresponding weights, yielding the scoring screening result.
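When both dimensions are scored, the weighted fusion described above can be sketched as follows. The weight values are illustrative assumptions, since the disclosure only states that weights are preconfigured.

```python
from typing import Dict, Tuple

def combined_score(propagation: float, quality: float,
                   w_prop: float = 0.6, w_quality: float = 0.4) -> float:
    # Weighted fusion of the two per-dimension scores; the 0.6/0.4 split
    # is an assumption, not a value from the disclosure.
    return w_prop * propagation + w_quality * quality

def screen_covers(dim_scores: Dict[str, Tuple[float, float]]) -> str:
    """Return the cover id with the highest combined score, i.e. the
    scoring screening result. dim_scores maps a cover id to its
    (propagation, quality) scores."""
    return max(dim_scores, key=lambda cid: combined_score(*dim_scores[cid]))
```

For example, with scores `{"a": (0.9, 0.2), "b": (0.5, 0.9)}`, cover "b" wins because its stronger content quality outweighs "a"'s propagation advantage under these weights.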

In this embodiment, the plurality of cover images are scored in the propagation effect dimension and/or the content quality dimension, so that the determined target cover image has better propagation effect and/or content quality, further improving the propagation effect of the cover image for the video.

As an optional implementation, the acquiring a plurality of key video clips from a video includes:

selecting, from the video, a plurality of video clips satisfying preset conditions, wherein the preset conditions comprise at least one of the following: comment data meeting a preset comment condition, and bullet-screen data meeting a preset bullet-screen condition;

identifying the video clips to obtain identification results of the video clips;

performing an editing operation on some or all of the plurality of video clips according to the identification results of the plurality of video clips, to obtain the plurality of key video clips.

The comment data meeting the preset comment condition may mean that the amount of comment data reaches a preset threshold, or that the comment density ranks within the top N, where N is a positive integer.

The bullet-screen data meeting the preset bullet-screen condition may likewise mean that the amount of bullet-screen data reaches a preset threshold, or that the bullet-screen density ranks within the top N.
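A minimal sketch of this selection step, assuming comment (or bullet-screen) counts per candidate clip are already available. The threshold and top-N values are tunable assumptions; only the two criteria themselves come from the disclosure.

```python
from typing import Dict, List

def select_key_clips(counts: Dict[str, int],
                     threshold: int = 0,
                     top_n: int = 0) -> List[str]:
    """Select clip ids whose comment (or bullet-screen) count reaches a
    preset threshold, or which rank in the top N by count. Pass exactly
    one of threshold / top_n."""
    if threshold:
        # Criterion 1: data volume reaches a preset threshold.
        return [cid for cid, n in counts.items() if n >= threshold]
    # Criterion 2: density ranks within the top N.
    ranked = sorted(counts, key=counts.get, reverse=True)
    return ranked[:top_n]
```

In practice the counts would be aggregated from viewer interaction logs before this step runs.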

Performing an editing operation on some or all of the plurality of video clips according to their recognition results, to obtain the plurality of key video clips, may mean editing some or all of the clips into complete video clips, where "complete" refers to complete video content, for example, a complete dance video or a complete introduction video.

In addition, when an editing operation is performed on only part of the plurality of video clips, the remaining video clips need no editing and can be used directly as key video clips.

In this embodiment, a plurality of video clips that viewers are interested in can be selected according to the preset conditions, and an editing operation is performed on some or all of them according to their recognition results, so that key video clips with better video content can be obtained, further improving the propagation effect of the cover image for the video.

It should be noted that the video may be one or more videos. When the video is multiple videos (for example, all videos under a certain broadcaster), the selected video clips may come from one or some of those videos, so that a video clip extracted from one video can serve as the cover of another. For example, according to the scheme provided by the disclosure, the key video clips of one video can be used to generate a cover image, and that cover image can be used as the cover image of all of the broadcaster's videos, improving the cover effect of all the videos.

Optionally, the some or all of the video clips include a target video clip, and the editing operation performed on the target video clip includes at least one of:

adding content to the target video clip, wherein the adding operation comprises adding, to the target video clip, video content in the video that is associated with the target video clip;

deleting content from the target video clip.

The target video segment may be any one of the plurality of video segments.

The video content in the video associated with the target video clip may be video content that is contiguous with the target video clip in the video.

For example, if the target video clip contains a dance video, and the recognition operation shows that the dance video in the clip is incomplete, content contiguous with the dance video is added to the target video clip from the associated video content, to obtain a complete dance video.

As another example, if the recognition operation shows that the target video clip contains a complete dance video plus other video content, the other video content can be deleted so that only the complete dance video is retained.

In this embodiment, at least one of the adding and deleting operations makes the key video clips more complete or more concise, giving them a better propagation effect.
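The add and delete operations can be sketched on a toy representation where the video is a list of per-second recognition labels. This is a simplification for illustration, not the disclosed implementation; a real system would operate on recognized content boundaries rather than per-second labels.

```python
from typing import List, Tuple

def edit_to_complete_clip(video: List[str],
                          clip_span: Tuple[int, int],
                          label: str) -> List[int]:
    """video: per-second recognition labels for the whole video;
    clip_span: (start, end) second indices of the selected clip;
    label: the target content, e.g. "dance".
    Returns the second indices forming the complete key clip."""
    start, end = clip_span
    # Add operation: pull in contiguous seconds of the same content
    # just outside the selected clip.
    while start > 0 and video[start - 1] == label:
        start -= 1
    while end < len(video) and video[end] == label:
        end += 1
    # Delete operation: drop seconds inside the span that do not
    # belong to the target content.
    return [i for i in range(start, end) if video[i] == label]
```

With `video = ["other", "dance", "dance", "dance", "other"]` and a selected span of `(2, 4)`, the clip is extended leftward to capture the full dance and nothing else.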

Optionally, the identifying the plurality of video clips to obtain identification results of the plurality of video clips includes at least one of:

performing scene recognition on the plurality of video clips to obtain scene recognition results of the plurality of video clips;

performing content identification on the plurality of video clips to obtain content identification results of the plurality of video clips;

and performing language identification on the plurality of video clips to obtain language identification results of the plurality of video clips.

The scene recognition may be to recognize a scene to which the video clip belongs, the content recognition may be to recognize video content of the video clip, and the language recognition may be to recognize audio content of the video clip.

In this embodiment, since scene, content, and language recognition are performed on the video clips, the recognition results of the video clips can be obtained accurately, which helps improve the effect of the editing operation on the video clips.

As an alternative embodiment, the cover image generating operation includes at least one of:

text editing, image editing, and audio editing.

The text editing may be to modify, delete or add text corresponding to the key video clip, the image editing may be to modify, delete or add image content corresponding to the key video clip, and the audio editing may be to modify, delete or add audio content corresponding to the key video clip.

At least one of the text editing, image editing, and audio editing may be performed according to preconfigured cover image generation logic.

Further, at least one of the above text editing, image editing, and audio editing may also be performed according to the scene and/or language of the key video clip; for example, different text, images, and audio can be edited for different scenes, and different text and audio for different languages.

In this embodiment, since at least one of text editing, image editing, and audio editing is performed on the key video clips, the effect of the resulting cover image is improved.
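Conditioning the edits on the recognized scene and language could be sketched with a lookup table as below. The concrete table entries, file names, and the `pick_edits` helper are assumptions for illustration; the disclosure only specifies that edits may depend on scene and language.

```python
from typing import Dict, Optional, Tuple

# Hypothetical mapping from (scene, language) to the text and audio
# applied during the cover image generation operation.
COVER_EDITS: Dict[Tuple[str, str], Dict[str, Optional[str]]] = {
    ("dance", "en"): {"text": "Dance highlight!", "audio": "upbeat.mp3"},
    ("food", "en"): {"text": "So delicious!", "audio": "cheerful.mp3"},
}

def pick_edits(scene: str, language: str) -> Dict[str, Optional[str]]:
    # Fall back to no text and no audio when the recognized scene or
    # language has no preconfigured entry.
    return COVER_EDITS.get((scene, language), {"text": "", "audio": None})
```

A table-driven design like this keeps the generation logic fixed while letting operators extend the preconfigured scenes and languages without code changes.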

For example, videos of a food broadcaster often include some delicacies and a clip of the broadcaster saying something like "I'm so happy today," accompanied by exaggerated expressions and fun, emotional visuals, and viewers often comment on such clips. Such video clips can therefore be obtained as key video clips, and a dynamic cover image with matching text can be automatically generated from them through the cover image generation operation. Using this cover image for subsequent videos makes it easier to attract viewers.

According to the present disclosure, a plurality of key video clips are acquired from a video, the plurality of key video clips are used to generate a plurality of cover images, and the highest-scoring target cover image among the plurality of cover images is determined as the cover image of the video, so that the propagation effect of the cover image for the video can be improved.

The method for generating a cover image provided by the present disclosure is exemplified by the embodiment shown in fig. 2, and as shown in fig. 2, the method includes the following steps:

Step S201, guiding comments.

Guiding comments may mean that the server guides viewers to watch the video and comment, for example through animations, tasks, and the like.

Step S202, screening the video for video clips whose comment data meet preset conditions.

The screening can be performed using big data, to accurately identify the parts viewers are interested in and to avoid broadcasters not knowing, or misjudging, their own characteristics and positioning.

Step S203, identifying the scene or language of the video clips.

This step may perform scene recognition through Augmented Reality (AR) technology.

Step S204, when the scene or language is successfully matched, generating a cover image according to the scene or language.

A successful scene or language match may mean that the scene or language of the video clip is among a plurality of preconfigured scenes or languages.

Generating a cover image according to the scene or language may mean generating, from the video clip, a cover image corresponding to that scene or language.

Step S205, scoring the generated cover images and selecting the highest-scoring target cover image.

Step S206, displaying the video with the target cover image.

In the technical solution of the present disclosure, the acquisition, storage, and application of users' personal information comply with relevant laws and regulations and do not violate public order and good morals.

Referring to fig. 3, fig. 3 is a block diagram of a cover image generation apparatus provided by the present disclosure. As shown in fig. 3, the cover image generation apparatus 300 includes:

an obtaining module 301, configured to obtain a plurality of key video clips from a video;

a generating module 302, configured to perform a cover image generating operation on the plurality of key video clips, respectively, to obtain a plurality of cover images;

a scoring module 303, configured to score the cover images and determine a highest-scoring target cover image in the cover images;

a determination module 304 for determining the target cover image as a cover image of the video.

Optionally, the scoring module 303 is configured to input the cover images into a neural network model for scoring to obtain a scoring and screening result, where the scoring and screening result is used to represent a highest-scoring target cover image in the cover images, and the neural network model is a pre-trained model for scoring the cover images.

Optionally, the scoring module 303 is configured to input the cover images into a neural network model, and score the cover images in a propagation effect dimension and/or a content quality dimension through the neural network model to obtain a scoring and screening result.

Optionally, as shown in fig. 4, the obtaining module 301 includes:

a selecting unit 3011, configured to select, from the video, a plurality of video clips satisfying preset conditions, where the preset conditions include at least one of the following: comment data meeting a preset comment condition, and bullet-screen data meeting a preset bullet-screen condition;

an identifying unit 3012, configured to identify the multiple video segments to obtain identification results of the multiple video segments;

an editing unit 3013, configured to perform an editing operation on some or all of the video segments according to the identification results of the video segments, so as to obtain the key video segments.

Optionally, the some or all of the video clips include a target video clip, and the editing operation performed on the target video clip includes at least one of:

adding content to the target video clip, wherein the adding operation comprises adding, to the target video clip, video content in the video that is associated with the target video clip;

deleting content from the target video clip.

Optionally, the identifying unit 3012 is configured to perform at least one of:

performing scene recognition on the plurality of video clips to obtain scene recognition results of the plurality of video clips;

performing content identification on the plurality of video clips to obtain content identification results of the plurality of video clips;

and performing language identification on the plurality of video clips to obtain language identification results of the plurality of video clips.

Optionally, the cover image generating operation includes at least one of:

text editing, image editing, and audio editing.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in FIG. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. The RAM 503 may also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the various methods and processes described above, such as the cover image generation method. For example, in some embodiments, the cover image generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the cover image generation method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the cover image generation method in any other suitable manner (e.g., by way of firmware).

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server that incorporates a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
