Style migration system for automatically generating stylized video

Document No.: 1964741    Publication date: 2021-12-14

Note: This technology, "Style migration system for automatically generating stylized video" (自动生成风格化视频的风格迁移系统), was designed and created by 易佳慧, 艾志博, and 叶香莲 on 2021-08-30. Abstract: The invention discloses a style migration system for automatically generating stylized video, which relates to the field of image processing and comprises: at least one memory configured to store program instructions; and at least one processor configured to execute the program instructions, the program instructions causing the at least one processor to perform the steps of receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames, and applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image, respectively, in the style of the style image. The present disclosure guides the learning process of the neural network by considering the motion changes of the source consecutive frames and the stylized consecutive frames, which can mitigate flicker artifacts between the stylized consecutive frames and thus provide better results in stabilizing video style transfer.

1. A method for stylizing a video frame, comprising: receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames; applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image, respectively, in a style of the style image; applying a loss network model to the first image, the second image, the first stylized image, the second stylized image, and the style image to generate a loss function; determining a set of weights for the style network model based on the generated loss function; and stylizing, by at least one processor, the video frame by applying the style network model having the determined set of weights to the video frame.

2. The method of claim 1, wherein the style network model comprises a first style network and a second style network, and applying the style network model to the first image and the second image comprises: applying the first style network to the first image to generate the first stylized image in the style of the style image; and applying the second style network to the second image to generate the second stylized image in the style of the style image.

3. The method of claim 1, wherein the set of weights for the style network model is determined by minimizing the loss function.

4. The method of claim 3, wherein the loss function includes a content loss related to how well the content of the first image matches the content of the first stylized image and how well the content of the second image matches the content of the second stylized image, a style loss related to how well the first stylized image matches the style of the style image and how well the second stylized image matches the style of the style image, and a temporal loss related to how well the motion change between the first image and the second image matches the motion change between the first stylized image and the second stylized image.

5. The method of claim 4, wherein applying the loss network model to generate the loss function comprises: generating a first content loss associated with a difference between spatial features of the first image and the first stylized image, and a second content loss associated with a difference between spatial features of the second image and the second stylized image; generating a first style loss associated with a difference between style features of the first stylized image and the style image, and a second style loss associated with a difference between style features of the second stylized image and the style image; generating a temporal loss associated with a difference between the motion change between the first image and the second image and the motion change between the first stylized image and the second stylized image; and combining the first content loss, the second content loss, the first style loss, the second style loss, and the temporal loss to generate the loss function.

6. The method of claim 5, wherein the first style loss is a squared Frobenius norm of a difference between the Gram matrix of the first stylized image and the Gram matrix of the style image, and the second style loss is a squared Frobenius norm of a difference between the Gram matrix of the second stylized image and the Gram matrix of the style image.

7. The method of claim 5, wherein the loss network model comprises a first loss network and a second loss network, and applying the loss network model to generate the loss function comprises: applying the first loss network to the first image and the first stylized image to generate the first content loss, and applying the first loss network to the first stylized image and the style image to generate the first style loss; and applying the second loss network to the second image and the second stylized image to generate the second content loss, and applying the second loss network to the second stylized image and the style image to generate the second style loss.

8. The method of claim 1, wherein the style network model and the loss network model are convolutional neural network models.

9. A style migration system for automatically generating stylized video, comprising: at least one memory configured to store program instructions; and at least one processor configured to execute the program instructions, the program instructions causing the at least one processor to perform the steps of: receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames; applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image, respectively, in a style of the style image; applying a loss network model to the first image, the second image, the first stylized image, the second stylized image, and the style image to generate a loss function; determining a set of weights for the style network model based on the generated loss function; and stylizing a video frame by applying the style network model with the determined set of weights to the video frame.

10. The system of claim 9, wherein the style network model comprises a first style network and a second style network, and applying the style network model to the first image and the second image comprises: applying the first style network to the first image to generate the first stylized image in the style of the style image; and applying the second style network to the second image to generate the second stylized image in the style of the style image.

Technical Field

The present disclosure relates to the field of image processing, and more particularly, to a method for stylizing video frames and a style migration system for automatically generating stylized video.

Background

An image or video may be reconstructed in the style of a style image, or reference image, using style transfer techniques. For example, video frames may be stylized in the style of Vincent van Gogh's "The Starry Night".

Video style transfer converts an original frame sequence into a stylized frame sequence. This can provide a more impressive effect for the user than a conventional filter that merely changes hue or color distribution. In addition, there is no limit to the number of style filters that can be created, which can greatly enrich products (such as video albums) on electronic devices such as smartphones.

The techniques used in video style transfer can be classified into image-based solutions and video-based solutions, as described below.

1) Image-based solution

Image style transfer methods learn a style and apply it to other images. Briefly, an image style transfer method uses gradient descent, starting from white noise, to synthesize an image that matches the content of the source image and the style of the reference image. A feed-forward network can be used to reduce computation time while achieving image style transfer.
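
As a rough illustration of this kind of image-based method (a sketch only, not the method of the present disclosure), the following Python code synthesizes an image from white noise by gradient descent against content and style losses; the helper extract_features, the Gram normalization, and the loss weights are assumptions.

```python
# Illustrative sketch only: image style transfer by gradient descent from white noise.
# `extract_features(img)` is assumed to return a list of convolutional feature maps
# from a fixed, pretrained loss network (e.g. a VGG-like CNN); it is a placeholder here.
import torch

def gram(feat):
    # feat: (1, C, H, W) -> (C, C) Gram matrix of channel correlations
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def image_style_transfer(content_img, style_img, extract_features,
                         steps=300, content_w=1.0, style_w=1e3, lr=0.05):
    target_content = [f.detach() for f in extract_features(content_img)]
    target_grams = [gram(f).detach() for f in extract_features(style_img)]
    x = torch.randn_like(content_img, requires_grad=True)   # start from white noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract_features(x)
        content_loss = sum(torch.mean((f - t) ** 2)
                           for f, t in zip(feats, target_content))
        style_loss = sum(torch.mean((gram(f) - g) ** 2)
                         for f, g in zip(feats, target_grams))
        loss = content_w * content_loss + style_w * style_loss
        loss.backward()
        opt.step()
    return x.detach()
```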

Most image-based video style transfer methods build on image style transfer methods and apply image style transfer to the video on a frame-by-frame basis. However, this scheme inevitably introduces temporal inconsistencies into the stylized video, causing severe flicker artifacts and style inconsistencies of moving objects between successive stylized frames.

2) Video-based solution

Video-based solutions attempt to perform video style transfer directly in the video domain. For example, a conventional method obtains a stable video by penalizing deviations from the optical flow of the input video: as elements in the original video move, the style features attached to them remain consistent from frame to frame. However, this approach is computationally too heavy for real-time style transfer, requiring several minutes per frame.
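
A minimal sketch of such an optical-flow penalty, assuming a backward flow field and an occlusion mask supplied by an external flow estimator (the warping details and helper names are assumptions, not part of the cited prior art):

```python
# Illustrative sketch: temporal penalty based on optical flow (video-based solutions).
# `flow` is a backward flow field of shape (1, 2, H, W) mapping frame t to frame t-1,
# and `mask` marks pixels where the flow is valid (non-occluded); both come from an
# external optical-flow estimator and are assumed inputs here (batch size 1 for brevity).
import torch
import torch.nn.functional as F

def warp(prev, flow):
    _, _, h, w = prev.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=prev.device),
                            torch.arange(w, device=prev.device), indexing="ij")
    # shift the sampling grid by the flow and normalize to [-1, 1] for grid_sample
    gx = 2.0 * (xs.float() + flow[0, 0]) / (w - 1) - 1.0
    gy = 2.0 * (ys.float() + flow[0, 1]) / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1).unsqueeze(0)  # (1, H, W, 2)
    return F.grid_sample(prev, grid, align_corners=True)

def optical_flow_penalty(stylized_t, stylized_prev, flow, mask):
    warped = warp(stylized_prev, flow)
    return torch.mean(mask * (stylized_t - warped) ** 2)
```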

Therefore, there is a need to solve the problems of the prior art in this field.

Disclosure of Invention

The purpose of the present disclosure is to provide a style migration system for automatically generating stylized video.

In a first aspect of the disclosure, a method for stylizing a video frame comprises:

receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames;

applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image, respectively, in a style of the style image;

applying a loss network model to the first image, the second image, the first stylized image, the second stylized image, and the style image to generate a loss function;

determining a set of weights for the style network model based on the generated loss function; and

stylizing, by at least one processor, the video frame by applying the style network model having the determined set of weights to the video frame; an illustrative sketch of these steps follows.
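
A minimal sketch, in PyTorch-style Python, of how these steps could be wired together during training and then used for stylization; the names style_net, total_loss, and the optimizer setup are illustrative assumptions rather than the exact components of the present disclosure.

```python
# Illustrative training-step sketch for the feed-forward style network.
# `style_net` is the style network being trained; `total_loss` combines content,
# style, and temporal losses as described below. All names are assumptions.
import torch

def train_step(style_net, optimizer, frame1, frame2, style_img, total_loss):
    # 1) stylize the two consecutive frames in the style of `style_img`
    stylized1 = style_net(frame1)
    stylized2 = style_net(frame2)
    # 2) evaluate the loss function over the frames, stylized frames, and style image
    loss = total_loss(frame1, frame2, stylized1, stylized2, style_img)
    # 3) determine the set of weights of the style network by minimizing the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def stylize_video(style_net, frames):
    # 4) after training, apply the style network (with its determined weights) per frame
    with torch.no_grad():
        return [style_net(f) for f in frames]
```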

According to an embodiment in combination with the first aspect of the disclosure, the style network model comprises a first style network and a second style network, and applying the style network model to the first image and the second image comprises:

applying the first style network to the first image to generate the first stylized image in the style of the style image; and

applying the second style network to the second image to generate the second stylized image in the style of the style image.

According to an embodiment incorporating the first aspect of the present disclosure, the set of weights of the style network model is determined by minimizing the loss function.

According to an embodiment incorporating the first aspect of the present disclosure, the loss function comprises a content loss related to a degree to which the content of the first image matches the content of the first stylized image and the content of the second image matches the content of the second stylized image, a style loss related to a degree to which the style of the first stylized image matches the style of the style image and the style of the second stylized image matches the style of the style image, and a temporal loss related to a degree to which the motion change between the first image and the second image matches the motion change between the first stylized image and the second stylized image.

According to an embodiment in combination with the first aspect of the disclosure, applying the loss network model to generate the loss function comprises:

generating a first content loss associated with a difference between spatial features of the first image and the first stylized image and a second content loss associated with a difference between spatial features of the second image and the second stylized image;

generating a first style loss associated with a difference between style features of the first stylized image and the style image, and a second style loss associated with a difference between style features of the second stylized image and the style image;

generating a temporal loss associated with a difference between the motion change between the first image and the second image and the motion change between the first stylized image and the second stylized image; and

combining the first content loss, the second content loss, the first style loss, the second style loss, and the temporal loss to generate the loss function, as shown in the sketch below.
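
A minimal sketch of such a combination; the weighting coefficients are illustrative hyperparameters, not values specified by the present disclosure.

```python
# Illustrative sketch: combining the per-frame content and style losses with the
# temporal loss into a single loss function. The weights are assumed hyperparameters.
def combine_losses(content1, content2, style1, style2, temporal,
                   alpha=1.0, beta=10.0, gamma=100.0):
    content_loss = content1 + content2
    style_loss = style1 + style2
    return alpha * content_loss + beta * style_loss + gamma * temporal
```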

According to an embodiment incorporating the first aspect of the present disclosure, the first style loss is a squared Frobenius norm of the difference between the Gram matrix of the first stylized image and the Gram matrix of the style image, and the second style loss is a squared Frobenius norm of the difference between the Gram matrix of the second stylized image and the Gram matrix of the style image.
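
A minimal sketch of this style loss, where the Gram matrix is computed from a feature map of the loss network and the normalization factor is an assumption.

```python
# Illustrative sketch: style loss as the squared Frobenius norm of the difference
# between Gram matrices of feature maps, L_style = || G(stylized) - G(style) ||_F^2.
import torch

def gram_matrix(feat):
    # feat: (N, C, H, W) feature map from the loss network
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)  # normalization is an assumption

def style_loss(stylized_feat, style_feat):
    diff = gram_matrix(stylized_feat) - gram_matrix(style_feat)
    return torch.sum(diff ** 2)  # squared Frobenius norm
```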

According to an embodiment in combination with the first aspect of the disclosure, the loss network model comprises a first loss network and a second loss network, and applying the loss network model to generate the loss function comprises:

applying the first loss network to the first image and the first stylized image to generate the first content loss, and applying the first loss network to the first stylized image and the style image to generate the first style loss; and

applying the second loss network to the second image and the second stylized image to generate the second content loss, and applying the second loss network to the second stylized image and the style image to generate the second style loss. A sketch of such a feature-extracting loss network is given below.
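
A loss network of this kind is typically a fixed, pretrained convolutional network used only for feature extraction. The sketch below uses torchvision's VGG-16 with assumed layer indices for content and style features; these choices are illustrative, not those mandated by the disclosure.

```python
# Illustrative sketch of a fixed, pretrained loss network used to extract features.
# The choice of VGG-16 and of the tapped layer indices is an assumption.
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class LossNetwork(nn.Module):
    def __init__(self, content_layer=8, style_layers=(3, 8, 15, 22)):
        super().__init__()
        self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)          # the loss network itself is not trained
        self.content_layer = content_layer
        self.style_layers = set(style_layers)

    def forward(self, x):
        content_feat, style_feats = None, []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i == self.content_layer:
                content_feat = x              # spatial features for the content loss
            if i in self.style_layers:
                style_feats.append(x)         # features whose Gram matrices give the style loss
        return content_feat, style_feats
```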

According to an embodiment incorporating the first aspect of the disclosure, the style network model and the loss network model are convolutional neural network models.

In a second aspect of the disclosure, a style migration system for automatically generating stylized video comprises:

at least one memory configured to store program instructions;

at least one processor configured to execute the program instructions, the program instructions causing the at least one processor to perform the steps of:

receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames;

applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image, respectively, in a style of the style image;

applying a loss network model to the first image, the second image, the first stylized image, the second stylized image, and the style image to generate a loss function;

determining a set of weights for the style network model based on the generated loss function; and

stylizing a video frame by applying the style network model with the determined set of weights to the video frame.

According to an embodiment incorporating the second aspect of the present disclosure, the style network model comprises a first style network and a second style network, and applying the style network model to the first image and the second image comprises:

applying the first style network to the first image to generate the first stylized image in the style of the style image; and

applying the second style network to the second image to generate the second stylized image in the style of the style image.

According to an embodiment incorporating the second aspect of the present disclosure, the set of weights of the style network model is determined by minimizing the loss function.

According to an embodiment incorporating the second aspect of the present disclosure, the loss function comprises a content loss related to a degree to which the content of the first image matches the content of the first stylized image and the content of the second image matches the content of the second stylized image, a style loss related to a degree to which the style of the first stylized image matches the style of the style image and the style of the second stylized image matches the style of the style image, and a temporal loss related to a degree to which the motion change between the first image and the second image matches the motion change between the first stylized image and the second stylized image.

According to an embodiment in combination with the second aspect of the disclosure, applying the loss network model to generate the loss function comprises:

generating a first content loss associated with a difference between spatial features of the first image and the first stylized image and a second content loss associated with a difference between spatial features of the second image and the second stylized image;

generating a first style loss associated with a difference between style features of the first stylized image and the style image, and a second style loss associated with a difference between style features of the second stylized image and the style image;

generating a temporal loss associated with a difference between the motion change between the first image and the second image and the motion change between the first stylized image and the second stylized image; and

combining the first content loss, the second content loss, the first style loss, the second style loss, and the temporal loss to generate the loss function.

According to an embodiment in combination with the second aspect of the disclosure, the loss network model comprises a first loss network and a second loss network, and applying the loss network model to generate the loss function comprises:

applying the first loss network to the first image and the first stylized image to generate the first content loss, and applying the first loss network to the first stylized image and the style image to generate the first style loss; and

applying the second loss network to the second image and the second stylized image to generate the second content loss, and applying the second loss network to the second stylized image and the style image to generate the second style loss.

In a third aspect of the disclosure, a non-transitory computer readable medium stores program instructions that, when executed by at least one processor, cause the at least one processor to perform steps comprising:

receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames;

applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image, respectively, in a style of the style image;

applying a loss network model to the first image, the second image, the first stylized image, the second stylized image, and the style image to generate a loss function;

determining a set of weights for the style network model based on the generated loss function; and

stylizing a video frame by applying the style network model with the determined set of weights to the video frame.

According to an embodiment in combination with the third aspect of the disclosure, the style network model comprises a first style network and a second style network, and applying the style network model to the first image and the second image comprises:

applying the first style network to the first image to generate the first stylized image in the style of the style image; and

applying the second style network to the second image to generate the second stylized image in the style of the style image.

According to an embodiment incorporating the third aspect of the present disclosure, the set of weights of the style network model is determined by minimizing the loss function.

According to an embodiment incorporating the third aspect of the present disclosure, the loss function comprises a content loss related to a degree to which the content of the first image matches the content of the first stylized image and the content of the second image matches the content of the second stylized image, a style loss related to a degree to which the style of the first stylized image matches the style of the style image and the style of the second stylized image matches the style of the style image, and a temporal loss related to a degree to which the motion change between the first image and the second image matches the motion change between the first stylized image and the second stylized image.

According to an embodiment in combination with the third aspect of the disclosure, applying the loss network model to generate the loss function comprises:

generating a first content loss associated with a difference between spatial features of the first image and the first stylized image and a second content loss associated with a difference between spatial features of the second image and the second stylized image;

generating a first style loss associated with a difference between style features of the first stylized image and the style image, and a second style loss associated with a difference between style features of the second stylized image and the style image;

generating a temporal loss associated with a difference between the motion change between the first image and the second image and the motion change between the first stylized image and the second stylized image; and

combining the first content loss, the second content loss, the first style loss, the second style loss, and the temporal loss to generate the loss function.

According to an embodiment in combination with the third aspect of the disclosure, the loss network model comprises a first loss network and a second loss network, and applying the loss network model to generate the loss function comprises:

applying the first loss network to the first image and the first stylized image to generate the first content loss, and applying the first loss network to the first stylized image and the style image to generate the first style loss; and

applying the second loss network to the second image and the second stylized image to generate the second content loss, and applying the second loss network to the second stylized image and the style image to generate the second style loss.

In the present disclosure, a loss function is constructed in consideration of the first image, the second image, the first stylized image, the second stylized image, and the style image to improve the stability of video style transfer. Rather than blindly forcing consecutive stylized frames to be exactly the same, the present disclosure guides the learning process of the neural network by considering the motion changes of the source consecutive frames and of the stylized consecutive frames, which may mitigate flicker artifacts between stylized consecutive frames and provide better results in stabilizing video style transfer. Other advantages of the present disclosure include better network convergence (due to the better-behaved temporal loss) and no additional computational burden at runtime.
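
For example, at runtime the trained feed-forward style network is simply applied to each frame in turn, so no per-frame optimization or optical-flow computation is needed. The sketch below illustrates this; read_frames and write_frames are assumed video I/O helpers, not part of the disclosure.

```python
# Illustrative sketch: runtime stylization is a single forward pass per frame with the
# trained style network, adding no optimization or optical-flow cost at runtime.
import torch

def stylize_video_file(style_net, read_frames, write_frames, in_path, out_path):
    style_net.eval()
    stylized = []
    with torch.no_grad():
        for frame in read_frames(in_path):      # frame: (1, 3, H, W) tensor (assumed)
            stylized.append(style_net(frame))
    write_frames(out_path, stylized)
```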

Detailed Description

The embodiments of the present disclosure, their technical problems, structural features, and objects are described in detail below. In particular, the terminology used in the embodiments of the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the invention. For video style transfer, the present disclosure introduces a temporal stability mechanism that takes into account the motion changes of the source and stylized consecutive frames, i.e., the source and stylized motion changes are synchronized. This yields better results in terms of stabilizing video style transfer. Unlike some conventional style transfer methods, which introduce a large computational burden at runtime, the present disclosure allows for real-time, flicker-free style transfer of video.
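
One simple way to express this synchronization (an illustrative sketch, not necessarily the exact formulation of the disclosure) is to penalize the difference between the frame-to-frame change of the source frames and the frame-to-frame change of the stylized frames:

```python
# Illustrative sketch: temporal loss that matches the motion change between the source
# frames to the motion change between the stylized frames, rather than forcing the
# stylized frames to be identical. Formulation details are assumptions.
import torch

def temporal_loss(frame1, frame2, stylized1, stylized2):
    source_change = frame2 - frame1          # motion change of the source frames
    stylized_change = stylized2 - stylized1  # motion change of the stylized frames
    return torch.mean((stylized_change - source_change) ** 2)
```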

It will be understood by those of ordinary skill in the art that each of the units, modules, algorithms, and steps described and disclosed in the embodiments of the present disclosure may be implemented using electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the application and the design requirements of the solution. Those of ordinary skill in the art may implement the functionality of each particular application in different ways without departing from the scope of the present disclosure.

It will be appreciated by those skilled in the art that, since the operations of the systems, apparatuses, and modules described above are substantially the same as those in the foregoing embodiments, reference may be made to the corresponding descriptions above. For ease and simplicity of description, these operations are not described in detail again.

It is to be understood that the systems, devices, and methods disclosed in the embodiments of the present disclosure may be implemented in other ways. The above embodiments are merely exemplary. The division of the modules is based solely on logical functions, and other divisions are possible in actual implementation. Multiple modules or components may be combined or integrated into another system, and some features may be omitted or skipped. On the other hand, the mutual coupling, direct coupling, or communicative coupling shown or discussed may be implemented through certain ports, devices, or modules, whether electrically, mechanically, or in other forms.

Modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network nodes. Some or all of the modules may be selected according to the needs of the embodiments.

Further, the functional modules in each embodiment may be integrated into one processing module, may exist physically separately, or two or more modules may be integrated into one processing module.

If the functional modules are implemented as software and used or sold as a standalone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure, in essence or in the part that contributes over the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computing device (such as a personal computer, a server, or a network device) to execute all or some of the steps disclosed by the embodiments of the present disclosure. The storage medium includes a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a floppy disk, or another type of medium capable of storing program code.

While the present disclosure has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the disclosure is not to be limited to the disclosed embodiment, but is intended to cover various configurations which may be made without departing from the broadest interpretation of the appended claims.
