Virtual article display method and device and computer readable storage medium

Document No.: 138102 | Publication date: 2021-10-22

Reading note: this technology, "Virtual article display method, device and computer readable storage medium" (虚拟物品展示方法、装置和计算机可读存储介质), was designed and created by 段庆龙 on 2020-04-16. Main content: the embodiment of the application discloses a virtual article display method, a virtual article display device and a computer readable storage medium. The method comprises: receiving a virtual article sending instruction, wherein the instruction carries a virtual article identifier; acquiring, based on the identifier, an article image and a target dynamic effect video which together form the virtual article, the target dynamic effect video comprising at least one video frame; determining the position information of each pixel point in the article image fusion area of the video frame; acquiring the pixel information at the corresponding position in the article image according to that position information; fusing this pixel information with the pixel information at the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and sending the virtual article dynamic effect video to the terminal of a target object for dynamic effect display of the virtual article. The scheme can effectively improve the richness of virtual article display and improve the visual effect.

1. A virtual article display method, comprising:

receiving a virtual article sending instruction, wherein the virtual article sending instruction carries a virtual article identifier;

acquiring an article image and a target dynamic effect video which form a virtual article based on the virtual article identifier, wherein the target dynamic effect video comprises at least one video frame;

determining the position information of each pixel point in the article image fusion area in the video frame;

acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video;

and sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

2. The method of claim 1, wherein the video frame includes an animation fusion area, the position information includes global position information and relative position information, and the determining the position information of each pixel point in the article image fusion area in the video frame comprises:

carrying out pixel identification on the animation fusion area, and determining an article image fusion area in the animation fusion area;

calculating the global position information of each pixel point in the article image fusion area in the animation fusion area; and

calculating the relative position information of each pixel point in the article image fusion area.

3. The method according to claim 2, wherein the calculating the relative position information of each pixel point in the item image fusion area comprises:

calculating the percentage of each pixel point in the article image fusion area in the first direction within the article image fusion area;

calculating the percentage of each pixel point in the article image fusion area in the second direction within the article image fusion area;

and determining the relative position information of the pixel points in the article image fusion area according to the percentage of the first direction and the percentage of the second direction.

4. The method according to claim 3, wherein the obtaining pixel information of the corresponding position in the article image according to the position information comprises:

determining the position information of the pixel points in the first direction in the article image according to the percentage of the first direction and the length of the article image;

determining the position information of the pixel points in the second direction in the article image according to the percentage of the second direction and the width of the article image;

and acquiring pixel information of a corresponding position based on the position information in the first direction and the position information in the second direction.

5. The method according to claim 2, wherein the video frame includes an animated special effect region, and the obtaining pixel information of a corresponding position in the item image according to the position information and fusing the pixel information with pixel information of a corresponding position in a target dynamic effect video to obtain a virtual item dynamic effect video includes:

acquiring pixel information of a corresponding position in the article image according to the relative position information;

mapping the global position information to the animation special effect area to obtain corresponding pixel points in the animation special effect area;

and fusing the pixel information with pixel information of corresponding pixel points in the animation special effect area to obtain a virtual article dynamic effect video.

6. The method according to claim 1, wherein the fusing the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain the virtual article dynamic effect video comprises:

judging whether pixel points of the article image are located at the edge of the article image or not;

if the pixel point is located at the edge of the article image, performing fusion processing on pixel information of the pixel point to obtain fused pixel information, and replacing the pixel information of the corresponding position in the target dynamic effect video by using the fused pixel information to obtain a virtual article dynamic effect video;

and if the pixel point is not positioned at the edge of the article image, replacing the pixel information of the corresponding position in the target dynamic effect video by using the pixel information to obtain the virtual article dynamic effect video.

7. The method of claim 6, wherein determining whether the pixel points of the item image are located at the edge of the item image comprises:

acquiring pixel information and position information of each pixel point in an article image;

determining an adjacent pixel point of the pixel points based on the position information;

and acquiring the pixel information of the adjacent pixel points, and judging whether the pixel points are positioned at the edge of the article image according to the pixel information of the pixel points and the pixel information of the adjacent pixel points.

8. The method according to claim 7, wherein the fusing the pixel information of the pixel point to obtain fused pixel information includes:

respectively determining the pixel value of the pixel point and the adjacent pixel value of the adjacent pixel point according to the pixel information of the pixel point and the pixel information of the adjacent pixel point;

calculating an average of the pixel value and the neighboring pixel value;

and carrying out fusion processing on the pixel value and the average value to obtain fused pixel information.

9. The method of claim 7, wherein the obtaining pixel information and position information for each pixel point in the image of the item comprises:

loading the article image to obtain a target texture corresponding to the article image;

and sampling the target texture to obtain pixel information and position information of each pixel point.

10. The method according to claim 1, wherein after the fusing of the pixel information with the pixel information of the corresponding position in the target dynamic effect video, the method further comprises:

acquiring a non-article image fusion area in a target dynamic effect video;

and setting each pixel point in the non-article image fusion area as a transparent pixel point.

11. The method of claim 1, wherein receiving the virtual item send instruction comprises:

displaying a user interaction page, the user interaction page including a virtual item selection control;

when the triggering operation of the user for the virtual article selection control is detected, displaying a virtual article selection page;

and receiving the virtual article sending instruction when the user performs a sending operation on a virtual article in the virtual article selection page.

12. The method according to claim 1, wherein the acquiring, based on the virtual article identifier, of the article image and the target dynamic effect video which form the virtual article, the target dynamic effect video comprising at least one video frame, comprises:

loading, by using an animation component and based on the virtual article identifier, the article image and the target dynamic effect video which form the virtual article, wherein the animation component comprises a decoder;

and parsing the target dynamic effect video by using the decoder to obtain the at least one video frame.

13. A virtual article display device, comprising:

the receiving unit is used for receiving a virtual article sending instruction, and the virtual article sending instruction carries a virtual article identifier;

the acquisition unit is used for acquiring an article image and a target dynamic effect video which form a virtual article based on the virtual article identification, wherein the target dynamic effect video comprises at least one video frame;

the determining unit is used for determining the position information of each pixel point in the article image fusion area in the video frame;

the fusion unit is used for acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information and the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video;

and the sending unit is used for sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

14. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the virtual article display method of any one of claims 1 to 12.

15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method according to any of claims 1 to 12 are implemented when the program is executed by the processor.

Technical Field

The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for displaying a virtual object, and a computer-readable storage medium.

Background

With the development of electronic technology and the wide popularization of electronic devices (such as mobile phones, tablet computers, and the like), electronic devices support more and more applications and increasingly powerful functions; they are developing towards diversification and personalization and have become indispensable electronic products in users' daily lives.

In existing electronic device applications, a user may select a specific virtual gift from the candidate virtual gifts provided by the application platform and send it to a target object (such as a relative, friend, or coworker). When the user sends the gift, it may be displayed on the target object's electronic device, or displayed simultaneously on both the user's and the target object's electronic devices. However, the current way of displaying such gifts is monotonous, and the visual effect is poor.

Disclosure of Invention

The embodiment of the application provides a virtual article display method, a virtual article display device and a computer-readable storage medium, which can effectively improve the richness of virtual article display and improve the visual effect.

The embodiment of the application provides a virtual article display method, which comprises the following steps:

receiving a virtual article sending instruction, wherein the virtual article sending instruction carries a virtual article identifier;

acquiring an article image and a target dynamic effect video which form a virtual article based on the virtual article identifier, wherein the target dynamic effect video comprises at least one video frame;

determining the position information of each pixel point in the article image fusion area in the video frame;

acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video;

and sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

Correspondingly, the embodiment of the present application further provides a virtual article display device, including:

the receiving unit is used for receiving a virtual article sending instruction, and the virtual article sending instruction carries a virtual article identifier;

the acquisition unit is used for acquiring an article image and a target dynamic effect video which form a virtual article based on the virtual article identification, wherein the target dynamic effect video comprises at least one video frame;

the computing unit is used for determining the position information of each pixel point in the article image fusion area in the video frame;

the fusion unit is used for acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information and the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video;

and the sending unit is used for sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

Optionally, in some embodiments, the video frame includes an animation fusion area, the position information includes global position information and relative position information, and the computing unit may include a determining subunit, a first calculating subunit, and a second calculating subunit, as follows:

the determining subunit is configured to perform pixel identification on the animation fusion region, and determine an article image fusion region in the animation fusion region;

the first calculating subunit is configured to calculate global position information of each pixel point in the article image fusion region in the animation fusion region;

and the second calculating subunit is used for calculating the relative position information of each pixel point in the article image fusion area.

Optionally, in some embodiments, the second calculating subunit may be specifically configured to calculate the percentage of each pixel point in the article image fusion area in the first direction within the article image fusion area; calculate the percentage of each pixel point in the article image fusion area in the second direction within the article image fusion area; and determine the relative position information of the pixel points in the article image fusion area according to the percentage of the first direction and the percentage of the second direction.

Optionally, in some embodiments, the fusion unit may be specifically configured to determine, according to the percentage of the first direction and the length of the item image, position information of the pixel point in the item image in the first direction; determining the position information of the pixel points in the second direction in the article image according to the percentage of the second direction and the width of the article image; and acquiring pixel information of a corresponding position based on the position information in the first direction and the position information in the second direction.

Optionally, in some embodiments, the video frame includes an animation special effect region, and the fusion unit may be specifically configured to obtain pixel information of a corresponding position in the article image according to the relative position information; mapping the global position information to the animation special effect area to obtain corresponding pixel points in the animation special effect area; and fusing the pixel information with pixel information of corresponding pixel points in the animation special effect area to obtain a virtual article dynamic effect video.

Optionally, in some embodiments, the fusion unit may include a judgment subunit and a fusion subunit, as follows:

the judging subunit is used for judging whether the pixel points of the article image are positioned at the edge of the article image;

the fusion subunit is configured to perform fusion processing on the pixel information of the pixel point to obtain fused pixel information if the pixel point is located at the edge of the article image, and replace the pixel information at the corresponding position in the target dynamic effect video with the fused pixel information to obtain a virtual article dynamic effect video; and if the pixel point is not positioned at the edge of the article image, replacing the pixel information of the corresponding position in the target dynamic effect video by using the pixel information to obtain the virtual article dynamic effect video.

Optionally, in some embodiments, the judging subunit may be specifically configured to obtain the pixel information and position information of each pixel point in the article image; determine an adjacent pixel point of the pixel points based on the position information; and obtain the pixel information of the adjacent pixel points and judge whether the pixel points are located at the edge of the article image according to the pixel information of the pixel points and the pixel information of the adjacent pixel points.

Optionally, in some embodiments, the fusion subunit may be specifically configured to determine the pixel value of the pixel point and the adjacent pixel value of the adjacent pixel point according to the pixel information of the pixel point and the pixel information of the adjacent pixel point respectively; calculating an average of the pixel value and the neighboring pixel value; and carrying out fusion processing on the pixel value and the average value to obtain fused pixel information.

Optionally, in some embodiments, the judging subunit may be specifically configured to load the article image by using an open graphics library, so as to obtain a target texture corresponding to the article image; and sample the target texture to obtain the pixel information and position information of each pixel point.

Optionally, in some embodiments, the virtual article display apparatus may further include a transparent processing unit, where the transparent processing unit may be specifically configured to acquire a non-article image fusion region in the target dynamic effect video; and setting each pixel point in the non-article image fusion area as a transparent pixel point.

Optionally, in some embodiments, the receiving unit may be specifically configured to display a user interaction page, where the user interaction page includes a virtual article selection control; display a virtual article selection page when a triggering operation of the user on the virtual article selection control is detected; and receive the virtual article sending instruction when the user performs a sending operation on a virtual article in the virtual article selection page.

Optionally, in some embodiments, the obtaining unit may be specifically configured to load, based on the virtual article identifier, the article image and the target dynamic effect video that constitute the virtual article by using an animation component, where the animation component includes a decoder; and parse the target dynamic effect video by using the decoder to obtain at least one video frame.

In addition, a computer-readable storage medium is provided, where multiple instructions are stored, and the instructions are suitable for being loaded by a processor to perform steps in any one of the virtual article display methods provided in the embodiments of the present application.

In addition, an electronic device is further provided in an embodiment of the present application, and includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps in any one of the virtual article display methods provided in the embodiment of the present application.

The method comprises the steps of receiving a virtual article sending instruction, wherein the virtual article sending instruction carries a virtual article identifier, then acquiring an article image and a target dynamic effect video which form a virtual article based on the virtual article identifier, wherein the target dynamic effect video comprises at least one video frame, then determining position information of each pixel point in an article image fusion area in the video frame, then acquiring pixel information of a corresponding position in the article image according to the position information, fusing the pixel information and the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video, and then sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article; the scheme can effectively improve the richness of virtual article display and improve the visual effect.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

Fig. 1a is a schematic view of a scene of a virtual article display method provided in an embodiment of the present application;

fig. 1b is a flowchart of a virtual article displaying method provided in an embodiment of the present application;

FIG. 1c is an architecture diagram of an animation component provided by an embodiment of the present application;

FIG. 1d is a flow diagram of the processing of an animation component provided by an embodiment of the application;

FIG. 1e is a schematic flow chart of edge blending provided in the embodiments of the present application;

fig. 2a is another flowchart of a virtual article displaying method provided in an embodiment of the present application;

FIG. 2b is a diagram of a user interaction page provided by an embodiment of the present application;

FIG. 2c is a diagram of a virtual item selection page provided by an embodiment of the present application;

fig. 2d is a schematic switching page of a virtual article according to an embodiment of the present application;

fig. 2e is a schematic view of processing a pixel point in the virtual article display method according to the embodiment of the present application;

FIG. 2f is a comparison graph before and after the filtering smoothing process provided by the embodiment of the present application;

fig. 2g is a page display diagram of a virtual article sent according to an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a virtual article display apparatus according to an embodiment of the present application;

fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The embodiment of the application provides a virtual article display method, a virtual article display device and a computer-readable storage medium. The virtual article display device may be integrated in an electronic device, and the electronic device may be a server or a terminal.

For example, referring to fig. 1a, the terminal integrated with the virtual article display apparatus may first receive a virtual article sending instruction triggered by a user, where the instruction carries a virtual article identifier. An article image and a target dynamic effect video that form the virtual article are then obtained based on the virtual article identifier, the target dynamic effect video comprising at least one video frame. Next, the article image fusion area in each video frame is determined and the position information of each pixel point in that area is calculated. The pixel information at the corresponding position in the article image is then acquired according to the position information and fused with the pixel information at the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video, which is finally sent to the terminal of the target object for dynamic effect display of the virtual article.

According to this scheme, animation effects can be combined by means of pixel identification and fusion, providing rich animation effects: elements in animation video frames can be replaced and synthesized, the result can be reused in similar usage scenarios (solving the low-efficiency problem in such scenarios), and the effect remains customizable, so the visual result is rich and personalized. A transparent animation effect is achieved through transparent processing of video frames to solve the occlusion problem, improving the user's visual experience. With this scheme, after the user selects a virtual article, the article can be presented over the video as a transparent animation, or fused with a background dynamic effect to form a new animation presented over the video; a user-defined filtering algorithm is applied when sampling the target texture, eliminating the jagged edges that appear when pixels are fused into video frames. This effectively improves the richness of virtual article display and improves the visual effect.

The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.

The embodiment will be described from the perspective of a virtual article display apparatus, which may be specifically integrated in an electronic device, where the electronic device may include a mobile phone, a tablet computer, a notebook computer, a terminal, a personal computer (PC), and other devices.

A virtual item display method, comprising: receiving a virtual article sending instruction, wherein the virtual article sending instruction carries a virtual article identifier, then acquiring an article image and a target dynamic effect video which form a virtual article based on the virtual article identifier, wherein the target dynamic effect video comprises at least one video frame, then determining position information of each pixel point in an article image fusion area in the video frame, then acquiring pixel information of a corresponding position in the article image according to the position information, fusing the pixel information and the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video, and then sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

As shown in fig. 1b, the specific flow of the virtual article display method may be as follows:

101. Receiving a virtual article sending instruction, wherein the virtual article sending instruction carries a virtual article identifier.

The virtual article sending instruction may carry a virtual article identifier. The virtual article identifier refers to information that can be used to identify a virtual article, such as a number or a code of a virtual cake, a virtual flower, a virtual duck, and the like, and specifically may be an Identity Document (ID) of the virtual article.

A virtual article is a non-physical, intangible article. Unlike articles produced in the real world, it is visible but untouchable: a product of the virtual world. For example, a virtual gift may be sent through a network, such as a virtual flower or cake sent to a friend or stranger, a gift sent to a streamer while watching a live broadcast, or an item derived from a virtual online game world, such as equipment, a weapon, and the like.

The virtual article sending instruction may be generated by the virtual article display device itself upon receiving a virtual article request sent by a user, or may be generated by another device, such as a terminal, after receiving the virtual article request and then provided to the virtual article display device. That is, the virtual article display device may generate the virtual article sending instruction itself, or receive one sent by another device.

For example, a user interaction page may be displayed, the user interaction page including a virtual article selection control, the virtual article selection page being displayed when a triggering operation of the user on the virtual article selection control is detected, and the virtual article sending instruction being received when the user sends an operation on a virtual article of the virtual article selection page.

The control may take the form of an icon, an input box, a button, and the like. For example, a user interaction page through which the user communicates with a target object may be displayed on the user's terminal, and the user interaction page may include a virtual article selection button, such as a "know-want" button in friend-making software. The user may perform a triggering operation on the button, such as clicking or sliding. When the terminal detects the triggering operation on the virtual article selection control, a virtual article selection page may be displayed, in which the user can select the gift to be sent to the target object. For example, the virtual article selection page may include a virtual article switching control, such as a refresh button or a switching button; when the user triggers the switching control, a different virtual article is switched in for the user to select. When the user performs a triggering operation on the virtual article sending control of the virtual article selection page, the virtual article currently displayed on the page is determined to be the gift the user wants to send to the target object; at this point a virtual article sending instruction is triggered, and the terminal receives the instruction. The target object is the object to which the user wants to send the virtual article, for example, a target user.

102. Acquiring an article image and a target dynamic effect video which form the virtual article based on the virtual article identifier, wherein the target dynamic effect video comprises at least one video frame.

For example, based on the virtual article identifier, an animation component may be used to load an article image and a target animation video that constitute a virtual article, where the animation component includes a decoder, and the decoder is used to parse the target animation video to obtain at least one video frame.

The target dynamic effect video refers to a video intended to show a specific animation special effect for the virtual article. It may be bound to an article image in advance in a one-to-one correspondence and stored on the device, or it may be selected by the user according to personal preference, and the like; this is not limited herein. The animation fusion special effect of the virtual article can be realized with a video file carrying the animation effect (that is, the target dynamic effect video), a target image to be fused into the video frames (such as the article image), and a JavaScript file defining the specific fusion logic.

JavaScript (JS for short) is a lightweight, interpreted or just-in-time compiled programming language with first-class functions. Although best known as the scripting language of Web pages, it is also used in many non-browser environments. JavaScript is a prototype-based, multi-paradigm dynamic scripting language that supports object-oriented, imperative, and declarative (e.g., functional) programming styles.

For example, as shown in fig. 1c and 1d, an animation component may be used to load the service animation resources, that is, the js animation logic, the animation material, and the animation-related video file. A js rendering engine inside the animation component then interprets the js code: the engine loads the animation resources and parses the js animation logic, and the Open Graphics Library (OpenGL) or Metal is used to load the article image to obtain the corresponding texture data, that is, the target texture corresponding to the article image. An internal decoding module (i.e., the decoder) is called to load the video file related to the animation, such as the target dynamic effect video; the video file can be decoded with AVFoundation or FFmpeg to obtain at least one video frame. Each video frame can be divided into two equal regions: an animation special effect region and a position information region for fusing the target texture.
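To make this two-region frame layout concrete, here is a minimal JavaScript sketch, not the patent's actual code: the decoded-frame object shape and the pixelAt helper are assumptions for illustration, and the left/right assignment follows the detailed embodiment below (the special effect region on the left).

```javascript
// Hypothetical sketch: split a decoded RGBA video frame into the two equal
// regions described above. The frame object shape is assumed:
// { width, height, pixels: Uint8ClampedArray } with row-major RGBA data.
function splitFrame(frame) {
  const half = frame.width / 2; // assumes an even frame width
  const region = (x0) => ({
    width: half,
    height: frame.height,
    // Read pixel (x, y) of this region out of the full frame buffer.
    pixelAt(x, y) {
      const i = (y * frame.width + (x0 + x)) * 4;
      return frame.pixels.subarray(i, i + 4); // [R, G, B, A] view
    },
  });
  return {
    effectRegion: region(0),    // left half: animation special effect
    fusionRegion: region(half), // right half: position info for the target texture
  };
}
```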

Here, the Open Graphics Library is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The interface consists of nearly 350 different function calls, used to draw everything from simple graphics primitives to complex three-dimensional scenes. Metal is a technology created for developers of highly immersive console-style games; it lets developers fully exploit the performance of the A7 and A8 chips. The technology is optimized so that the processor and the graphics processor can work cooperatively to achieve the best performance. It is designed specifically for multithreading and provides excellent tools for integrating all material in Xcode. Metal is a low-level rendering application programming interface: it provides the lowest software layer required and ensures that software can run on different graphics chips.

AVFoundation is iOS's built-in framework for processing audio and video, such as audio-video encoding and decoding. It is one of several frameworks that can be used to play and create time-based audiovisual media, and it provides an Objective-C (OC) interface over the detailed handling of time-based audiovisual data. It can be used to examine, create, edit, and re-encode media files, and it also provides a high-level Objective-C framework for manipulating video during real-time capture, playback, and processing of time-based media data obtained from the device. It takes full advantage of multi-core hardware and makes extensive use of blocks and Grand Central Dispatch (GCD) to move complex computation onto background threads. Hardware acceleration is provided automatically to ensure that applications run at optimal performance on most devices.

FFmpeg is an open-source third-party audio/video framework: a set of open-source computer programs that can record, convert, and stream digital audio and video. It is released under the LGPL or GPL license. It provides a complete solution for recording, converting, and streaming audio and video, and it contains the very advanced audio/video codec library libavcodec, much of which was developed from scratch to guarantee high portability and codec quality. The FFmpeg multimedia processing tool is very powerful, with functions including video capture, video format conversion, video screenshots, video watermarking, and so on.

The programmable part in fig. 1c is js service logic code for implementing specific animation effects, controlling specific effects of video frames, such as video frame transparent processing, synthesizing video frames, and the like. For example, js1 and js2 represent different js animation logics, and the js rendering engine can select different js animation logics according to the video fusion requirement.

103. Determining the position information of each pixel point in the article image fusion area in the video frame.

The location information may include global location information and relative location information. For example, pixel recognition may be performed on the animation fusion region, an article image fusion region in the animation fusion region is determined, global position information of each pixel point in the article image fusion region in the animation fusion region is calculated, and relative position information of each pixel point in the article image fusion region is calculated.

For example, the article image fusion area in the video frame may be determined and the position information of each pixel point in that area calculated. Specifically, each pixel point in the animation fusion area may be identified by its RGBA value to determine the article image fusion area within the animation fusion area, and the position information of each pixel point in the article image fusion area is then calculated.

Here, RGBA is a color space representing Red, Green, Blue, and Alpha. Although it is sometimes described as a color space, it is actually just the RGB model with additional alpha information. The colors used are RGB and may belong to any RGB color space; it was Catmull and Smith who, between 1971 and 1972, proposed this indispensable alpha value, making alpha rendering and alpha compositing possible.

The alpha channel is typically used as an opacity parameter. If a pixel's alpha value is 0%, the pixel is completely transparent (i.e., invisible); a value of 100% means a completely opaque pixel (as in a conventional digital image). A value between 0% and 100% lets the pixel show through the background, as if through glass (translucency), an effect that simple binary transparency (transparent or opaque) cannot achieve. This greatly eases digital compositing. Alpha channel values can be expressed as a percentage, as an integer, or, like the RGB parameters, as a real number from 0 to 1.
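For reference, the standard alpha "over" compositing rule that such an alpha value enables can be written as follows (a textbook formula, not quoted from this application), where $C_{\text{src}}$ and $\alpha_{\text{src}}$ are the color and opacity of the foreground pixel and $C_{\text{dst}}$ is the background pixel:

$$C_{\text{out}} = \alpha_{\text{src}}\, C_{\text{src}} + (1 - \alpha_{\text{src}})\, C_{\text{dst}}$$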

For example, the relative position information of a pixel point can be determined by calculating its relative percentage along each direction: for a 2D image, the x and y directions can be calculated; for a 3D image, the x, y, and z directions; and so on.

For example, the percentage of each pixel point in the article image fusion area in the first direction within the area may be calculated, the percentage of each pixel point in the second direction within the area may be calculated, and the relative position information of the pixel point in the article image fusion area may be determined according to the percentage of the first direction and the percentage of the second direction.
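A minimal JavaScript sketch of this position computation (the fusionRect bounding box and the 0-to-1 percentage convention are illustrative assumptions, not the patent's code):

```javascript
// Hypothetical sketch: for a pixel at (x, y) inside the article image fusion
// area, compute its global position within the animation fusion region and
// its relative position as percentages of the area's size.
// fusionRect = { x, y, width, height }: assumed bounding box of the article
// image fusion area inside the animation fusion region.
function pixelPositions(x, y, fusionRect) {
  return {
    // Global position: coordinates within the animation fusion region.
    global: { x: fusionRect.x + x, y: fusionRect.y + y },
    // Relative position: percentage along each direction, in [0, 1].
    relative: {
      u: x / fusionRect.width,  // first direction (x)
      v: y / fusionRect.height, // second direction (y)
    },
  };
}
```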

104. Acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain the virtual article dynamic effect video.

For example, the position information of the pixel point in the article image in the first direction may be determined according to the percentage of the first direction and the length of the article image, the position information of the pixel point in the article image in the second direction may be determined according to the percentage of the second direction and the width of the article image, and the pixel information of the corresponding position may be obtained based on the position information in the first direction and the position information in the second direction.

The pixel information may refer to the color information, brightness information, and/or grayscale information of a pixel point, such as its pixel value. A pixel value is the value assigned by a computer when an image is digitized.

For example, the js engine may be used to perform image processing on the video frame and the target texture to obtain the animation effect texture with the final effect, that is, the virtual article dynamic effect video. Specifically, the identified relative position information may be used to sample the target texture to obtain the pixel value of the corresponding point, and that pixel value then replaces the pixel value of the corresponding point in the original video frame; applying the same technique to the other pixel points finally blends the other image material (such as the article image) into the video frame, achieving the fused-animation effect.

For example, pixel information of a corresponding position in the article image may be specifically obtained according to the relative position information, the global position information is mapped to the animation special effect region to obtain a corresponding pixel point in the animation special effect region, and the pixel information of the corresponding pixel point in the animation special effect region are fused to obtain a virtual article dynamic effect video.
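Continuing the sketches above (the region objects and pixelAt helper are the same assumed shapes, not the patent's actual code), the per-pixel fusion step might look like this in JavaScript:

```javascript
// Hypothetical sketch of the fusion step: sample the article image at the
// relative position, then overwrite the corresponding pixel of the
// animation special effect region with the sampled value.
function fusePixel(pos, itemImage, effectRegion) {
  // Percentage times image length/width gives the texel to sample,
  // as described in the method above.
  const sx = Math.min(itemImage.width - 1, Math.floor(pos.relative.u * itemImage.width));
  const sy = Math.min(itemImage.height - 1, Math.floor(pos.relative.v * itemImage.height));
  const src = itemImage.pixelAt(sx, sy); // article image pixel [R, G, B, A]

  // The fusion region and the special effect region are equal halves, so the
  // global position maps to the same coordinates in the effect region.
  effectRegion.pixelAt(pos.global.x, pos.global.y).set(src);
}
```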

In order to eliminate the jagged edges that appear when pixels are fused into a video frame, edge pixel points can be smoothed; for example, a user-defined filtering algorithm is applied when sampling the target texture. Specifically, it can first be judged whether a pixel point of the article image is located at the edge of the article image. If it is, fusion processing is performed on the pixel information of that pixel point to obtain fused pixel information, and the fused pixel information replaces the pixel information at the corresponding position in the target dynamic effect video to obtain the virtual article dynamic effect video. If it is not, the pixel information directly replaces the pixel information at the corresponding position in the target dynamic effect video.

There are many ways to determine whether a pixel point of the article image is located at the edge of the article image; for example, this can be judged from the pixel point's adjacent pixel points, and so on. The adjacent pixels can be the pixels above, below, to the left of, and to the right of the pixel, or the pixels in the southeast, southwest, northwest, and northeast (diagonal) directions, and the like.

For example, as shown in fig. 1e, pixel information and position information of each pixel point in the article image may be specifically obtained, an adjacent pixel point of the pixel point is determined based on the position information, the pixel information of the adjacent pixel point is obtained, and whether the pixel point is located at the edge of the article image is determined according to the pixel information of the pixel point and the pixel information of the adjacent pixel point. Then, respectively determining the pixel value of the pixel point and the adjacent pixel value of the adjacent pixel point according to the pixel information of the pixel point and the pixel information of the adjacent pixel point, calculating the average value of the pixel value and the adjacent pixel value, carrying out fusion processing on the pixel value and the average value to obtain fused pixel information, and replacing the pixel information of the corresponding position in the target dynamic effect video by utilizing the fused pixel information to obtain the virtual article dynamic effect video.

There are various ways to obtain the pixel information and position information of each pixel point in the article image. For example, the article image may be loaded by using the open graphics library to obtain the target texture corresponding to the article image, and the target texture is then sampled to obtain the pixel information and position information of each pixel point.

In order to present the virtual article dynamic effect video in transparent form over the video, transparent processing of the video frames can be used to achieve a transparent animation effect and solve the occlusion problem. For example, the non-article-image fusion area in the target dynamic effect video can be obtained, and each pixel point in that area set as a transparent pixel point.
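A minimal sketch of this transparency step, reusing the assumed region helpers from the earlier sketches (the insideFusion predicate is hypothetical):

```javascript
// Hypothetical sketch: make every pixel outside the article image fusion
// area fully transparent so that the animation does not occlude the
// underlying video.
function clearOutsideFusion(effectRegion, insideFusion /* (x, y) => boolean */) {
  for (let y = 0; y < effectRegion.height; y++) {
    for (let x = 0; x < effectRegion.width; x++) {
      if (!insideFusion(x, y)) {
        effectRegion.pixelAt(x, y)[3] = 0; // zero the alpha channel
      }
    }
  }
}
```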

105. Sending the virtual article dynamic effect video to a terminal of a target object to perform dynamic effect display of the virtual article.

For example, after the virtual article dynamic effect video is generated, it may be sent directly to the terminal of the target object, or stored in a server so that the server sends it to the terminal of the target object, where the dynamic effect of the virtual article is displayed; alternatively, the virtual article may also be displayed on the user's terminal at the same time the video is sent, and so on.

In this case, an interface for displaying the animation effect, such as an animation view interface (e.g., an AnimationView component), may be used, and the video frames output by the rendering engine are finally projected onto this display interface to show the final effect.

As can be seen from the above, this embodiment may receive a virtual article sending instruction carrying a virtual article identifier; obtain, based on the identifier, an article image and a target dynamic effect video that form the virtual article, the target dynamic effect video including at least one video frame; determine the position information of each pixel point in the article image fusion area of the video frame; obtain the pixel information at the corresponding position in the article image according to that position information and fuse it with the pixel information at the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and then send the virtual article dynamic effect video to the terminal of the target object for dynamic effect display of the virtual article. With this scheme, animation effects can be combined through pixel identification and fusion, providing rich animation effects: elements in a video frame can be replaced and synthesized, the result can be reused in similar usage scenarios (solving the low-efficiency problem in such scenarios), and the effect remains customizable, rich, and personalized. A transparent animation effect is achieved through transparent processing of video frames to solve the occlusion problem, improving the user's visual experience. After the user selects a virtual article, the article can be presented over the video as a transparent animation, or fused with a background dynamic effect to form a new animation presented over the video; a user-defined filtering algorithm is applied when sampling the target texture, eliminating the jagged edges that appear when pixels are fused into video frames. This effectively improves the richness of virtual article display and improves the visual effect.

The method described in the previous embodiment is further detailed by way of example.

In this embodiment, the virtual article display device is specifically integrated in an electronic device, an article image of a virtual article is a duck image, and a target dynamic effect video is a firework video.

As shown in fig. 2a, a virtual article display method may specifically include the following processes:

201. The electronic device displays a user interaction page that includes a virtual article selection control.

For example, the electronic device may specifically display a user interaction page for the user to communicate with the target object, and the user interaction page may include a virtual item selection control, for example, as shown in fig. 2b, a "know-want" button, which may be clicked by the user.

202. When the triggering operation of the user for the virtual article selection control is detected, the electronic device displays a virtual article selection page.

Specifically, when the electronic device detects the user's triggering operation on the virtual article selection control, for example a click on the "know-want" button, it displays the virtual article selection page. As shown in fig. 2c, the page may include a virtual article switching control, such as a refresh button; by clicking the refresh button the user can switch to a different virtual article (e.g., a different gift), as shown in fig. 2d, and select from them the virtual article to send to the target object.

203. When the user performs a sending operation on a virtual article in the virtual article selection page, the electronic device receives a virtual article sending instruction.

For example, the virtual article selection page may include a virtual article sending control, such as the "send" button shown in fig. 2c. When the user performs a triggering operation on this control, for example a click on the "send" button, the virtual article currently displayed on the page, such as a duck, is determined to be the gift the user wants to send to the target object. At this point a virtual article sending instruction is triggered and received by the electronic device; the instruction may carry a virtual article identifier.

204. The electronic device acquires an article image and a target dynamic effect video which form the virtual article based on the virtual article identifier, wherein the target dynamic effect video comprises at least one video frame.

For example, based on the virtual article identifier, the electronic device may use an animation component to load the article image (e.g., a duck image) and the target dynamic effect video (e.g., a firework animation video) that constitute the virtual article, together with the js animation logic; interpret the js code with the js engine inside the animation component; and call the decoding module inside the component to parse the firework animation video into at least one video frame. Each video frame can be divided into two equal regions: an animation special effect region and a position information region for fusing the target texture. OpenGL is used to load the duck image to obtain the target texture corresponding to the duck image.

205. The electronic device performs pixel identification on the animation fusion area and determines the article image fusion area within the animation fusion area.

For example, as shown in fig. 2e, the position of the target texture in the animation fusion area may be marked with a solid color block. The electronic device may identify each pixel point in the animation fusion area by its RGBA value; for example, pixel scan identification may be performed by the underlying graphics processing unit (GPU) hardware to determine the duck image fusion area within the animation fusion area, such as the small solid-color block in the middle of the animation fusion area in fig. 2e.

206. The electronic device calculates the position information of each pixel point in the article image fusion area.

The position information may include global position information and relative position information. For example, the electronic device may calculate the global position information of each pixel point of the duck image fusion area within the animation fusion area. As shown in fig. 2e, when a pixel point in the duck image fusion area is identified, its coordinate information in the animation fusion area is calculated, such as pixel point A (x0, y0); pixel point A is then mapped to the left animation special effect region to obtain the coordinate information of point C (x2, y2), where point C is the pixel point to be fused.

Then, the electronic device can calculate the relative position information of each pixel point in the duck image fusion area. For example, the percentage of each pixel point in the duck image fusion area in the first direction (e.g., the x direction) within the area may be calculated, as well as the percentage in the second direction (e.g., the y direction); the relative position information of the pixel point in the duck image fusion area is then determined according to the percentage of the first direction and the percentage of the second direction.

207. The electronic device acquires pixel information of the corresponding position in the article image according to the position information.

For example, the electronic device may determine the position information of the pixel point in the first direction within the duck image according to the percentage of the first direction and the length of the duck image, determine the position information in the second direction according to the percentage of the second direction and the width of the duck image, and obtain the pixel information of the corresponding position in the duck image based on these two pieces of position information. For example, point A is mapped into the duck image to obtain the position information of point B (x1, y1), and the target texture of the duck image is sampled to obtain the RGBA pixel information of point B in the target texture.

208. The electronic device fuses the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain the virtual article dynamic effect video.

For example, the electronic device may obtain the pixel information of the corresponding position in the duck image according to the relative position information, map the global position information to the animation special effect region to obtain the corresponding pixel point there, and fuse the pixel information with the pixel information of that corresponding pixel point in the animation special effect region. The operations of steps 205 to 208 are performed on each video frame until all video frames are processed, yielding the virtual article dynamic effect video.

For example, in order to eliminate the edge jaggies that occur when pixels are fused into a video frame, smoothing may be performed on edge pixel points, for example by applying a customized filtering algorithm when sampling the target texture. Specifically, the electronic device may obtain the pixel information and position information of each pixel point in the duck image, determine the adjacent pixel points of the pixel point based on the position information, obtain the pixel information of the adjacent pixel points, and determine whether the pixel point is located at the edge of the duck image according to the pixel information of the pixel point and the pixel information of the adjacent pixel points. For example, whether the pixel point is at the edge of the target texture may be determined according to whether the alpha channel of the pixel is less than 1.

If the pixel point is located at the edge of the duck image, the pixel value of the pixel point and the adjacent pixel values of the adjacent pixel points are respectively determined according to their pixel information, the average of the pixel value and the adjacent pixel values is calculated, and the pixel value is fused with the average to obtain fused pixel information; the pixel information at the corresponding position in the firework dynamic effect video is then replaced with the fused pixel information to obtain the virtual article dynamic effect video. If the pixel point is not located at the edge of the duck image, the pixel information at the corresponding position in the firework dynamic effect video is replaced with the pixel information of the pixel point to obtain the virtual article dynamic effect video.

For example, as shown in fig. 1e, the southeast pixel point ES, southwest pixel point WS, northwest pixel point WN, and northeast pixel point EN of a pixel point M may be taken as the adjacent pixel points. The average of the pixel values of the pixel point M and the pixel points ES, WS, WN, and EN is then calculated, and the pixel value of the pixel point M is fused with the average to obtain the fused pixel information. Fig. 2f shows the effect before and after the filtering and smoothing process: fig. 2f(1) shows the pixel points before smoothing with the customized filtering algorithm, and fig. 2f(2) shows the pixel points after smoothing.

The portion of the code that applies the customized filtering algorithm to eliminate the edge jaggies produced when pixels are fused into video frames can be sketched as follows (a minimal illustration of the averaging scheme described above, not the original code listing; the array layout and function name are assumptions):
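```python
import numpy as np

def smooth_edges(item_image: np.ndarray) -> np.ndarray:
    """Assumed reconstruction of the described filter: a pixel whose alpha is
    below 1.0 (255 in 8-bit form) is treated as an edge pixel and blended with
    the average of itself and its four diagonal neighbors."""
    src = item_image.astype(np.float32)
    out = src.copy()
    h, w = src.shape[:2]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if src[y, x, 3] < 255.0:  # edge test: alpha channel less than 1
                m = src[y, x]
                es, ws = src[y + 1, x + 1], src[y + 1, x - 1]  # southeast, southwest
                wn, en = src[y - 1, x - 1], src[y - 1, x + 1]  # northwest, northeast
                avg = (m + es + ws + wn + en) / 5.0  # average of M and its neighbors
                out[y, x] = (m + avg) / 2.0  # fuse the pixel value of M with the average
    return out.astype(np.uint8)
```

In the actual scheme this filtering is applied when sampling the target texture on the GPU; the CPU-side version above only illustrates the averaging.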

In order to present the virtual article motion effect video over the video in transparent form, the electronic device may use a transparent processing technology on the video frames to achieve a transparent animation effect and solve the occlusion problem. Specifically, the electronic device may obtain the non-duck-image fusion region in the firework motion effect video and set each pixel point in that region as a transparent pixel point; as shown in fig. 2e, each pixel point in the non-solid-block region of the animation fusion region is set as a transparent pixel point.
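A corresponding sketch of the transparency step (same assumed NumPy RGBA layout; `box` is the fusion region bounding box from the earlier sketches):

```python
import numpy as np

def clear_outside_region(frame: np.ndarray, box) -> np.ndarray:
    """Set every pixel outside the item image fusion region to fully transparent."""
    x_min, y_min, x_max, y_max = box
    mask = np.ones(frame.shape[:2], dtype=bool)
    mask[y_min:y_max + 1, x_min:x_max + 1] = False  # keep the fusion region
    frame[mask, 3] = 0  # zero the alpha channel everywhere else
    return frame
```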

209. The electronic device sends the virtual article dynamic effect video to a terminal of the target object.

For example, in order to reuse the obtained virtual article motion effect video in similar usage scenarios and improve efficiency, the electronic device may store the virtual article motion effect video in its own memory, or store it in the server so that the server sends it to the terminal of the target object for motion effect display of the virtual article; the virtual article may also be displayed on the user's own terminal while the virtual article motion effect video is being sent, and so on. For example, after the user clicks 'send', the duck gift can be presented over the video as a transparent animation, or fused with the background firework effect to form a new animation effect presented over the video. As shown in fig. 2g, the duck is sprayed upwards and scattered along with the fireworks, while the duck itself also moves upwards and slowly grows larger, just like the effect of a dynamic duck spouting out of the fireworks.

As can be seen from the above, this embodiment may receive a virtual article sending instruction carrying a virtual article identifier; obtain, based on the virtual article identifier, an article image and a target dynamic effect video that form a virtual article, the target dynamic effect video including at least one video frame; determine the position information of each pixel point in the article image fusion region in the video frame; obtain pixel information of the corresponding position in the article image according to the position information; fuse that pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and send the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article. This scheme uses pixel identification and fusion to combine animation effects, providing rich animation effects: elements in a video frame can be replaced and synthesized, and the result can be reused in similar usage scenarios, which solves the problem of low efficiency in such scenarios while offering customizable, rich, and personalized visual effects. The transparent processing technology for video frames achieves a transparent animation effect that solves the occlusion problem and improves the user's visual experience. After the user selects a virtual article, the virtual article can be presented over the video not only as a transparent animation but also fused with the background dynamic effect to form a new animation effect; for example, the virtual article duck is sprayed upwards and scattered along with the fireworks, while the duck itself moves upwards and slowly grows larger, just like the effect of a dynamic duck spouting out of the fireworks. A customized filtering algorithm applied when sampling the target texture eliminates the edge jaggies produced when pixels are fused into video frames, effectively improving the richness of the virtual article display and improving the visual effect.

In order to better implement the method, correspondingly, an embodiment of the present application further provides a virtual article display apparatus, where the virtual article display apparatus may be specifically integrated in a terminal, and the terminal may include a mobile phone, a tablet computer, a notebook computer, a personal computer, and other devices capable of implementing virtual article display.

For example, as shown in fig. 3, the virtual article display apparatus may include a receiving unit 301, an obtaining unit 302, a calculating unit 303, a fusing unit 304, and a sending unit 305, as follows:

(1) a receiving unit 301;

a receiving unit 301, configured to receive a virtual article sending instruction, where the virtual article sending instruction carries a virtual article identifier.

Optionally, in some embodiments, the receiving unit 301 may be specifically configured to: display a user interaction page, where the user interaction page includes a virtual article selection control; display a virtual article selection page when a triggering operation by the user on the virtual article selection control is detected; and receive the virtual article sending instruction when a virtual article sending operation by the user on the virtual article selection page is detected.

(2) An acquisition unit 302;

an obtaining unit 302, configured to obtain, based on the virtual article identifier, an article image and a target dynamic effect video that constitute a virtual article, where the target dynamic effect video includes at least one video frame.

Optionally, in some embodiments, the obtaining unit 302 may be specifically configured to load, based on the virtual article identifier, an article image and a target animation video that constitute a virtual article by using an animation component, where the animation component includes a decoder; and analyzing the target dynamic effect video by using the decoder to obtain at least one video frame.
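As a minimal sketch of this decoding step (the disclosure does not name a specific decoder; OpenCV is used here purely as an assumed stand-in):

```python
import cv2  # assumed decoder backend, not specified by the original disclosure

def decode_frames(video_path: str):
    """Parse the target motion effect video into individual RGBA video frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA))  # add an alpha channel
    cap.release()
    return frames
```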

(3) A calculation unit 303;

the calculating unit 303 is configured to determine position information of each pixel point in the article image fusion region in the video frame.

Optionally, in some embodiments, the video frame includes an animation fusion region, the position information includes global position information and relative position information, and the calculating unit 303 may include a determining subunit, a first calculating subunit, and a second calculating subunit, as follows:

the determining subunit is used for carrying out pixel identification on the animation fusion area and determining an article image fusion area in the animation fusion area;

the first calculating subunit is used for calculating the global position information of each pixel point in the article image fusion area in the animation fusion area;

and the second calculating subunit is used for calculating the relative position information of each pixel point in the article image fusion area.

Optionally, in some embodiments, the second calculating subunit may be specifically configured to: calculate the percentage of each pixel point in the article image fusion region in the first direction within the article image fusion region; calculate the percentage of each pixel point in the article image fusion region in the second direction within the article image fusion region; and determine the relative position information of the pixel point in the article image fusion region according to the percentage in the first direction and the percentage in the second direction.

(4) A fusion unit 304;

and the fusion unit 304 is configured to obtain pixel information of a corresponding position in the article image according to the position information, and fuse the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video.

Optionally, in some embodiments, the fusing unit 304 may be specifically configured to determine, according to the percentage of the first direction and the length of the item image, position information of the pixel point in the item image in the first direction; determining the position information of the pixel point in the second direction in the article image according to the percentage of the second direction and the width of the article image; based on the position information in the first direction and the position information in the second direction, pixel information of a corresponding position is acquired.

Optionally, in some embodiments, the video frame includes an animation special effect region, and the fusion unit 304 may be specifically configured to obtain pixel information of a corresponding position in the article image according to the relative position information; mapping the global position information to the animation special effect area to obtain corresponding pixel points in the animation special effect area; and fusing the pixel information with pixel information of corresponding pixel points in the animation special effect area to obtain a virtual article dynamic effect video.

Optionally, in some embodiments, the fusion unit 304 may include a judgment subunit and a fusion subunit, as follows:

the judging subunit is used for judging whether the pixel points of the article image are positioned at the edge of the article image;

the fusion subunit is configured to perform fusion processing on the pixel information of the pixel point to obtain fused pixel information if the pixel point is located at the edge of the article image, and replace the pixel information at the corresponding position in the target dynamic effect video with the fused pixel information to obtain a virtual article dynamic effect video; and if the pixel point is not positioned at the edge of the article image, replacing the pixel information of the corresponding position in the target dynamic effect video by using the pixel information to obtain the virtual article dynamic effect video.

Optionally, in some embodiments, the determining subunit may be specifically configured to obtain pixel information and position information of each pixel point in the article image; determining an adjacent pixel point of the pixel point based on the position information; and acquiring the pixel information of the adjacent pixel point, and judging whether the pixel point is positioned at the edge of the article image according to the pixel information of the pixel point and the pixel information of the adjacent pixel point.

Optionally, in some embodiments, the fusion subunit may be specifically configured to determine the pixel value of the pixel point and the adjacent pixel value of the adjacent pixel point according to the pixel information of the pixel point and the pixel information of the adjacent pixel point respectively; calculating an average of the pixel value and the neighboring pixel value; and carrying out fusion processing on the pixel value and the average value to obtain fused pixel information.

Optionally, in some embodiments, the determining subunit may be specifically configured to load the article image by using an open graphics library, so as to obtain a target texture corresponding to the article image; and sampling the target texture to obtain pixel information and position information of each pixel point.

Optionally, in some embodiments, the virtual article display apparatus may further include a transparent processing unit 306, where the transparent processing unit 306 may be specifically configured to obtain a non-article image fusion region in the target dynamic effect video; and setting each pixel point in the non-article image fusion area as a transparent pixel point.

(5) A sending unit 305;

a sending unit 305, configured to send the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of a virtual article.

In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.

As can be seen from the above, in this embodiment the receiving unit 301 may receive a virtual article sending instruction carrying a virtual article identifier; the obtaining unit 302 obtains, based on the virtual article identifier, an article image and a target dynamic effect video that form a virtual article, the target dynamic effect video including at least one video frame; the calculating unit 303 determines the position information of each pixel point in the article image fusion region in the video frame; the fusing unit 304 obtains pixel information of the corresponding position in the article image according to the position information and fuses it with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and the sending unit 305 sends the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article. This scheme uses pixel identification and fusion to combine animation effects, providing rich animation effects: elements in a video frame can be replaced and synthesized, and the result can be reused in similar usage scenarios, which solves the problem of low efficiency in such scenarios while offering customizable, rich, and personalized visual effects. The transparent processing technology for video frames achieves a transparent animation effect that solves the occlusion problem and improves the user's visual experience. After the user selects a virtual article, the virtual article can be presented over the video as a transparent animation or fused with a background dynamic effect to form a new animation effect presented over the video; a customized filtering algorithm applied when sampling the target texture eliminates the edge jaggies produced when pixels are fused into video frames, effectively improving the richness of the virtual article display and improving the visual effect.

In addition, an electronic device according to an embodiment of the present application is further provided, as shown in fig. 4, which shows a schematic structural diagram of the electronic device according to an embodiment of the present application, and specifically:

the electronic device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:

the processor 401 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.

The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.

The electronic device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 is logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption management functions are implemented through the power management system. The power supply 403 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.

The electronic device may further include an input unit 404, and the input unit 404 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.

Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:

receiving a virtual article sending instruction, where the virtual article sending instruction carries a virtual article identifier; acquiring, based on the virtual article identifier, an article image and a target dynamic effect video that form a virtual article, where the target dynamic effect video includes at least one video frame; determining the position information of each pixel point in an article image fusion region in the video frame; acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

As can be seen from the above, this embodiment may receive a virtual article sending instruction carrying a virtual article identifier; obtain, based on the virtual article identifier, an article image and a target dynamic effect video that form a virtual article, the target dynamic effect video including at least one video frame; determine the position information of each pixel point in the article image fusion region in the video frame; obtain pixel information of the corresponding position in the article image according to the position information; fuse that pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and send the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article. This scheme uses pixel identification and fusion to combine animation effects, providing rich animation effects: elements in a video frame can be replaced and synthesized, and the result can be reused in similar usage scenarios, which solves the problem of low efficiency in such scenarios while offering customizable, rich, and personalized visual effects. The transparent processing technology for video frames achieves a transparent animation effect that solves the occlusion problem and improves the user's visual experience. After the user selects a virtual article, the virtual article can be presented over the video as a transparent animation or fused with a background dynamic effect to form a new dynamic effect presented over the video; a customized filtering algorithm applied when sampling the target texture eliminates the edge jaggies produced when pixels are fused into video frames, effectively improving the richness of the virtual article display and improving the visual effect.

It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.

To this end, embodiments of the present application further provide a computer-readable storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the virtual article display methods provided in the embodiments of the present application. For example, the instructions may perform the steps of:

receiving a virtual article sending instruction, where the virtual article sending instruction carries a virtual article identifier; acquiring, based on the virtual article identifier, an article image and a target dynamic effect video that form a virtual article, where the target dynamic effect video includes at least one video frame; determining the position information of each pixel point in an article image fusion region in the video frame; acquiring pixel information of a corresponding position in the article image according to the position information, and fusing the pixel information with the pixel information of the corresponding position in the target dynamic effect video to obtain a virtual article dynamic effect video; and sending the virtual article dynamic effect video to a terminal of a target object for dynamic effect display of the virtual article.

The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.

Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.

Since the instructions stored in the computer-readable storage medium can execute the steps in any virtual article display method provided in the embodiments of the present application, the beneficial effects that can be achieved by any virtual article display method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.

The virtual article display method, the virtual article display device, and the computer-readable storage medium provided in the embodiments of the present application are described in detail above, and specific examples are applied herein to illustrate the principles and implementations of the present application, and the descriptions of the above embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
