Virtual live broadcast method and device, electronic equipment and storage medium

Document No.: 1630899 | Publication date: 2020-01-14

Note: This technology, "Virtual live broadcast method and device, electronic equipment and storage medium" (一种虚拟直播的方法、装置、电子设备及存储介质), was created by 马里千, 郑文, 张国鑫, 黄旭为, 刘晓强, and 张博宁 on 2019-08-13. Abstract: The present disclosure relates to a virtual live broadcast method, apparatus, electronic device, and storage medium. The method includes: acquiring non-broadcaster features and broadcaster features of an actual live scene; establishing a virtual live scene according to the non-broadcaster features and the broadcaster features, where the virtual live scene contains a virtual scene built from the non-broadcaster features and an avatar built from the broadcaster features; identifying pose change information of the broadcaster in a live video stream; and driving the pose change of the avatar in the virtual live scene according to the pose change information. Because the virtual scene and the avatar used in the virtual broadcast are constructed from the features of the real scene and the real person, the disclosure can provide a virtual live broadcast while preserving the individual characteristics of the live scene and the broadcaster, avoiding uniform, identical-looking virtual broadcasts.

1. A method of virtual live broadcasting, comprising:

acquiring non-broadcaster features and broadcaster features of an actual live scene;

establishing a virtual live scene according to the non-broadcaster features and the broadcaster features, wherein the virtual live scene comprises a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features;

identifying pose change information of the broadcaster in a live video stream;

and driving a pose change of the avatar in the virtual live scene according to the pose change information.

2. The method of claim 1, wherein after establishing the virtual live scene according to the non-broadcaster features and the broadcaster features, the method further comprises:

performing an editing operation on the established virtual live scene, wherein the editing operation comprises at least any one of the following: editing features of the virtual scene in the established virtual live scene, editing features of the avatar in the established virtual live scene, adjusting a first virtual degree of the virtual scene in the established virtual live scene, and adjusting a second virtual degree of the avatar in the established virtual live scene;

and saving the edited virtual live scene.

3. The method of claim 1, wherein if the number of broadcasters in the actual live scene is greater than 1, after the establishing of the virtual live scene, the method further comprises:

establishing a first mapping relationship between broadcaster features and avatars in the actual live scene;

the identifying pose change information of the broadcaster in the live video stream comprises:

identifying pose change information of each broadcaster in the live video stream;

establishing a second mapping relationship between each broadcaster in the live video stream and the pose change information;

the driving the pose change of the avatar in the virtual live scene according to the pose change information comprises:

determining the pose change information corresponding to each broadcaster in the live video stream according to the second mapping relationship;

and driving a pose change of each avatar in the virtual live scene according to the first mapping relationship and the pose change information corresponding to each broadcaster in the live video stream.

4. The method of claim 1, wherein after driving the pose change of the avatar in the virtual live scene according to the pose change information, the method further comprises:

generating a virtual live video stream according to image frames corresponding to the virtual live scene;

and sending the virtual live video stream to a receiving end through a pre-established communication link.

5. The method of claim 1, wherein the non-broadcaster features comprise indoor scene features and/or outdoor scene features; wherein:

the indoor scene features comprise: at least one of indoor three-dimensional modeling information, indoor material mapping information, and indoor illumination information;

the outdoor scene features comprise: at least one of 360-degree environment image information of the outdoor scene and outdoor illumination information.

6. The method of claim 1, wherein the broadcaster features comprise head information of the broadcaster, body information of the broadcaster, and related information of the broadcaster; wherein:

the head information of the broadcaster comprises: at least one of three-dimensional head information of the broadcaster and three-dimensional face information of the broadcaster;

the body information of the broadcaster comprises: at least one of three-dimensional body information of the broadcaster, three-dimensional clothing information of the broadcaster, and clothing material information;

the related information of the broadcaster comprises: at least one of three-dimensional expression information of the broadcaster, voice information of the broadcaster, and accessory information of the broadcaster.

7. The method of claim 1, wherein the pose change information comprises: at least one of head pose change information, facial expression change information, body pose change information, and hand pose change information.

8. An apparatus for virtual live broadcasting, comprising:

a feature acquisition module configured to acquire non-broadcaster features and broadcaster features of an actual live scene;

a virtual construction module configured to establish a virtual live scene according to the non-broadcaster features and the broadcaster features, wherein the virtual live scene comprises a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features;

a broadcaster identification module configured to identify pose change information of the broadcaster in a live video stream;

and a live driving module configured to drive a pose change of the avatar in the virtual live scene according to the pose change information.

9. An electronic device, comprising:

a processor;

a memory for storing the processor-executable instructions;

wherein the processor is configured to execute the instructions to implement the method of virtual live broadcasting as claimed in any one of claims 1 to 7.

10. A storage medium having instructions stored thereon that, when executed by a processor of an electronic device, enable the electronic device to perform the method of virtual live broadcasting as claimed in any one of claims 1 to 7.

Technical Field

The present disclosure relates to the field of live webcasting, and in particular, to a virtual live broadcast method, apparatus, electronic device, and storage medium.

Background

Virtual live broadcasting enhances the interest and interactivity of live broadcasting and has become an important part of live broadcast services, accounting for a growing share of them in recent years. During a broadcast, a preset avatar, such as a panda or a bunny, can replace the anchor's actual appearance, and the avatar's movements are controlled according to the anchor's movements so that the anchor can interact with the audience.

However, current avatar-based live broadcasting usually relies on specific virtual scenes and avatars designed before the broadcast: the anchor can only select the most suitable virtual scene and avatar from a pre-designed model library. Because the materials in the library are limited, it is difficult to avoid broadcasts in which the avatars adopted by different anchors all look the same, and the personalized requirements of users during avatar-based broadcasting cannot be met.

Disclosure of Invention

The present disclosure provides a method and an apparatus for virtual live broadcasting, so as to at least solve the problem that, in the related art, the avatars and virtual scenes used in virtual live broadcasting cannot meet users' personalized requirements. The technical scheme of the disclosure is as follows:

according to a first aspect of the embodiments of the present disclosure, there is provided a method for virtual live broadcasting, including:

acquiring non-broadcaster features and broadcaster features of an actual live scene;

establishing a virtual live scene according to the non-broadcaster features and the broadcaster features, wherein the virtual live scene includes a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features;

identifying pose change information of the broadcaster in a live video stream;

and driving a pose change of the avatar in the virtual live scene according to the pose change information.

Optionally, after the virtual live scene is established according to the non-broadcaster features and the broadcaster features, the method further includes:

performing an editing operation on the established virtual live scene, wherein the editing operation includes at least any one of the following: editing features of the virtual scene in the established virtual live scene, editing features of the avatar in the established virtual live scene, adjusting a first virtual degree of the virtual scene in the established virtual live scene, and adjusting a second virtual degree of the avatar in the established virtual live scene;

and saving the edited virtual live scene.

Optionally, if the number of broadcasters in the actual live scene is greater than 1, after the virtual live scene is established, the method further includes:

establishing a first mapping relationship between broadcaster features and avatars in the actual live scene;

the identifying pose change information of the broadcaster in the live video stream includes:

identifying pose change information of each broadcaster in the live video stream;

establishing a second mapping relationship between each broadcaster in the live video stream and the pose change information;

the driving the pose change of the avatar in the virtual live scene according to the pose change information includes:

determining the pose change information corresponding to each broadcaster in the live video stream according to the second mapping relationship;

and driving the pose change of each avatar in the virtual live scene according to the first mapping relationship and the pose change information corresponding to each broadcaster in the live video stream.

Optionally, after the driving of the pose change of the avatar in the virtual live scene according to the pose change information, the method further includes:

generating a virtual live video stream according to image frames corresponding to the virtual live scene;

and sending the virtual live video stream to a receiving end through a pre-established communication link.

Optionally, the non-broadcaster features include indoor scene features and/or outdoor scene features; wherein:

the indoor scene features include: at least one of indoor three-dimensional modeling information, indoor material mapping information, and indoor illumination information;

the outdoor scene features include: at least one of 360-degree environment image information of the outdoor scene and outdoor illumination information.

Optionally, the broadcaster features include head information of the broadcaster, body information of the broadcaster, and related information of the broadcaster; wherein:

the head information of the broadcaster includes: at least one of three-dimensional head information of the broadcaster and three-dimensional face information of the broadcaster;

the body information of the broadcaster includes: at least one of three-dimensional body information of the broadcaster, three-dimensional clothing information of the broadcaster, and clothing material information;

the related information of the broadcaster includes: at least one of three-dimensional expression information of the broadcaster, voice information of the broadcaster, and accessory information of the broadcaster.

Optionally, the pose change information includes: at least one of head pose change information, facial expression change information, body pose change information, and hand pose change information.

According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for virtual live broadcasting, including:

a feature acquisition module configured to acquire non-broadcaster features and broadcaster features of an actual live scene;

a virtual construction module configured to establish a virtual live scene according to the non-broadcaster features and the broadcaster features, wherein the virtual live scene includes a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features;

a broadcaster identification module configured to identify pose change information of the broadcaster in a live video stream;

and a live driving module configured to drive the pose change of the avatar in the virtual live scene according to the pose change information.

Optionally, the apparatus further comprises:

a virtual editing module configured to perform an editing operation on the established virtual live scene, where the editing operation includes at least any one of the following: editing features of the virtual scene in the established virtual live scene, editing features of the avatar in the established virtual live scene, adjusting a first virtual degree of the virtual scene in the established virtual live scene, and adjusting a second virtual degree of the avatar in the established virtual live scene;

and an edit saving module configured to save the edited virtual live scene.

Optionally, the number of broadcasters in the actual live scene is greater than 1, and the apparatus further includes:

a first mapping module configured to establish a first mapping relationship between broadcaster features and avatars in the actual live scene;

the broadcaster identification module includes:

a multi-broadcaster identification submodule configured to identify pose change information of each broadcaster in the live video stream;

a second mapping submodule configured to establish a second mapping relationship between each broadcaster in the live video stream and the pose change information;

the live driving module includes:

a second mapping confirmation submodule configured to determine the pose change information corresponding to each broadcaster in the live video stream according to the second mapping relationship;

and a first mapping confirmation submodule configured to drive the pose change of each avatar in the virtual live scene according to the first mapping relationship and the pose change information corresponding to each broadcaster in the live video stream.

Optionally, the apparatus further comprises:

a live broadcast generation module configured to generate a virtual live video stream according to image frames corresponding to the virtual live scene;

and a live broadcast module configured to send the virtual live video stream to a receiving end through a pre-established communication link.

Optionally, the non-broadcaster features include indoor scene features and/or outdoor scene features; wherein:

the indoor scene features include: at least one of indoor three-dimensional modeling information, indoor material mapping information, and indoor illumination information;

the outdoor scene features include: at least one of 360-degree environment image information of the outdoor scene and outdoor illumination information.

Optionally, the broadcaster features include head information of the broadcaster, body information of the broadcaster, and related information of the broadcaster; wherein:

the head information of the broadcaster includes: at least one of three-dimensional head information of the broadcaster and three-dimensional face information of the broadcaster;

the body information of the broadcaster includes: at least one of three-dimensional body information of the broadcaster, three-dimensional clothing information of the broadcaster, and clothing material information;

the related information of the broadcaster includes: at least one of three-dimensional expression information of the broadcaster, voice information of the broadcaster, and accessory information of the broadcaster.

Optionally, the pose change information includes: at least one of head pose change information, facial expression change information, body pose change information, and hand pose change information.

According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, including:

a processor;

a memory for storing the processor-executable instructions;

wherein the processor is configured to execute the instructions to implement the above-mentioned method of virtual live broadcasting.

According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the method of virtual live broadcasting described above.

According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method of virtual live broadcasting described above.

The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:

By acquiring the features of the actual live scene and the actual broadcaster, the method and the apparatus can adaptively construct the virtual scene and the avatar used in the virtual live broadcast from the real scene and the real person, and can retain the characteristics of the real live scene in the virtual broadcast. Therefore, a virtual live broadcast can be provided while uniform, identical-looking virtual broadcasts are avoided, because the individual characteristics of the live scene and the broadcaster are preserved.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.

Fig. 1 is a flow diagram illustrating a method of virtual live broadcasting in accordance with an exemplary embodiment.

Fig. 2 is a flow diagram illustrating another method of virtual live broadcasting in accordance with an example embodiment.

Fig. 3 is a flow diagram illustrating yet another method of virtual live broadcasting in accordance with an exemplary embodiment.

Fig. 4 is a block diagram illustrating an apparatus of a virtual live broadcast in accordance with an example embodiment.

Fig. 5 is a block diagram illustrating another apparatus for virtual live according to an example embodiment.

Fig. 6 is a block diagram illustrating yet another apparatus for virtual live according to an example embodiment.

Fig. 7 is a block diagram illustrating a virtual live electronic device in accordance with an exemplary embodiment.

Detailed Description

In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.

It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.

The method provided by the embodiments of the present disclosure is applied to a live broadcast scenario and is implemented by a sending end and at least one receiving end. The sending end and the at least one receiving end establish a communication link in advance through streaming media technology, so that live audio and video streams can be transmitted and the live broadcast effect achieved.

Fig. 1 is a flow diagram illustrating a method of virtual live broadcasting according to an exemplary embodiment. As shown in fig. 1, the method may include the following steps.

In step S101, non-broadcaster features and broadcaster features of an actual live scene are acquired.

The embodiments of the present disclosure are applied at the sending end of a virtual live broadcast. Before the virtual broadcast starts, the non-broadcaster features and the broadcaster features of the actual live scene need to be acquired. Optionally, the actual live scene can be captured and the features extracted, or the scene can be scanned directly: video or pictures of the actual live scene are collected and the features are extracted from them, or the scan outputs three-dimensional point cloud or mesh information, illumination information, material information, and the like of the actual live scene. Optionally, the capture can be performed with the camera used for live streaming at the sending end, or with other devices such as a depth camera, an infrared camera, or a scanner, which is not specifically limited in the embodiments of the present disclosure.

In the embodiment of the disclosure, the non-broadcaster features and the broadcaster features of the actual live scene can be acquired with user awareness: the user is reminded before the acquisition starts and is prompted to cooperate. Optionally, the user can be prompted by voice to adjust the angle and height of the camera, the indoor brightness, the user's own posture, and the like, so that a more accurate result is obtained and the subsequent construction of the virtual scene and the avatar is easier. Alternatively, the acquisition can be performed without the user's awareness: within a preset time before the broadcast starts, the camera of the sending end or another device directly captures or scans the actual live scene within its shooting range, so that the non-broadcaster features and the broadcaster features are obtained more quickly and the virtual scene and the avatar are constructed more efficiently.

In the embodiment of the present disclosure, optionally, the broadcaster features include the morphological features of the broadcaster of the upcoming broadcast and may further include the broadcaster's voice features; the non-broadcaster features include the features of the actual scene of the upcoming broadcast, which may be an indoor scene or an outdoor scene. By collecting real information related to the broadcast, the virtual broadcast can retain more of the personalized characteristics of the actual broadcaster and the actual live scene on top of the virtualization.

Optionally, the non-broadcaster features include indoor scene features and/or outdoor scene features; wherein:

the indoor scene features include: at least one of indoor three-dimensional modeling information, indoor material mapping information, and indoor illumination information;

the outdoor scene features include: at least one of 360-degree environment image information of the outdoor scene and outdoor illumination information.

In the embodiment of the disclosure, when the non-broadcaster features describe an indoor scene, indoor three-dimensional modeling information, indoor material mapping information, indoor illumination information, and the like can be collected; when the non-broadcaster features describe an outdoor scene, 360-degree environment image information of the outdoor scene, outdoor illumination information, and the like can be collected, and optionally outdoor geographic position information, outdoor weather information, and the like can also be collected.
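As an illustration only, these non-broadcaster features could be gathered into simple containers such as the Python sketch below; all class names, field names, and field types are hypothetical and are not part of this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class IndoorSceneFeatures:
    """Indoor non-broadcaster features of the actual live scene (illustrative)."""
    mesh: np.ndarray                                     # indoor three-dimensional modeling information (N x 3 vertices)
    material_maps: dict = field(default_factory=dict)    # surface name -> texture/material map
    illumination: Optional[dict] = None                  # e.g. {"lights": [...], "color_temperature": 4500}


@dataclass
class OutdoorSceneFeatures:
    """Outdoor non-broadcaster features of the actual live scene (illustrative)."""
    panorama_360: np.ndarray                             # 360-degree environment image of the outdoor scene
    illumination: Optional[dict] = None                  # e.g. sun direction and intensity
    geo_location: Optional[tuple] = None                 # optional geographic position
    weather: Optional[str] = None                        # optional weather information
```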

Optionally, the broadcaster features include head information of the broadcaster, body information of the broadcaster, and related information of the broadcaster; wherein:

the head information of the broadcaster includes: at least one of three-dimensional head information of the broadcaster and three-dimensional face information of the broadcaster;

the body information of the broadcaster includes: at least one of three-dimensional body information of the broadcaster, three-dimensional clothing information of the broadcaster, and clothing material information;

the related information of the broadcaster includes: at least one of three-dimensional expression information of the broadcaster, voice information of the broadcaster, and accessory information of the broadcaster.

In the embodiment of the present disclosure, the broadcaster features may include head information of the broadcaster, body information of the broadcaster, and the like. The head information may include three-dimensional head information describing the shape of the broadcaster's head, hairstyle, and so on, and may further include face information describing the position and shape of the facial features, the eyebrow position and shape, and so on. The body information includes three-dimensional body information describing the shape of the broadcaster's trunk and limbs, and may also include clothing information describing the style and size of the broadcaster's clothing; at the same time, clothing material information can be collected to describe the color, texture, and the like of the clothing.

In the embodiment of the present disclosure, besides the above head information and body information, related information of the broadcaster can be included to further complete the collection of the broadcaster features. Optionally, the related information may include at least one of three-dimensional expression information, voice information, accessory information of the broadcaster, and the like. The three-dimensional expression information can be acquired by capturing changes in the shape of the broadcaster's facial features within a preset time; the voice information can be acquired by prompting the user to read a preset sentence; the accessory information may include the position, shape, and material of hats, hair accessories, and the like on the head, and of earrings, necklaces, bracelets, brooches, and the like on the body. Optionally, the clothing information of the corresponding position may be collected together with the broadcaster features of that position.
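A companion sketch, under the same assumptions (hypothetical names and types, Python used only for illustration), for the broadcaster features described above:

```python
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class BroadcasterFeatures:
    """Broadcaster features of the actual live scene (names are illustrative only)."""
    head_mesh: Optional[np.ndarray] = None          # three-dimensional head information
    face_mesh: Optional[np.ndarray] = None          # three-dimensional face information
    body_mesh: Optional[np.ndarray] = None          # three-dimensional body information
    clothing_mesh: Optional[np.ndarray] = None      # three-dimensional clothing information
    clothing_material: Optional[dict] = None        # clothing color / texture description
    expression_blendshapes: Optional[np.ndarray] = None  # three-dimensional expression information
    voice_sample: Optional[np.ndarray] = None       # recording of a preset sentence
    accessories: list = field(default_factory=list)      # hats, earrings, necklaces, ...
```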

In step S102, a virtual live scene is established according to the non-broadcaster features and the broadcaster features; the virtual live scene includes a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features.

In the embodiment of the disclosure, a virtual scene can be constructed from the collected non-broadcaster features. Optionally, for indoor scene features, a three-dimensional model of the indoor scene is built from the indoor three-dimensional modeling information, an indoor virtual light source is built from the indoor illumination information, and the three-dimensional model is rendered with the indoor material mapping information, thereby constructing the virtual scene. For outdoor scene features, a 360-degree model of the outdoor scene is built from the 360-degree environment image information, which virtualizes the entire surroundings and facilitates large-range virtual broadcasting; a virtual light source of the outdoor scene is then built from the outdoor illumination information, and the 360-degree model is rendered, thereby constructing the virtual scene.
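The following minimal sketch shows one way this scene-construction step could look, assuming a hypothetical `engine` object that stands in for whatever 3D/rendering engine is actually used; none of the helper names come from this disclosure.

```python
def build_virtual_scene(features, engine):
    """Construct a virtual scene from non-broadcaster features.

    `features` is an IndoorSceneFeatures or OutdoorSceneFeatures instance and
    `engine` is a placeholder for an arbitrary 3D/rendering engine; both the
    helper names and the engine API are assumptions for illustration.
    """
    if hasattr(features, "mesh"):                              # indoor scene
        scene = engine.create_mesh(features.mesh)              # indoor 3D modeling information
        for surface, texture in features.material_maps.items():
            engine.apply_material(scene, surface, texture)     # indoor material mapping
        engine.add_lights(scene, features.illumination)        # indoor virtual light source
    else:                                                      # outdoor scene
        scene = engine.create_skydome(features.panorama_360)   # 360-degree environment model
        engine.add_lights(scene, features.illumination)        # outdoor virtual light source
    return scene
```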

In the embodiment of the disclosure, optionally, before the virtual scene is constructed, a user setting for the construction style of the virtual scene may be received. The setting may be a picture style such as cartoon, watercolor, oil painting, or ink wash, or a scene style such as a study, a classroom, or a studio. In that case a virtual scene of the corresponding style is established according to the non-broadcaster features, so that the actual indoor or outdoor scene features contained in the real non-broadcaster features are embodied in the virtual scene in the preset style.

In the embodiment of the disclosure, an avatar can be constructed for the broadcaster from the collected broadcaster features. Optionally, an avatar head model can be built from the three-dimensional head information, a three-dimensional avatar face model from the three-dimensional face information, three-dimensional models of the avatar's trunk and limbs from the three-dimensional body information, and a three-dimensional model of the avatar's clothing from the three-dimensional clothing information; the clothing model can then be rendered with the clothing material information, yielding an avatar close to the actual broadcaster's features.

In the embodiment of the present disclosure, optionally, a user setting for the avatar construction style may be received before the construction. Similar to the virtual scene, this may be a picture style, or an image style such as a cartoon doll, a bunny, a lion cub, or a panda. Taking the bunny image as an example, an avatar with the bunny image can be constructed according to the broadcaster features: the head size, hairstyle, and hair color of the bunny according to the broadcaster's head information; the proportions of the facial features and the position and color of the eyebrows and whiskers according to the broadcaster's face information; the trunk and limb proportions according to the broadcaster's three-dimensional body information; and the bunny's clothing according to the broadcaster's three-dimensional clothing information. In this way an avatar is created while the real features are retained.
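A minimal sketch of stylized avatar construction under the same assumptions; the template-fitting API (`load_template`, `fit_head`, and so on) is invented for illustration and is not the disclosed implementation.

```python
def build_avatar(broadcaster, engine, style="bunny"):
    """Construct a stylized avatar that preserves the broadcaster's features.

    `style` selects an image style (e.g. a hypothetical "bunny" template); the
    template API shown here is an assumption, not the disclosed implementation.
    """
    template = engine.load_template(style)                   # e.g. bunny base model
    avatar = template.copy()
    avatar.fit_head(broadcaster.head_mesh)                   # head size / hairstyle proportions
    avatar.fit_face(broadcaster.face_mesh)                   # facial-feature ratios and positions
    avatar.fit_body(broadcaster.body_mesh)                   # trunk and limb proportions
    if broadcaster.clothing_mesh is not None:
        avatar.fit_clothing(broadcaster.clothing_mesh,
                            broadcaster.clothing_material)   # clothing shape and material
    return avatar
```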

In the embodiment of the disclosure, optionally, different broadcaster features can be kept or discarded for different image styles. Taking the bunny image as an example again, when the avatar is created from the broadcaster features with the bunny image as the reference, information such as the ear position and size in the broadcaster's face information can be discarded.

In the embodiment of the present disclosure, optionally, the expression of the avatar may be constructed from the three-dimensional expression information in the related information of the broadcaster, that is, from the changes in the position and shape of the broadcaster's facial features; furthermore, the avatar's expression may be exaggerated within a preset range on the basis of that information, so as to enhance the interactivity and interest of the avatar. In addition, the avatar's voice may be virtualized from the broadcaster's voice information, for example with an echo effect, a raised pitch, or a lowered pitch, which is not specifically limited in the embodiment of the present disclosure.

In step S103, pose change information of the broadcaster in the live video stream is identified.

In the embodiment of the disclosure, the broadcast can be started after the virtual scene and the avatar are built. During the broadcast, the live video stream can be captured by the camera of the sending end; its image frames contain the actual scene and the actual broadcaster, and the pose change information of the broadcaster in these image frames is identified to obtain the broadcaster's pose changes during the broadcast.

Optionally, the pose change information includes: at least one of head pose change information, facial expression change information, body pose change information, and hand pose change information.

The head pose change information may include the head shaking amplitude, head position changes, and the like; the facial expression change information may include changes in the shape and position of the facial features; the body pose change information may include position changes of the limbs and the trunk; the hand pose change information may include position changes of the palm and position or shape changes of the fingers. In the embodiment of the present disclosure, a change may refer to a change between adjacent image frames, and a person skilled in the art may select more pose change information and other rules defining a change according to the actual situation, which is not limited in the embodiment of the present disclosure.
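For illustration, pose change information between adjacent frames could be represented as a simple per-landmark displacement, assuming some pose-estimation backend returns key points per frame; the landmark layout is an assumption.

```python
import numpy as np


def pose_change(prev_landmarks: np.ndarray, curr_landmarks: np.ndarray) -> np.ndarray:
    """Pose change information as the displacement of tracked landmarks
    (head, face, body, and hand key points) between adjacent image frames.

    The landmark layout is an assumption; any pose-estimation backend that
    returns per-frame key points could feed this function.
    """
    return curr_landmarks - prev_landmarks   # per-landmark displacement between frames
```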

In step S104, the pose change of the avatar in the virtual live scene is driven according to the pose change information.

In the embodiment of the disclosure, the pose change information obtained by recognizing the image frames can drive the pose change of the avatar in the virtual live scene. Optionally, the avatar can nod, shake its head, wave, blink, and so on according to the pose change information. Optionally, when the pose change information shows that the broadcaster interacts with the actual scene, for example picks up a cup, the corresponding interaction can be performed in the virtual scene, for example driving the avatar to pick up the virtualized cup in the virtual scene.
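A hedged sketch of this driving step; `apply_pose_delta` and the interaction handling are stand-ins for the skeleton/blendshape retargeting a real engine would provide.

```python
def drive_avatar(avatar, pose_delta, interaction_events=()):
    """Apply pose change information to the avatar in the virtual live scene.

    `avatar.apply_pose_delta` and the interaction handling are illustrative
    placeholders for the actual retargeting that a rendering engine would do.
    """
    avatar.apply_pose_delta(pose_delta)                    # nod, shake, wave, blink, ...
    for event in interaction_events:                       # e.g. the broadcaster picks up a cup
        if event == "pick_up_cup":
            avatar.grab(avatar.scene.find("virtual_cup"))  # mirror the interaction virtually
```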

Optionally, after the step S104, the method further includes:

In step S105, a virtual live video stream is generated according to image frames corresponding to the virtual live scene.

In the embodiment of the disclosure, after the avatar in the virtual scene is driven according to the pose change information, the image frames corresponding to the virtual live scene can be converted into a virtual live video stream that conforms to the transmission protocol of the communication link, so that the virtual live scene and the avatar in it are turned into the virtual live video stream.
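One plausible shape of this step, assuming a placeholder `renderer` and `encoder` (for example an RTMP-style pipeline); the APIs shown are assumptions, not the disclosed implementation.

```python
def generate_virtual_stream(virtual_scene, renderer, encoder):
    """Render image frames of the virtual live scene and push them into a
    video stream that matches the transport protocol of the communication link.

    `renderer` and `encoder` are placeholders for a real rendering engine and
    a real streaming encoder; their APIs are assumed for illustration.
    """
    while virtual_scene.is_live():
        frame = renderer.render(virtual_scene)   # image frame of the virtual live scene
        packet = encoder.encode(frame)           # convert to the link's transport format
        yield packet
```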

In step S106, the virtual live video stream is sent to a receiving end through a pre-established communication link.

In the embodiment of the disclosure, before the live broadcast service is initiated, a communication link between the sending end and the receiving end can be established in advance, so as to ensure the stability of virtual live broadcast video stream transmission in the live broadcast process.

By acquiring the features of the actual live scene and the actual broadcaster, the method and the apparatus can adaptively construct the virtual scene and the avatar used in the virtual live broadcast from the real scene and the real person, and can retain the characteristics of the real live scene in the virtual broadcast. Therefore, a virtual live broadcast can be provided while uniform, identical-looking virtual broadcasts are avoided, because the individual characteristics of the live scene and the broadcaster are preserved.

Fig. 2 is a flow diagram illustrating another virtual live method in accordance with an example embodiment. Referring to fig. 2, the method may include the steps of:

In step S201, non-broadcaster features and broadcaster features of an actual live scene are acquired.

In step S202, a virtual live scene is established according to the non-broadcaster features and the broadcaster features; the virtual live scene includes a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features.

In step S203, an editing operation is performed on the established virtual live scene, where the editing operation includes at least any one of the following: editing features of the virtual scene in the established virtual live scene, editing features of the avatar in the established virtual live scene, adjusting a first virtual degree of the virtual scene in the established virtual live scene, and adjusting a second virtual degree of the avatar in the established virtual live scene.

In the embodiment of the present disclosure, after the virtual live scene is established from the non-broadcaster features and the broadcaster features, a user editing operation for the virtual live scene may also be received. Optionally, editing the features of the virtual scene may mean editing its color, atmosphere, scene style, picture style, and the like: the color may be the color of the ground, walls, furniture, ceiling, or sky of the virtual scene, or the color tone of the entire virtual scene; the atmosphere may be floating petals, light, smoke, and so on; the picture style and scene style are as described above.

In the embodiment of the present disclosure, optionally, editing the features of the avatar in the virtual live scene may mean editing its color, accessories, picture style, image style, and the like, where the color may be the avatar's pupils, skin, hair, or clothing, and the accessories may be hats, jewelry, and the like, as well as picture frames, textures, and so on; the picture style and image style are as described above. The edited objects listed here are only examples, and a person skilled in the art may choose different objects to edit according to the actual application requirements, which is not specifically limited by the present disclosure.

In the embodiment of the disclosure, a first virtual degree of the virtual scene and a second virtual degree of the avatar in the established virtual live scene can be adjusted. The first virtual degree reflects the similarity between the virtual scene and the actual scene: the higher the first virtual degree, the lower the similarity, and vice versa; specifically, it covers the similarity of the structure, proportions, and size of the virtual scene to the actual scene, and of the shape, position, and size of decorations. The second virtual degree reflects the similarity between the avatar and the broadcaster: the higher the second virtual degree, the lower the similarity, and vice versa; specifically, it covers the similarity of the face, limbs, clothing, and so on to the actual appearance. In other words, the first and second virtual degrees reflect how faithfully the non-broadcaster features and broadcaster features are reproduced in the virtual live scene. The two degrees can be adjusted together, for example by displaying a single adjustment control in response to a user editing operation for both degrees and raising or lowering them according to the user's operation on that control, or they can be displayed and adjusted separately, so that the first virtual degree and the second virtual degree are adjusted independently.
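As one plausible reading only, a virtual degree could act as an interpolation weight between parameters measured from reality and fully stylized parameters; linear blending is an assumption, not the disclosed algorithm.

```python
import numpy as np


def blend_virtual_degree(real_params: np.ndarray,
                         stylized_params: np.ndarray,
                         virtual_degree: float) -> np.ndarray:
    """Interpolate between real-scene (or real-broadcaster) parameters and fully
    stylized parameters; a higher virtual degree means lower similarity to reality.

    Linear interpolation is only one plausible reading of the first/second
    virtual degree described above, not the disclosed algorithm.
    """
    virtual_degree = float(np.clip(virtual_degree, 0.0, 1.0))
    return (1.0 - virtual_degree) * real_params + virtual_degree * stylized_params
```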

In step S204, the edited virtual live scene is saved.

In the embodiment of the present disclosure, the edited virtual live scene may be saved, and subsequent broadcasts use the edited virtual live scene for virtual live broadcasting.

In step S205, pose change information of the broadcaster in the live video stream is identified.

In step S206, the pose change of the avatar in the virtual live scene is driven according to the pose change information.

Optionally, after the step S206, the method further includes:

In step S207, a virtual live video stream is generated according to image frames corresponding to the virtual live scene.

In step S208, the virtual live video stream is sent to the receiving end through a pre-established communication link.

By acquiring the features of the actual live scene and the actual broadcaster, the method can adaptively construct the virtual scene and the avatar used in the virtual live broadcast from the real scene and the real person, and can retain the characteristics of the real live scene in the virtual broadcast, so a virtual live broadcast is provided while uniform virtual broadcast appearances are avoided by preserving the individual characteristics of the live scene and the broadcaster. At the same time, the method can receive the user's edits to the virtual live scene, so that the virtual live scene can be constructed and arranged according to the user's needs while still reflecting the actual live scene.

Fig. 3 is a flow diagram illustrating yet another virtual live method in accordance with an exemplary embodiment. Referring to fig. 3, the method may include the steps of:

In step S301, non-broadcaster features and broadcaster features of an actual live scene are acquired.

In step S302, a virtual live scene is established according to the non-broadcaster features and the broadcaster features; the virtual live scene includes a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features.

Optionally, as shown in fig. 3, if the number of broadcasters in the actual live scene is greater than 1, step S303 is further included after step S302.

In step S303, a first mapping relationship between broadcaster features and avatars in the actual live scene is established.

In the embodiment of the present disclosure, when the number of broadcasters in the virtual live broadcast is greater than 1, multiple avatars are constructed. To avoid subsequently driving the wrong avatar, for example driving the avatar of a second broadcaster with the pose change information of a first broadcaster and thereby producing a broken, jarring picture, a first mapping relationship between broadcaster features and avatars is needed. Here the broadcaster features may be at least one of the head information, body information, and related information of the broadcaster; anything that can distinguish the broadcasters from one another is sufficient.
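A minimal sketch of the first mapping relationship, assuming each broadcaster's features expose some hypothetical `identity_key()` that distinguishes broadcasters from one another:

```python
def build_first_mapping(broadcaster_features, avatars):
    """First mapping relationship: distinguishable broadcaster features -> avatar.

    The key used here (a feature identifier) is an assumption; any subset of
    head/body/related information that tells broadcasters apart would do.
    """
    return {features.identity_key(): avatar
            for features, avatar in zip(broadcaster_features, avatars)}
```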

In step S304, pose change information of each broadcaster in the live video stream is identified.

In the embodiment of the present disclosure, the pose change information of each broadcaster in the live video stream now needs to be identified; the specific pose change information is as described above. In this identification, the pose change information of each broadcaster must be recognized separately. Optionally, pose change information whose distance from a common center point, or whose closest-point distance, is smaller than a preset threshold may be treated as belonging to the same broadcaster, so that different broadcasters are distinguished.

In step S305, a second mapping relationship between each broadcaster in the live video stream and the pose change information is established.

In the embodiment of the disclosure, after the pose change information of the different broadcasters has been identified, a second mapping relationship between each broadcaster and the pose change information is established: first the pose change information belonging to the same broadcaster is identified, then the corresponding broadcaster in the live video stream is identified, and the second mapping relationship between that pose change information and that broadcaster in the live video stream is established.
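An illustrative sketch of building the second mapping relationship by nearest-centroid matching under a preset distance threshold; the centroid representation, the `pose_tracks` structure, and the threshold value are all assumptions.

```python
import numpy as np


def build_second_mapping(broadcaster_centroids, pose_tracks, max_distance=50.0):
    """Second mapping relationship: broadcaster in the live video stream -> pose change track.

    A track is assigned to the broadcaster whose detected centroid is closest,
    provided the distance stays below a preset threshold; the representation
    and the threshold are illustrative assumptions, not the disclosed method.
    """
    mapping = {}
    for broadcaster_id, centroid in broadcaster_centroids.items():
        best_track, best_dist = None, max_distance
        for track_id, track in pose_tracks.items():
            dist = float(np.linalg.norm(np.asarray(track.centroid) - np.asarray(centroid)))
            if dist < best_dist:
                best_track, best_dist = track_id, dist
        if best_track is not None:
            mapping[broadcaster_id] = best_track
    return mapping
```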

In step S306, the pose change information corresponding to each broadcaster in the live video stream is determined according to the second mapping relationship.

In the embodiment of the present disclosure, the pose change information corresponding to each broadcaster in the current live video stream is determined according to the established second mapping relationship. Optionally, the correspondence between broadcasters may be identified first, that is, which broadcaster in the second mapping relationship is the same person as a broadcaster in the current live video stream, and the pose change information corresponding to that broadcaster is then determined according to the second mapping relationship.

In step S307, the pose change of each avatar in the virtual live scene is driven according to the first mapping relationship and the pose change information corresponding to each broadcaster in the live video stream.

In the embodiment of the disclosure, the relationship between an avatar and its pose change information can be confirmed from the first mapping relationship, which records the relationship between broadcaster features and avatars, together with the pose change information corresponding to each broadcaster, so that each avatar is driven according to the correct pose change information.
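Finally, a sketch of composing the two mappings to drive every avatar; `apply_pose_delta` and `latest_delta` are hypothetical stand-ins for the actual driving calls.

```python
def drive_all_avatars(first_mapping, second_mapping, pose_tracks):
    """Drive every avatar by composing the two mapping relationships.

    `first_mapping` maps a broadcaster identifier to an avatar, `second_mapping`
    maps the same identifier to a pose-change track; `apply_pose_delta` is a
    hypothetical stand-in for the actual avatar driving call.
    """
    for broadcaster_id, avatar in first_mapping.items():
        track_id = second_mapping.get(broadcaster_id)
        if track_id is None:
            continue                                   # no pose detected for this broadcaster
        avatar.apply_pose_delta(pose_tracks[track_id].latest_delta())
```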

In the embodiment of the present disclosure, the above process of matching pose change information to avatars during a multi-person virtual broadcast is only an example. A third mapping relationship may also be established between the pose change information and the broadcaster features, and the pose change information corresponding to each avatar may then be determined from the third mapping relationship and the first mapping relationship, which simplifies the intermediate calculation and improves the efficiency of the virtual broadcast.

Optionally, after the step S307, the method further includes:

In step S308, a virtual live video stream is generated according to image frames corresponding to the virtual live scene.

In step S309, the virtual live video stream is sent to the receiving end through a pre-established communication link.

By acquiring the features of the actual live scene and the actual broadcaster, the method can adaptively construct the virtual scene and the avatar used in the virtual live broadcast from the real scene and the real person, and can retain the characteristics of the real live scene in the virtual broadcast, so a virtual live broadcast is provided while uniform virtual broadcast appearances are avoided by preserving the individual characteristics of the live scene and the broadcaster. At the same time, when multiple people broadcast virtually, the avatars and the broadcasters' pose change information can be matched accurately, ensuring that the driving of the avatars is not mixed up during a multi-person virtual broadcast.

It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the embodiments are not limited by the order of actions described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions involved are not necessarily all required by the embodiments of the present application.

Fig. 4 is a block diagram illustrating an apparatus 400 for virtual live broadcasting according to an exemplary embodiment. Referring to fig. 4, the apparatus 400 includes a feature acquisition module 401, a virtual construction module 402, a broadcaster identification module 403, and a live driving module 404.

The feature acquisition module 401 is configured to acquire non-broadcaster features and broadcaster features of an actual live scene.

Optionally, the non-broadcaster features include indoor scene features and/or outdoor scene features; wherein:

the indoor scene features include: at least one of indoor three-dimensional modeling information, indoor material mapping information, and indoor illumination information;

the outdoor scene features include: at least one of 360-degree environment image information of the outdoor scene and outdoor illumination information.

Optionally, the broadcaster features include head information of the broadcaster, body information of the broadcaster, and related information of the broadcaster; wherein:

the head information of the broadcaster includes: at least one of three-dimensional head information of the broadcaster and three-dimensional face information of the broadcaster;

the body information of the broadcaster includes: at least one of three-dimensional body information of the broadcaster, three-dimensional clothing information of the broadcaster, and clothing material information;

the related information of the broadcaster includes: at least one of three-dimensional expression information of the broadcaster, voice information of the broadcaster, and accessory information of the broadcaster.

The virtual construction module 402 is configured to establish a virtual live scene according to the non-broadcaster features and the broadcaster features, where the virtual live scene includes a virtual scene established according to the non-broadcaster features and an avatar established according to the broadcaster features.

The broadcaster identification module 403 is configured to identify pose change information of the broadcaster in the live video stream.

Optionally, the posture change information includes: at least one of head posture change information, facial expression change information, body posture change information, and hand posture change information.

A live broadcast driving module 404 configured to drive the pose change of the avatar in the virtual live broadcast scene according to the pose change information.
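As a non-limiting illustration, a live driver module such as 404 might apply the identified pose change information to the avatar roughly as in the following Python sketch. The Avatar class and its fields are hypothetical placeholders for whatever rendering representation is actually used.

```python
from typing import Any, Dict, List

class Avatar:
    """Hypothetical minimal avatar state; a real implementation would live in a rendering engine."""
    def __init__(self) -> None:
        self.head_pose: List[float] = [0.0, 0.0, 0.0]      # yaw, pitch, roll
        self.expression: Dict[str, float] = {}             # blend-shape weights
        self.body_joints: Dict[str, List[float]] = {}      # body joint name -> rotation
        self.hand_joints: Dict[str, List[float]] = {}      # hand joint name -> rotation

def drive_avatar(avatar: Avatar, pose_change: Dict[str, Any]) -> None:
    """Apply head, facial-expression, body and hand pose change information to the avatar."""
    if "head" in pose_change:
        avatar.head_pose = list(pose_change["head"])
    if "expression" in pose_change:
        avatar.expression.update(pose_change["expression"])
    if "body" in pose_change:
        avatar.body_joints.update(pose_change["body"])
    if "hands" in pose_change:
        avatar.hand_joints.update(pose_change["hands"])
```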

Optionally, referring to fig. 4, the apparatus 400 further comprises a live broadcast generation module 405 and a live broadcast module 406.

A live broadcast generating module 405 configured to generate a virtual live broadcast video stream according to the image frame corresponding to the virtual live broadcast scene.

A live broadcast module 406 configured to send the virtual live broadcast video stream to a receiving end through a pre-established communication link.

Fig. 5 is a block diagram illustrating another apparatus 500 for virtual live broadcasting according to an example embodiment. Referring to fig. 5, the apparatus 500 includes a feature obtaining module 501, a virtual building module 502, a virtual editing module 503, an edit saving module 504, a live player identification module 505, and a live driver module 506.

The feature obtaining module 501 is configured to obtain a non-live player feature and a live player feature in an actual live scene.

A virtual construction module 502 configured to establish a virtual live scene according to the characteristics of the non-live player and the characteristics of the live player, where the virtual live scene includes a virtual scene established according to the characteristics of the non-live player and an avatar established according to the characteristics of the live player.

A virtual editing module 503, configured to perform an editing operation on the established virtual live scene, where the editing operation includes at least any one of: editing the characteristics of a virtual scene in the established virtual live broadcast scene, editing the characteristics of an avatar in the established virtual live broadcast scene, adjusting a first virtual degree of the virtual scene in the established virtual live broadcast scene, and adjusting a second virtual degree of the avatar in the established virtual live broadcast scene.
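One possible reading of the first and second virtual degrees, offered here purely as an assumption for illustration, is a blending weight between a feature reconstructed from the real scene or live player and a fully stylized counterpart:

```python
def blend_virtual_degree(real_value: float, stylized_value: float, degree: float) -> float:
    """Blend a real-scene feature with its stylized counterpart.

    degree = 0.0 keeps the feature as reconstructed from the actual live scene,
    degree = 1.0 replaces it entirely with the stylized (fully virtual) version.
    This linear scheme is an assumption, not part of the disclosure.
    """
    degree = min(max(degree, 0.0), 1.0)
    return (1.0 - degree) * real_value + degree * stylized_value
```

Applied per scene feature, such a weight would correspond to the first virtual degree; applied per avatar feature, to the second virtual degree.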

An edit saving module 504 configured to save the edited virtual live broadcast scene.

A live player identification module 505 configured to identify posture change information of a live player in the live video stream.

A live broadcast driving module 506 configured to drive the pose change of the avatar in the virtual live broadcast scene according to the pose change information.

Optionally, referring to fig. 5, the apparatus 500 further includes a live broadcast generating module 507 and a live broadcast module 508.

A live broadcast generating module 507 configured to generate a virtual live broadcast video stream according to the image frame corresponding to the virtual live broadcast scene.

A live broadcast module 508 configured to send the virtual live broadcast video stream to a receiving end through a pre-established communication link.

Fig. 6 is a block diagram illustrating yet another apparatus 600 for virtual live broadcasting according to an example embodiment. Referring to fig. 6, the apparatus 600 includes a feature acquisition module 601, a virtual construction module 602, a first mapping module 603, a live player identification module 604, and a live driver module 605.

The feature obtaining module 601 is configured to obtain a non-live player feature and a live player feature in an actual live scene.

A virtual construction module 602 configured to establish a virtual live scene according to the characteristics of the non-live player and the characteristics of the live player; the virtual live broadcast scene comprises a virtual scene established according to the characteristics of the non-live broadcast persons and an avatar established according to the characteristics of the live broadcast persons.

A first mapping module 603 configured to establish a first mapping relationship between live player characteristics and an avatar in the actual live scene.

A live player identification module 604 configured to identify pose change information of a live player in the live video stream.

Optionally, the live player identification module 604 includes: a multi-live player identification sub-module 6041 and a second mapping sub-module 6042.

A multi-live player identification submodule 6041 configured to identify pose change information of each live player in the live video stream;

a second mapping sub-module 6042 configured to establish a second mapping relationship between a live player and the pose change information in the live video stream.

A live broadcast driving module 605 configured to drive the pose change of the avatar in the virtual live broadcast scene according to the pose change information.

Optionally, the live driving module 605 includes a second mapping relation confirming sub-module 6051 and a first mapping relation confirming sub-module 6052.

A second mapping relation confirming sub-module 6051 configured to determine, according to the second mapping relation, the pose change information corresponding to each live player in the live video stream.

A first mapping relation confirming sub-module 6052 configured to drive the pose change of each avatar in the virtual live broadcast scene according to the first mapping relation and the pose change information corresponding to each live player in the live video stream.
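For illustration, the cooperation between the two mapping relationships could be sketched as follows. The dictionary-based mappings, the player_id keys, and the drive callback (for example the drive_avatar helper sketched earlier) are hypothetical choices rather than requirements of the disclosure.

```python
from typing import Any, Callable, Dict

def drive_all_avatars(
    first_mapping: Dict[str, Any],                 # live player id -> avatar (first mapping relationship)
    second_mapping: Dict[str, Dict[str, Any]],     # live player id -> pose change info (second mapping relationship)
    drive: Callable[[Any, Dict[str, Any]], None],  # routine that applies pose change info to one avatar
) -> None:
    """Drive each avatar with the pose change information of its own live player."""
    for player_id, pose_change in second_mapping.items():
        avatar = first_mapping.get(player_id)
        if avatar is not None:
            drive(avatar, pose_change)
```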

Optionally, referring to fig. 6, the apparatus 600 further includes a live broadcast generating module 606 and a live broadcast module 607.

A live broadcast generation module 606 configured to generate a virtual live broadcast video stream according to the image frame corresponding to the virtual live broadcast scene.

The live broadcast module 607 is configured to send the virtual live broadcast video stream to a receiving end through a pre-established communication link.
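By way of example only, the generation and transmission performed by modules such as 606 and 607 could take the form below: already-encoded image frames of the virtual live scene are length-prefixed and written to a pre-established connection. The length-prefixed framing and the socket-based link are assumptions made for this sketch, not features of the disclosure.

```python
import socket
from typing import Iterable

def push_virtual_live_stream(encoded_frames: Iterable[bytes], conn: socket.socket) -> None:
    """Send encoded frames of the virtual live video stream over a pre-established link."""
    for frame in encoded_frames:
        header = len(frame).to_bytes(4, "big")  # simple length-prefixed framing (assumption)
        conn.sendall(header + frame)
```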

With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. The electronic device may be a mobile terminal or a server; in the embodiment of the present disclosure, the electronic device being a mobile terminal is taken as an example for description. For example, the electronic device 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.

Referring to fig. 7, electronic device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.

The processing component 702 generally controls overall operation of the electronic device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.

The memory 704 is configured to store various types of data to support operations at the electronic device 700. Examples of such data include instructions for any application or method operating on the electronic device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.

The power supply component 706 provides power to the various components of the electronic device 700. The power components 706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 700.

The multimedia component 708 includes a screen that provides an output interface between the electronic device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.

The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.

The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.

The sensor assembly 714 includes one or more sensors for providing various aspects of status assessment for the electronic device 700. For example, the sensor assembly 714 may detect an open/closed state of the electronic device 700 and the relative positioning of components, such as a display and keypad of the electronic device 700; the sensor assembly 714 may also detect a change in the position of the electronic device 700 or of a component of the electronic device 700, the presence or absence of user contact with the electronic device 700, the orientation or acceleration/deceleration of the electronic device 700, and a change in the temperature of the electronic device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 716 is configured to facilitate wired or wireless communication between the electronic device 700 and other devices. The electronic device 700 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the electronic device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described method of virtual live broadcast illustrated in fig. 1 to 3.

In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, is also provided; the instructions are executable by the processor 720 of the electronic device 700 to perform the method of virtual live broadcast shown in fig. 1 to 3 described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, a computer program product is also provided; when its instructions are executed by the processor 720 of the electronic device 700, the electronic device 700 is caused to perform the method of virtual live broadcast shown in fig. 1 to 3 described above.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
