Live video broadcasting method, computing device, and computer storage medium

Document No.: 956371 · Publication date: 2020-10-30 · Views: 2 · Language: Chinese

Note: This invention, "Live video broadcasting method, computing device and computer storage medium", was designed and created by Liu Yang on 2020-07-31. Abstract: The invention discloses a live video broadcasting method, a computing device, and a computer storage medium. The method comprises: receiving a first live video stream uploaded by an anchor client; performing occlusion processing on the video frames contained in the first live video stream to obtain an occlusion-processed second live video stream; acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and pushing the third live video stream to the viewing client for the viewing client to render the live video picture. In this way, the live video pictures rendered by different viewing clients differ, giving the live broadcast a personalized character and achieving the effect of "a thousand faces for a thousand viewers".

1. A live video broadcasting method, comprising:

receiving a first live video stream uploaded by an anchor client;

performing occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

pushing the third live video stream to the viewing client for the viewing client to render a live video picture.

2. The method of claim 1, wherein performing occlusion processing on the video frames contained in the first live video stream further comprises:

performing occlusion processing on the video frames contained in the first live video stream according to face landmark data;

wherein the face landmark data is obtained by performing face recognition on the video frames contained in the first live video stream.

3. The method of claim 1 or 2, wherein performing occlusion processing on the video frames contained in the first live video stream further comprises:

adding an occlusion layer to the face region, or to each facial feature region, in the video frames contained in the first live video stream.

4. The method of any of claims 1-3, wherein performing de-occlusion processing on the second live video stream according to the user behavior data further comprises:

determining de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information.

5. The method of claim 4, wherein the occlusion layer comprises: a sticker-style occlusion layer and/or a blur mask layer.

6. The method of claim 5, wherein determining the de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information, further comprises:

removing, according to the de-occlusion level information, the sticker-style occlusion layer added to the corresponding facial feature region in the video frames contained in the second live video stream; and/or

weakening, to a corresponding degree according to the de-occlusion level information, the blur mask layer added to the video frames contained in the second live video stream.

7. The method of claim 4, wherein the user behavior data comprises one or more of:

user interaction behavior data, user reading behavior data, user payment behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

8. The method of claim 7, wherein performing de-occlusion processing on the second live video stream according to the user behavior data further comprises:

receiving a de-occlusion request triggered through a de-occlusion payment entry in the viewing client, and receiving user payment behavior data;

verifying whether payment succeeded according to the user payment behavior data; and

if so, performing de-occlusion processing on the second live video stream.

9. A computing device, comprising a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus; and

the memory is configured to store at least one executable instruction that causes the processor to:

receive a first live video stream uploaded by an anchor client;

perform occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquire user behavior data of a viewing client, and perform de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

push the third live video stream to the viewing client for the viewing client to render a live video picture.

10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to:

receive a first live video stream uploaded by an anchor client;

perform occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquire user behavior data of a viewing client, and perform de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

push the third live video stream to the viewing client for the viewing client to render a live video picture.

Technical Field

The present invention relates to the field of live video technology, and in particular to a live video broadcasting method, a computing device, and a computer storage medium.

Background

Face recognition technology is now relatively mature, and products built on it, such as beautification camera apps and live streaming software, are increasingly widespread. These products can recognize a large number of facial landmarks and process the face region algorithmically to achieve effects such as face beautification and face stickers.

In existing live video solutions, video stream processing is performed on the anchor client side: the anchor client recognizes the face while shooting, applies beautification or sticker processing, and then uploads the live video stream to a server; a viewing client pulls the live video stream from the server and renders the live video picture.

However, in the course of implementing the invention, the inventor found that the prior art has at least the following defect: because the live video stream is processed uniformly at the anchor client, the live video pictures on all viewing clients are identical, with no differentiation.

Disclosure of Invention

In view of the above, the present invention provides a live video broadcasting method, a computing device, and a computer storage medium that overcome, or at least partially solve, the above problems.

According to one aspect of the present invention, there is provided a live video broadcasting method, comprising:

receiving a first live video stream uploaded by an anchor client;

performing occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

pushing the third live video stream to the viewing client for the viewing client to render a live video picture.

According to another aspect of the present invention, there is provided a computing device, comprising a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus; and

the memory is configured to store at least one executable instruction that causes the processor to:

receive a first live video stream uploaded by an anchor client;

perform occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquire user behavior data of a viewing client, and perform de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

push the third live video stream to the viewing client for the viewing client to render a live video picture.

According to yet another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to:

receive a first live video stream uploaded by an anchor client;

perform occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquire user behavior data of a viewing client, and perform de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

push the third live video stream to the viewing client for the viewing client to render a live video picture.

According to the live video broadcasting method, computing device, and computer storage medium described above, the server receives a first live video stream uploaded by an anchor client; performs occlusion processing on the video frames it contains to obtain an occlusion-processed second live video stream; acquires user behavior data of a viewing client and de-occludes the second live video stream accordingly to obtain a third live video stream; and pushes the third live video stream to the viewing client for the viewing client to render the live video picture. In this way, the live video pictures rendered by different viewing clients differ, giving the live broadcast a personalized character and achieving the effect of "a thousand faces for a thousand viewers".

The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, and to make the above and other objects, features, and advantages more readily understandable, embodiments of the invention are described below.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

fig. 1 shows a flowchart of a live video broadcasting method provided by an embodiment of the present invention;

fig. 2 shows a flowchart of a live video broadcasting method provided by another embodiment of the present invention;

fig. 3 shows a flowchart of a live video broadcasting method provided by yet another embodiment of the present invention;

fig. 4 shows a schematic structural diagram of a computing device provided by an embodiment of the present invention.

Detailed Description

Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

Fig. 1 shows a flowchart of a live video broadcasting method provided by an embodiment of the present invention; the method is applied in a server. As shown in fig. 1, the method comprises the following steps:

Step S110: receive a first live video stream uploaded by an anchor client.

The anchor client is the client used by the video anchor. The anchor client uploads the live video stream to the server, the server sends the live video stream to a viewing client, and the viewing client renders a live video picture from the received stream so that its user can watch the broadcast.

Optionally, the first live video stream is obtained by preprocessing the original live video stream captured by the camera; the preprocessing may be, for example, image quality optimization, filter processing, or color processing, and the present invention is not limited in this respect.

Step S120: perform occlusion processing on the video frames contained in the first live video stream to obtain an occlusion-processed second live video stream.

The occlusion processing may be adding a sticker layer or a blur mask layer to the face region in each video frame; that is, a sticker layer or blur mask layer is added to the face region of the video frames contained in the first live video stream, and the second live video stream is obtained after this processing.
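The blur-mask variant of this occlusion step can be sketched as follows. This is an illustrative sketch only, not code from the patent: the frame is modeled as a plain 2D list of grayscale pixels, and `blur_region`, its box argument, and the `strength` parameter are all assumed names. A real implementation would run per decoded video frame, typically with an image-processing library.

```python
# Illustrative sketch: apply a blur mask to a face region of one frame.
# The frame is a 2D list of grayscale pixel values.

def blur_region(frame, box, strength):
    """Box-blur the pixels inside box = (top, left, bottom, right).

    strength in [0, 1]: 0 leaves the region untouched, 1 replaces each
    pixel with its 3x3 neighborhood mean (full mask).
    """
    top, left, bottom, right = box
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # copy; the input frame is unchanged
    for y in range(top, bottom):
        for x in range(left, right):
            # Mean of the 3x3 neighborhood, clipped at the frame edges.
            ys = range(max(0, y - 1), min(h, y + 2))
            xs = range(max(0, x - 1), min(w, x + 2))
            vals = [frame[j][i] for j in ys for i in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = (1 - strength) * frame[y][x] + strength * mean
    return out
```

Weakening the mask later (step S130) then amounts to re-applying the blur to the original frame with a smaller `strength`.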

Step S130: acquire user behavior data of a viewing client, and perform de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream.

The viewing client is the client a user uses to watch the live broadcast. The server obtains the user behavior data of the viewing client and determines from it whether the second live video stream should be de-occluded; if so, the occlusion added to the video frames of the second live video stream is removed, yielding the third live video stream. For example, the user viewing behavior data of the viewing client is obtained, and if the viewing metric is judged to exceed a preset threshold, the live video stream is de-occluded.
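The per-viewer decision this step describes can be put into a minimal sketch. The function name, the `metric` field, and the threshold value are assumptions, not from the patent; the patent only fixes the rule that a viewer whose behavior metric passes a preset limit receives the de-occluded stream.

```python
# Illustrative sketch (names are assumptions): choose which stream a
# given viewer receives based on one accumulated behavior metric.

def select_stream(behavior_data, threshold):
    """Return the occluded second stream by default, and the
    de-occluded third stream once the viewer's accumulated metric
    (e.g. watch time or tip value) reaches the preset threshold."""
    metric = behavior_data.get("metric", 0)
    return "third_stream" if metric >= threshold else "second_stream"
```

Because the decision is made per viewing client at push time, two viewers of the same broadcast can receive different streams, which is the source of the "thousand faces" effect.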

Step S140: push the third live video stream to the viewing client for the viewing client to render the live video picture.

The server pushes the third live video stream to the viewing client, and the viewing client renders the live video picture from it.

In the method provided by this embodiment, the live video stream is first occluded uniformly at the server, and when it is pushed to a viewing client, the occluded stream is de-occluded according to that client's user behavior data. In this way, different viewing clients render different live video pictures, giving the live broadcast a personalized character and achieving the effect of "a thousand faces for a thousand viewers".

Fig. 2 shows a flowchart of a live video broadcasting method provided by another embodiment of the present invention; the method is applied in a server. As shown in fig. 2, the method comprises the following steps:

Step S210: receive a first live video stream uploaded by an anchor client.

Step S220: perform occlusion processing on the video frames contained in the first live video stream according to face landmark data, to obtain an occlusion-processed second live video stream.

The face landmark data is obtained by performing face recognition on the video frames contained in the first live video stream.

Face recognition identifies the face, and its key points, in each video frame of the first live video stream; the key points include face contour points and the key points of each facial feature, from which the face landmark data is extracted. This step may be executed by the anchor client or by the server, and the invention is not limited in this respect. The video frames of the first live video stream are then occluded: the face region, or each facial feature region, is covered, for example by occluding the head, ears, neck, eyebrows, chin, eyes, mouth, cheeks, nose, and other feature regions with different occlusion layers.

Optionally, the occlusion processing specifically comprises: adding an occlusion layer to the face region, or to each facial feature region, of the video frames contained in the first live video stream. The occlusion layer may be a sticker-style layer, such as a panda-head sticker or a rabbit-ear sticker, or it may be a blur mask layer.
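A minimal sketch of attaching per-feature occlusion layers from landmark data follows. The region names, the landmark format (lists of `(x, y)` points), and the layer identifiers are assumptions; the patent only specifies that each recognized feature region may receive its own occlusion layer.

```python
# Illustrative sketch (region names and layer ids are assumptions): map
# each recognized facial feature region to the occlusion layer that
# should cover it. Landmark data would come from a face-recognition
# step run on each decoded frame.

def bounding_box(points):
    """Axis-aligned box (top, left, bottom, right) around (x, y) points."""
    ys = [p[1] for p in points]
    xs = [p[0] for p in points]
    return (min(ys), min(xs), max(ys) + 1, max(xs) + 1)

def build_occlusion_plan(landmarks, layer_for_region):
    """For every feature region that has an assigned layer, record the
    region's box and layer (e.g. a sticker id or a blur-mask marker)."""
    return {
        region: {"box": bounding_box(points), "layer": layer_for_region[region]}
        for region, points in landmarks.items()
        if region in layer_for_region
    }
```

Keeping the plan as data rather than baking the layers into the frames is one way to make the later per-region de-occlusion (claim 6 and fig. 3) straightforward.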

Step S230: acquire user behavior data of the viewing client, determine de-occlusion level information from the user behavior data, and de-occlude the second live video stream to a corresponding degree according to the de-occlusion level information, obtaining a third live video stream.

In the method of this embodiment, a de-occlusion level is determined from the user behavior data of the viewing client; different de-occlusion levels correspond to different degrees of de-occlusion processing, and the second live video stream is then de-occluded according to the de-occlusion level information.

Specifically, the sticker-style occlusion layer added to the corresponding facial feature region of the video frames in the second live video stream is removed according to the de-occlusion level information; that is, different de-occlusion levels remove the occlusion layer from different feature regions. And/or, the blur mask layer added to the video frames of the second live video stream is weakened to a corresponding degree: different de-occlusion levels weaken the blur by different amounts.

For example, at de-occlusion level one, one sticker added to a facial feature region of the video frames in the second live video stream is removed, or the blur of the added blur mask layer is weakened by 10%; at a higher level, say level three, the stickers on three feature regions are removed, or the blur is weakened by 30%; and at the highest level, all stickers added to the video frames are removed, or the added blur mask layer is removed entirely. In short, the higher the de-occlusion level, the stronger the de-occlusion applied to the video frames of the second live video stream; at the highest level, the added occlusion is removed completely.
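The example above can be captured in a small level-to-effect mapping. The `MAX_LEVEL` constant and the function name are assumptions; only the 10%-per-level blur weakening and the one-more-sticker-per-level progression come from the example in the text.

```python
# Illustrative sketch: translate a de-occlusion level into the remaining
# blur strength and the number of sticker regions to uncover.

MAX_LEVEL = 10  # assumed: the level at which all occlusion is removed

def deocclusion_effect(level):
    """Return (remaining_blur, stickers_removed) for a level in
    [0, MAX_LEVEL]: each level weakens the blur by 10% and uncovers one
    more sticker region; the highest level clears everything."""
    level = max(0, min(level, MAX_LEVEL))
    remaining_blur = round(1.0 - 0.1 * level, 2)
    return remaining_blur, level
```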

As another example, a priority may be set for each facial feature region, together with a correspondence between de-occlusion level information and feature-region priority; the occlusion added to the feature regions of the corresponding priority in the video frames of the second live video stream is then removed according to the de-occlusion level information. The greater a feature region's sensory impact on the viewer, the higher the priority assigned to it.
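A sketch of this priority-ordered variant, assuming a concrete region ordering (the patent only states that regions with greater sensory impact get higher priority; the ordering below is invented for illustration):

```python
# Illustrative sketch (the ordering is an assumption): uncover feature
# regions in priority order, as many regions as the level allows.

REGION_PRIORITY = ["eyes", "mouth", "nose", "cheeks", "eyebrows", "ears"]

def regions_to_uncover(level):
    """The `level` highest-priority feature regions lose their occlusion."""
    return REGION_PRIORITY[:max(0, level)]
```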

Optionally, the user behavior data comprises one or more of: user interaction behavior data, user reading behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

For example, the live broadcast may concern a book, that is, the broadcast may be associated with a particular book or book order. During the broadcast, the user's reading behavior data is obtained and the user's reading progress for the featured book is determined from it; a corresponding de-occlusion level is then determined from the reading progress. The larger the proportion of the book the user has read, the higher the de-occlusion level, and hence the stronger the de-occlusion applied to the second live video stream.

As another example, the number of user interactions is determined from the user interaction behavior data: when it reaches a first preset value, the de-occlusion level is set to level one, and when it reaches a second preset value, the level is set to level two; that is, the more interactions, the higher the de-occlusion level. Similarly, the user tipping behavior data determines a tip value, and the higher the tip value, the higher the de-occlusion level; the user comment behavior data determines a comment count, the user like behavior data a like count, and the user sharing behavior data a share count, and the higher any of these counts, the higher the de-occlusion level. Of course, these relationships between user behavior data and de-occlusion level are merely illustrative, and the invention is not limited to them.
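The count-to-level rules above can be sketched as follows. The concrete thresholds and the choice to combine counters by taking the strongest single signal are assumptions; the patent only fixes that more activity yields a higher de-occlusion level.

```python
# Illustrative sketch (thresholds and the max() combination rule are
# assumptions): derive a de-occlusion level from behavior counters.

LEVEL_THRESHOLDS = [1, 5, 20, 50]  # assumed counts needed for levels 1..4

def level_from_count(count):
    """Highest level whose threshold the counter has reached."""
    return sum(1 for t in LEVEL_THRESHOLDS if count >= t)

def deocclusion_level(behavior):
    """Combine interaction, comment, like, and share counters by taking
    the strongest single signal among them."""
    return max(level_from_count(behavior.get(k, 0))
               for k in ("interactions", "comments", "likes", "shares"))
```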

Step S240: push the third live video stream to the viewing client for the viewing client to render the live video picture.

The server pushes the third live video stream to the viewing client, and the viewing client renders the live video picture from it.

In the method provided by this embodiment, the live video stream is first occluded uniformly at the server, and when it is pushed to a viewing client, the occlusion layers added during occlusion processing are removed to different degrees according to the user behavior data. In this way, different viewing clients render different live video pictures, giving the live broadcast a personalized character and achieving the effect of "a thousand faces for a thousand viewers".

Fig. 3 shows a flowchart of a live video broadcasting method provided by yet another embodiment of the present invention; the method is applied in a server. As shown in fig. 3, the method comprises the following steps:

Step S310: receive a first live video stream uploaded by an anchor client.

Step S320: perform occlusion processing on the video frames contained in the first live video stream to obtain an occlusion-processed second live video stream.

Step S330: receive a de-occlusion request triggered through the de-occlusion payment entry in the viewing client, and receive user payment behavior data.

A de-occlusion payment entry is provided in the viewing client; the user initiates a de-occlusion request by triggering it and then pays as directed by the payment prompt.

Step S340: verify whether the payment succeeded according to the user payment behavior data.

Step S350: if the payment succeeded, de-occlude the second live video stream to obtain a third live video stream.

If the payment succeeded, the second live video stream is de-occluded to obtain the third live video stream. Otherwise, the second live video stream is not de-occluded, and either a payment-failure prompt is returned to the viewing client or the second live video stream is pushed to it directly.

In an optional implementation, the de-occlusion payment entry comprises separate entries corresponding to the individual facial feature regions, for example a de-occlusion payment entry for the cheek region, one for the eye region, one for the mouth region, one for the nose region, and so on.

In this implementation, if the user pays successfully, the occlusion layer added to the facial feature region corresponding to the de-occlusion payment entry the user triggered is removed from the video frames of the second live video stream. For example, if the user triggers the payment entry for the cheek region and pays successfully, the occlusion layer added to the cheek region of those video frames is removed.
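Steps S330-S350 with per-region payment entries might look like the following sketch. `verify_payment` is a stub standing in for real payment verification, and all function and field names are assumptions, not from the patent.

```python
# Illustrative sketch (verification is stubbed; names are assumptions):
# the payment-gated flow of steps S330-S350, with one payment entry per
# facial feature region.

def verify_payment(payment_data):
    """Stand-in for verifying the user payment behavior data; a real
    system would check with the payment provider here."""
    return payment_data.get("status") == "success"

def handle_deocclusion_request(request, occluded_regions):
    """Remove the occlusion for the paid-for region only; on a failed
    payment the occluded-region set is returned unchanged."""
    if not verify_payment(request["payment_data"]):
        return occluded_regions, "payment_failed"
    remaining = occluded_regions - {request["region"]}
    return remaining, "deoccluded"
```

The remaining occluded regions still determine what the third stream shows, so each paid request peels back exactly one region for that viewer.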

Step S360: push the third live video stream to the viewing client for the viewing client to render the live video picture.

In the method provided by this embodiment, the live video stream is first occluded uniformly at the server, and when it is pushed to a viewing client, the occlusion layer added during occlusion processing is removed according to the user's de-occlusion payment behavior, so that a user can pay to remove the occlusion from the face in the live video picture.

An embodiment of the present invention provides a non-volatile computer storage medium storing at least one executable instruction; the executable instruction can cause a processor to perform the live video broadcasting method of any of the above method embodiments.

The executable instructions may specifically be configured to cause the processor to:

receive a first live video stream uploaded by an anchor client;

perform occlusion processing on video frames contained in the first live video stream to obtain an occlusion-processed second live video stream;

acquire user behavior data of a viewing client, and perform de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream; and

push the third live video stream to the viewing client for the viewing client to render a live video picture.

In an optional implementation, the executable instructions cause the processor to:

perform occlusion processing on the video frames contained in the first live video stream according to face landmark data, wherein the face landmark data is obtained by performing face recognition on the video frames contained in the first live video stream.

In an optional implementation, the executable instructions cause the processor to: add an occlusion layer to the face region, or to each facial feature region, in the video frames contained in the first live video stream.

In an optional implementation, the executable instructions cause the processor to: determine de-occlusion level information according to the user behavior data, and perform de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information.

In an optional implementation, the occlusion layer comprises: a sticker-style occlusion layer and/or a blur mask layer.

In an optional implementation, the executable instructions cause the processor to:

remove, according to the de-occlusion level information, the sticker-style occlusion layer added to the corresponding facial feature region in the video frames contained in the second live video stream; and/or

weaken, to a corresponding degree according to the de-occlusion level information, the blur mask layer added to the video frames contained in the second live video stream.

In an optional implementation, the user behavior data comprises one or more of:

user interaction behavior data, user reading behavior data, user payment behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

In an optional implementation, the executable instructions cause the processor to:

receive a de-occlusion request triggered through a de-occlusion payment entry in the viewing client, and receive user payment behavior data;

verify whether the payment succeeded according to the user payment behavior data; and

if so, perform de-occlusion processing on the second live video stream.

In an optional implementation, the de-occlusion payment entry comprises de-occlusion payment entries corresponding to the individual facial feature regions, and the executable instructions cause the processor to:

remove, from the video frames contained in the second live video stream, the occlusion layer added to the facial feature region corresponding to the triggered de-occlusion payment entry.

In the manner of this embodiment, the live video stream is first occluded uniformly, and when it is pushed to a viewing client, the occluded stream is de-occluded according to the user behavior data. In this way, different viewing clients render different live video pictures, giving the live broadcast a personalized character and achieving the effect of "a thousand faces for a thousand viewers".

Fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the concrete implementation of the computing device.

As shown in fig. 4, the computing device may include: a processor 402, a communication interface 404, a memory 406, and a communication bus 408.

The processor 402, the communication interface 404, and the memory 406 communicate with one another via the communication bus 408. The communication interface 404 is configured to communicate with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically perform the relevant steps of the above live video broadcast method embodiments for the computing device.

In particular, program 410 may include program code comprising computer operating instructions.

The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.

The memory 406 is configured to store the program 410. The memory 406 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.

The program 410 may specifically be configured to cause the processor 402 to perform the following operations:

receiving a first live video stream uploaded by an anchor client;

performing occlusion processing on the video frame images contained in the first live video stream to obtain an occluded second live video stream;

acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream;

and pushing the third live video stream to the viewing client for the viewing client to render live video pictures.

In an alternative, the program 410 causes the processor 402 to:

performing occlusion processing on the video frame images contained in the first live video stream according to face landmark data;

wherein the face landmark data is obtained by performing face recognition processing on the video frame images contained in the first live video stream.

In an alternative, the program 410 causes the processor 402 to:

adding an occlusion layer to the face region, or to each face organ region, in the video frame images contained in the first live video stream.
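One possible occlusion layer of the kind just described is a pixelation mask over a face-organ region whose bounds come from the landmark data. The sketch below operates on a grayscale frame represented as a list of rows; the region format and block size are illustrative assumptions, not the patent's method.

```python
# Pixelate a rectangular face-organ region: average each small block so
# the region becomes unrecognizable while the rest of the frame is kept.
def add_pixelation_layer(frame, region, block=2):
    top, bottom, left, right = region  # bounds from face landmark data
    out = [row[:] for row in frame]    # copy so the original stays intact
    for y0 in range(top, bottom, block):
        for x0 in range(left, right, block):
            ys = range(y0, min(y0 + block, bottom))
            xs = range(x0, min(x0 + block, right))
            vals = [frame[y][x] for y in ys for x in xs]
            mean = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

frame = [[x * 10 for x in range(4)] for _ in range(4)]
masked = add_pixelation_layer(frame, (0, 2, 0, 2))
assert masked[0][0] == masked[0][1] == masked[1][0] == masked[1][1]  # blocked
assert masked[3] == frame[3]  # rows outside the region are untouched
```

A production implementation would more likely use an image library's blur or an overlaid sticker bitmap, but the per-region principle is the same.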

In an alternative, the program 410 causes the processor 402 to:

determining de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information.
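One way to derive de-occlusion level information from behavior data is to score the behaviors and bucket the score into levels. The weights and thresholds below are invented for the sketch; the source does not specify a scoring rule.

```python
# Illustrative mapping from user behavior data to a de-occlusion level.
WEIGHTS = {"comment": 1, "like": 1, "share": 2, "tip": 3, "payment": 5}

def deocclusion_level(behavior, thresholds=(2, 5, 10)):
    """Score the behavior counts and bucket the score into levels 0..3."""
    score = sum(WEIGHTS.get(kind, 0) * count for kind, count in behavior.items())
    return sum(score >= t for t in thresholds)

assert deocclusion_level({}) == 0
assert deocclusion_level({"like": 1, "comment": 1}) == 1
assert deocclusion_level({"payment": 2}) == 3
```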

In an alternative approach, the occlusion layer comprises: a sticker-style occlusion layer and/or a blur mask layer.

In an alternative, the program 410 causes the processor 402 to:

removing, according to the de-occlusion level information, the sticker-style occlusion layer added to the corresponding face organ region in the video frame images contained in the second live video stream; and/or,

weakening, to a corresponding degree according to the de-occlusion level information, the blur mask layer added to the video frame images contained in the second live video stream.
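Weakening a blur mask "to a corresponding degree" can be modeled as blending each occluded pixel back toward the original in proportion to the de-occlusion level. The level scale and names below are assumptions for the sketch.

```python
# Blend between the occluded frame (level 0) and the original (max level).
def weaken_mask(original, occluded, level, max_level=3):
    alpha = level / max_level  # 0.0 = fully occluded, 1.0 = fully clear
    return [
        [round(alpha * o + (1 - alpha) * m) for o, m in zip(orow, mrow)]
        for orow, mrow in zip(original, occluded)
    ]

orig = [[100, 200]]
blurred = [[150, 150]]
assert weaken_mask(orig, blurred, 3) == orig          # top level: no mask left
assert weaken_mask(orig, blurred, 0) == blurred       # level 0: fully masked
assert weaken_mask(orig, blurred, 1) == [[133, 167]]  # partial weakening
```

Under this model, raising a viewer's level smoothly reveals the masked region without the server having to keep more than the original and occluded frames.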

In an alternative approach, the user behavior data includes one or more of the following:

user interaction behavior data, user viewing behavior data, user payment behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

In an alternative, the program 410 causes the processor 402 to:

receiving a de-occlusion request triggered through a de-occlusion payment entry in the viewing client, and receiving user payment behavior data;

verifying, according to the user payment behavior data, whether the payment is successful;

and if the payment is successful, performing de-occlusion processing on the second live video stream.

In an alternative approach, the de-occlusion payment entry comprises: a de-occlusion payment entry corresponding to each face organ region; the program 410 causes the processor 402 to perform the following operations:

removing, from the video frame images contained in the second live video stream, the occlusion layer added to the face organ region corresponding to the triggered de-occlusion payment entry.

In this embodiment, the live video stream is first uniformly occluded; when the stream is pushed to a viewing client, the occluded live video stream is then de-occluded according to that client's user behavior data. In this way, the live video pictures rendered by different viewing clients differ, giving the live broadcast a personalized character and achieving a "thousand faces for a thousand viewers" effect.
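The payment-gated, per-organ de-occlusion flow described above can be sketched as follows. The order store and field names are stand-ins for a real payment backend and request schema, which the source does not specify.

```python
# Verify the payment from the user payment behavior data, then remove
# only the occlusion layer tied to the triggered payment entry.
PAID_ORDERS = {"order-1001"}  # stand-in for a payment backend lookup

def payment_succeeded(payment_data):
    return payment_data.get("order_id") in PAID_ORDERS

def handle_deocclusion_request(request, occluded_regions):
    """Return the face-organ regions still occluded after the request."""
    if not payment_succeeded(request["payment"]):
        return list(occluded_regions)  # verification failed: no change
    entry = request["entry_region"]    # organ tied to the payment entry
    return [r for r in occluded_regions if r != entry]

regions = ["eyes", "mouth"]
ok = {"payment": {"order_id": "order-1001"}, "entry_region": "eyes"}
assert handle_deocclusion_request(ok, regions) == ["mouth"]
bad = {"payment": {"order_id": "order-9999"}, "entry_region": "eyes"}
assert handle_deocclusion_request(bad, regions) == regions
```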

The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.

In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.

The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

The invention discloses: A1. A live video broadcast method, comprising:

receiving a first live video stream uploaded by an anchor client;

performing occlusion processing on the video frame images contained in the first live video stream to obtain an occluded second live video stream;

acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream;

and pushing the third live video stream to the viewing client for the viewing client to render live video pictures.

A2. The method according to A1, wherein the occlusion processing of the video frame images contained in the first live video stream further comprises:

performing occlusion processing on the video frame images contained in the first live video stream according to face landmark data;

wherein the face landmark data is obtained by performing face recognition processing on the video frame images contained in the first live video stream.

A3. The method according to A1 or A2, wherein the occlusion processing of the video frame images contained in the first live video stream further comprises:

adding an occlusion layer to the face region, or to each face organ region, in the video frame images contained in the first live video stream.

A4. The method of any one of A1-A3, wherein performing de-occlusion processing on the second live video stream according to the user behavior data further comprises:

determining de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information.

A5. The method of A4, wherein the occlusion layer comprises: a sticker-style occlusion layer and/or a blur mask layer.

A6. The method according to A5, wherein the determining de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information, further comprises:

removing, according to the de-occlusion level information, the sticker-style occlusion layer added to the corresponding face organ region in the video frame images contained in the second live video stream; and/or,

weakening, to a corresponding degree according to the de-occlusion level information, the blur mask layer added to the video frame images contained in the second live video stream.

A7. The method of A4, wherein the user behavior data includes one or more of the following:

user interaction behavior data, user viewing behavior data, user payment behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

A8. The method of A7, wherein performing de-occlusion processing on the second live video stream according to the user behavior data further comprises:

receiving a de-occlusion request triggered through a de-occlusion payment entry in the viewing client, and receiving user payment behavior data;

verifying, according to the user payment behavior data, whether the payment is successful;

and if the payment is successful, performing de-occlusion processing on the second live video stream.

A9. The method of A8, wherein the de-occlusion payment entry comprises: a de-occlusion payment entry corresponding to each face organ region;

wherein performing de-occlusion processing on the second live video stream according to the user behavior data further comprises:

removing, from the video frame images contained in the second live video stream, the occlusion layer added to the face organ region corresponding to the triggered de-occlusion payment entry.

B10. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;

the memory is configured to store at least one executable instruction that causes the processor to:

receiving a first live video stream uploaded by an anchor client;

performing occlusion processing on the video frame images contained in the first live video stream to obtain an occluded second live video stream;

acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream;

and pushing the third live video stream to the viewing client for the viewing client to render live video pictures.

B11. The computing device of B10, the executable instructions further cause the processor to:

performing occlusion processing on the video frame images contained in the first live video stream according to face landmark data;

wherein the face landmark data is obtained by performing face recognition processing on the video frame images contained in the first live video stream.

B12. The computing device of B10 or B11, the executable instructions further cause the processor to:

adding an occlusion layer to the face region, or to each face organ region, in the video frame images contained in the first live video stream.

B13. The computing device of any one of B10-B12, the executable instructions further cause the processor to:

determining de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information.

B14. The computing device of B13, wherein the occlusion layer comprises: a sticker-style occlusion layer and/or a blur mask layer.

B15. The computing device of B14, wherein the executable instructions further cause the processor to:

removing, according to the de-occlusion level information, the sticker-style occlusion layer added to the corresponding face organ region in the video frame images contained in the second live video stream; and/or,

weakening, to a corresponding degree according to the de-occlusion level information, the blur mask layer added to the video frame images contained in the second live video stream.

B16. The computing device of B13, wherein the user behavior data includes one or more of:

user interaction behavior data, user viewing behavior data, user payment behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

B17. The computing device of B16, the executable instructions further cause the processor to:

receiving a de-occlusion request triggered through a de-occlusion payment entry in the viewing client, and receiving user payment behavior data;

verifying, according to the user payment behavior data, whether the payment is successful;

and if the payment is successful, performing de-occlusion processing on the second live video stream.

B18. The computing device of B17, wherein the de-occlusion payment entry comprises: a de-occlusion payment entry corresponding to each face organ region; the executable instructions further cause the processor to:

removing, from the video frame images contained in the second live video stream, the occlusion layer added to the face organ region corresponding to the triggered de-occlusion payment entry.

C19. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to:

receiving a first live video stream uploaded by an anchor client;

performing occlusion processing on the video frame images contained in the first live video stream to obtain an occluded second live video stream;

acquiring user behavior data of a viewing client, and performing de-occlusion processing on the second live video stream according to the user behavior data to obtain a third live video stream;

and pushing the third live video stream to the viewing client for the viewing client to render live video pictures.

C20. The computer storage medium of C19, the executable instructions further cause the processor to:

performing occlusion processing on the video frame images contained in the first live video stream according to face landmark data;

wherein the face landmark data is obtained by performing face recognition processing on the video frame images contained in the first live video stream.

C21. The computer storage medium of C19 or C20, the executable instructions further cause the processor to:

adding an occlusion layer to the face region, or to each face organ region, in the video frame images contained in the first live video stream.

C22. The computer storage medium of any of C19-C21, the executable instructions further cause the processor to:

determining de-occlusion level information according to the user behavior data, and performing de-occlusion processing on the second live video stream to a corresponding degree according to the de-occlusion level information.

C23. The computer storage medium of C22, wherein the occlusion layer comprises: a sticker-style occlusion layer and/or a blur mask layer.

C24. The computer storage medium of C23, wherein the executable instructions further cause the processor to:

removing, according to the de-occlusion level information, the sticker-style occlusion layer added to the corresponding face organ region in the video frame images contained in the second live video stream; and/or,

weakening, to a corresponding degree according to the de-occlusion level information, the blur mask layer added to the video frame images contained in the second live video stream.

C25. The computer storage medium of C22, wherein the user behavior data includes one or more of:

user interaction behavior data, user viewing behavior data, user payment behavior data, user tipping behavior data, user comment behavior data, user like behavior data, and user sharing behavior data.

C26. The computer storage medium of C25, the executable instructions further cause the processor to:

receiving a de-occlusion request triggered through a de-occlusion payment entry in the viewing client, and receiving user payment behavior data;

verifying, according to the user payment behavior data, whether the payment is successful;

and if the payment is successful, performing de-occlusion processing on the second live video stream.

C27. The computer storage medium of C26, wherein the de-occlusion payment entry comprises: a de-occlusion payment entry corresponding to each face organ region; the executable instructions further cause the processor to:

removing, from the video frame images contained in the second live video stream, the occlusion layer added to the face organ region corresponding to the triggered de-occlusion payment entry.
