Live broadcast data processing method, device, equipment and storage medium

Document No.: 107625  Publication date: 2021-10-15

Note: This technology, Live broadcast data processing method, device, equipment and storage medium, was designed and created by 徐冬博 and 蔡钦童 on 2021-07-02. Abstract: The embodiments of this application disclose a live data processing method, apparatus, device, and storage medium, relating to big-data-related data transmission in cloud technology. The method includes: displaying a target drawing task in a first live interactive interface of a virtual live broadcast room; in response to a drawing request for the target drawing task, displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room; sending the target object to the anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in a second live interactive interface; and displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute, the predicted object attribute being obtained by the anchor user predicting the object attribute of the target object. Through this application, the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room can be improved.

1. A live data processing method, comprising the following steps:

displaying a target drawing task in a first live interactive interface of a virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface;

in response to a drawing request for the target drawing task, displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

sending the target object to an anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live interactive interface;

displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

2. The method as recited in claim 1, wherein the number of target audience users is at least two, and the displaying, in the first live interactive interface, the target object drawn by the target audience users of the virtual live broadcast room in response to the drawing request for the target drawing task comprises:

in response to a drawing request for the target drawing task, drawing the target object in the first live interactive interface according to the drawing order of each of the at least two target audience users.

3. The method of claim 2, wherein the at least two target audience users comprise a first target audience user and a second target audience user; the drawing order of the first target audience user is prior to the drawing order of the second target audience user;

the drawing the target object in the first live interactive interface in response to the drawing request for the target drawing task and according to the drawing order of each of the at least two target audience users comprises:

in response to a drawing request for the target drawing task, displaying, in the first live interactive interface, first drawing content drawn by the first target audience user;

and when the time reaches the start drawing time of the second target audience user, adding second drawing content drawn by the second target audience user in the area where the first drawing content is located, to obtain the target object.

4. The method of claim 3, wherein the displaying, in the first live interactive interface, the first drawing content drawn by the first target audience user in response to a drawing request for the target drawing task comprises:

in response to a drawing request for the target drawing task, highlighting user information of the first target audience user in the first live interactive interface;

and when the time reaches the end drawing time of the first target audience user, canceling, in the first live interactive interface, the highlighting of the user information of the first target audience user, and displaying the first drawing content drawn by the first target audience user.

5. The method of any of claims 2-4, wherein the at least two target audience users are audience users of the virtual live broadcast room that establish a voice connection with the anchor user; the drawing order of each of the at least two target audience users is determined according to the time when the at least two target audience users establish the voice connection with the anchor user.

6. The method of claim 3, wherein the displaying, in the first live interactive interface, the first drawing content drawn by the first target audience user in response to a drawing request for the target drawing task comprises:

in response to a drawing request for the target drawing task, acquiring initial drawing content drawn by the first target audience user and first size information of a display interface of an audience terminal corresponding to the first target audience user;

acquiring second size information of a display interface of the audience terminal corresponding to the second target audience user;

adjusting the initial drawing content according to the first size information and the second size information to obtain first drawing content;

and displaying the first drawing content in the first live interactive interface.

7. The method of claim 6, wherein the adjusting the initial drawing content according to the first size information and the second size information to obtain the first drawing content comprises:

determining, according to the first size information and the second size information, a size ratio between a display interface of an audience terminal corresponding to the first target audience user and a display interface of an audience terminal corresponding to the second target audience user;

and adjusting the initial drawing content according to the size ratio to obtain a first drawing content.

8. The method of claim 3, wherein the adding second drawing content drawn by the second target audience user in the area where the first drawing content is located when the time reaches the start drawing time of the second target audience user, to obtain the target object, comprises:

when the time reaches the drawing starting time of the second target audience user, calling a drawing detection function to detect a display interface of an audience terminal corresponding to the second target audience user to obtain a drawing array;

calling an analysis function to analyze the drawing array to obtain drawing information; the drawing information comprises a drawing line segment, position information of the drawing line segment and color information;

and calling a drawing function to draw according to the drawing information to obtain the second drawing content, and adding the second drawing content in the area where the first drawing content is located to obtain the target object.

9. The method of any of claims 2-4, further comprising:

if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying the scores of all target audience users in the first live interactive interface; the scores of all target audience users are obtained by the remaining audience users in the virtual live broadcast room scoring the drawing content of each target audience user, wherein the remaining audience users refer to the audience users in the virtual live broadcast room other than the at least two target audience users.

10. The method of claim 9, wherein the method further comprises:

if the verification result of the predicted object attribute indicates that the predicted object attribute is valid, displaying a first resource package in a resource display area of the first live interactive interface; resources in the first resource package are for allocation to the anchor user;

if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying a second resource package in the resource display area of the first live interactive interface, and highlighting user information of a specified audience user in the second resource package; the specified audience user refers to the target audience user with the highest score among the at least two target audience users, and the resources in the second resource package are for allocation to the audience users in the virtual live broadcast room.

11. A live data processing method, comprising the following steps:

in response to an interaction request for a virtual live broadcast room, displaying at least two drawing tasks in an encrypted manner in a second live interactive interface of the virtual live broadcast room;

in response to a selection request for the at least two drawing tasks, sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live interactive interface;

displaying a target object in the second live interactive interface; the target object is drawn by the target audience user for the target drawing task;

in response to a prediction request for the object attribute of the target object, displaying a predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result for indicating the validity of the predicted object attribute.

12. The method of claim 11, wherein the displaying predicted object properties predicted by the anchor user in the second live interactive interface in response to a prediction request for object properties of the target object comprises:

in response to a prediction request for the object attribute of the target object, acquiring voice data input by the anchor user for predicting the object attribute of the target object;

performing semantic extraction on the voice data to obtain semantic information of the voice data;

and determining the predicted object attribute of the target object according to the semantic information of the voice data, and displaying the predicted object attribute in the second live broadcast interactive interface.

13. The method of claim 11, wherein displaying, in the first live interactive interface, a verification result indicating the validity of the predicted object attribute comprises:

acquiring standard object attributes of an object indicated by the target drawing task;

determining a degree of match between the standard object attribute and the predicted object attribute;

if the matching degree is greater than a matching threshold, displaying, in the first live interactive interface, a verification result indicating that the predicted object attribute is valid;

and if the matching degree is smaller than or equal to the matching threshold, displaying a verification result for indicating that the predicted object attribute is invalid in the first live interactive interface.

14. The method of claim 13, wherein said determining a degree of match between said standard object property and said predicted object property comprises:

performing feature extraction on the standard object attribute to obtain a feature vector of the standard object attribute;

performing feature extraction on the predicted object attribute to obtain a feature vector of the predicted object attribute;

and determining a distance between the feature vector of the standard object attribute and the feature vector of the predicted object attribute, and determining the matching degree between the standard object attribute and the predicted object attribute according to the distance.

15. A live data processing apparatus, comprising:

the first display module is used for displaying a target drawing task in a first live interactive interface of the virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface;

the second display module is used for responding to a drawing request for the target drawing task and displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

the sending module is used for sending the target object to an anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live interactive interface;

the third display module is used for displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

16. A live data processing apparatus, comprising:

the first display module is used for responding to an interaction request for a virtual live broadcast room and displaying, in an encrypted manner, at least two drawing tasks in a second live interactive interface of the virtual live broadcast room;

the sending module is used for responding to a selection request for the at least two drawing tasks and sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live interactive interface;

the second display module is used for displaying the target object in the second live broadcast interactive interface; the target object is drawn by the target audience user for the target drawing task;

and the third display module is used for responding to a prediction request for the object attribute of the target object, displaying the predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result indicating the validity of the predicted object attribute.

17. A computer device, comprising:

a processor and a memory;

the processor is coupled to the memory, wherein the memory is configured to store program code and the processor is configured to call the program code to perform the method of any of claims 1-14.

18. A computer-readable storage medium, in which a computer program is stored which is adapted to be loaded by a processor and to carry out the method according to any one of claims 1 to 14.

Technical Field

The present application relates to the field of data transmission in cloud technologies, and in particular, to a live data processing method, apparatus, device, and storage medium.

Background

With the development of internet technology, live webcasting has become more and more popular. In a live webcast, an anchor terminal transmits and publishes a data stream while collecting it, so that audience terminals can play the collected data stream through the internet; the data stream may be voice, video, images, and the like. At present, during a webcast, audience users can only present virtual gifts to the anchor user through their audience terminals to interact with the anchor user; this single form of interaction leads to a poor interaction effect in the virtual live broadcast room.

Disclosure of Invention

The technical problem to be solved by the embodiments of the present application is to provide a live data processing method, device, equipment and storage medium, which can effectively improve the diversity of live broadcast interaction and improve the interaction effect of a virtual live broadcast room.

An embodiment of the present application provides a live data processing method, including:

displaying a target drawing task in a first live interactive interface of a virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface;

in response to a drawing request for the target drawing task, displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

sending the target object to an anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live interactive interface;

displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

An embodiment of the present application provides a live data processing method, including:

in response to an interaction request for a virtual live broadcast room, displaying at least two drawing tasks in an encrypted manner in a second live interactive interface of the virtual live broadcast room;

in response to a selection request for the at least two drawing tasks, sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live interactive interface;

displaying a target object in the second live interactive interface; the target object is drawn by the target audience user for the target drawing task;

in response to a prediction request for the object attribute of the target object, displaying a predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result for indicating the validity of the predicted object attribute.

An aspect of an embodiment of the present application provides a live data processing apparatus, including:

the first display module is used for displaying a target drawing task in a first live interactive interface of the virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface;

the second display module is used for responding to a drawing request for the target drawing task and displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

the sending module is used for sending the target object to an anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live interactive interface;

the third display module is used for displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

An aspect of an embodiment of the present application provides a live data processing apparatus, including:

the first display module is used for responding to an interaction request for a virtual live broadcast room and displaying, in an encrypted manner, at least two drawing tasks in a second live interactive interface of the virtual live broadcast room;

the sending module is used for responding to a selection request for the at least two drawing tasks and sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live interactive interface;

the second display module is used for displaying the target object in the second live broadcast interactive interface; the target object is drawn by the target audience user for the target drawing task;

and the third display module is used for responding to a prediction request for the object attribute of the target object, displaying the predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result indicating the validity of the predicted object attribute.

One aspect of the present application provides a computer device, comprising: a processor and a memory;

wherein, the memory is used for storing computer programs, and the processor is used for calling the computer programs to execute the following steps:

displaying a target drawing task in a first live interactive interface of a virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface;

in response to a drawing request for the target drawing task, displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

sending the target object to an anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live interactive interface;

displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

Wherein, the memory is used for storing computer programs, and the processor is used for calling the computer programs to execute the following steps:

in response to an interaction request for a virtual live broadcast room, displaying at least two drawing tasks in an encrypted manner in a second live interactive interface of the virtual live broadcast room;

in response to a selection request for the at least two drawing tasks, sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live interactive interface;

displaying a target object in the second live interactive interface; the target object is drawn by the target audience user for the target drawing task;

in response to a prediction request for the object attribute of the target object, displaying a predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result for indicating the validity of the predicted object attribute.

An aspect of the embodiments of the present application provides a computer-readable storage medium, where a computer program is stored, where the computer program includes program instructions, and the program instructions, when executed by a processor, perform the following steps:

displaying a target drawing task in a first live interactive interface of a virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface;

in response to a drawing request for the target drawing task, displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

sending the target object to an anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live interactive interface;

displaying, in the first live interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

The computer program includes program instructions that, when executed by a processor, perform the steps of:

in response to an interaction request for a virtual live broadcast room, displaying at least two drawing tasks in an encrypted manner in a second live interactive interface of the virtual live broadcast room;

in response to a selection request for the at least two drawing tasks, sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live interactive interface;

displaying a target object in the second live interactive interface; the target object is drawn by the target audience user for the target drawing task;

in response to a prediction request for the object attribute of the target object, displaying a predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result for indicating the validity of the predicted object attribute.

In this application, the anchor terminal can display a plurality of drawing tasks in an encrypted manner in the second live interactive interface, and, in response to a selection operation on the plurality of drawing tasks, send the target drawing task selected by the anchor user to the audience terminal. The audience terminal can display the target drawing task selected by the anchor user in the first live interactive interface of the virtual live broadcast room, and, in response to a drawing request for the target drawing task, display the target object drawn by the target audience user of the virtual live broadcast room in the first live interactive interface. Further, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal can display the target object in the second live interactive interface, and the anchor user predicts the object attribute of the target object to obtain a predicted object attribute. The anchor terminal then verifies the validity of the predicted object attribute to obtain a verification result and sends the verification result to the audience terminal. After receiving the verification result, the audience terminal can display it in the first live interactive interface. Having the target audience user draw in the live interactive interface realizes interaction between the anchor user and the target audience user, enhances the participation of the target audience user, increases the target audience user's sense of closeness to the virtual live broadcast room, and makes the live broadcast more interesting. In addition, the drawing tasks selected by anchor users differ across virtual live broadcast rooms, so the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room can be improved.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is an architecture diagram of a live data processing system provided herein;

fig. 2a is a schematic view of a scene in which data interaction is performed between devices in a live data processing system provided in the present application;

fig. 2b is a schematic view of a scene in which data interaction is performed between devices in a live data processing system provided in the present application;

fig. 2c is a schematic view of a scene in which data interaction is performed between devices in a live data processing system provided in the present application;

fig. 3 is a flow diagram of a live data processing method provided in the present application;

FIG. 4 is a schematic illustration of a scenario in which user information of a target audience user is highlighted in a first live interactive interface provided by the present application;

FIG. 5 is a flow diagram of a method for obtaining initial rendered content according to the present application;

FIG. 6 is a flow diagram of a method for obtaining first rendered content according to the present application;

FIG. 7 is a schematic illustration of a scenario for scoring a rendered content for respective target audience users as provided herein;

fig. 8 is a schematic view of a scenario for sending a gift to a virtual live broadcast room according to a verification result provided in the present application;

fig. 9 is a flow diagram of a live data processing method provided in the present application;

fig. 10 is a schematic structural diagram of a live data processing apparatus according to an embodiment of the present application;

fig. 11 is a schematic structural diagram of a live data processing apparatus according to an embodiment of the present application;

fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

This application mainly relates to big data technology within cloud technology. Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like based on the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support: background services of technical network systems, such as video websites, picture websites, and other web portals, require a large amount of computing and storage resources. With the rapid development and application of the internet industry, each item may have its own identification mark that needs to be transmitted to a background system for logical processing, data at different levels are processed separately, and all kinds of industry data require strong system background support, which can only be realized through cloud computing.

Big data refers to data sets that cannot be captured, managed, and processed by conventional software tools within a certain time range; it is a massive, fast-growing, and diversified information asset that requires new processing modes to provide stronger decision-making power, insight discovery, and process optimization capabilities. With the advent of the cloud era, big data has attracted more and more attention, and big data requires special techniques to effectively process large amounts of data within a tolerable elapsed time. Technologies suitable for big data include massively parallel processing databases, data mining, distributed file systems, distributed databases, cloud computing platforms, the internet, and scalable storage systems.

In this application, big data technology is used to realize the interaction between the anchor user and the audience users, improving the diversity of live interaction and the interaction effect of the virtual live broadcast room.

In order to facilitate a clearer understanding of the present application, a live data processing system for implementing the live data processing method of the present application is first introduced, and as shown in fig. 1, the live data processing system includes a server 10, an anchor terminal 11, and a plurality of audience terminals 12.

The server 10 may refer to a backend device for providing a live broadcast service. For example, the server 10 may provide a live platform for users, and the live platform may be an application with a live function (such as a social application, a shopping application, or a short video application), a web page, an applet, an official account, and the like. The live platform allows users to publish live data, watch live data, download live data, and so on. The live data may specifically be at least one of voice data, video data, and the like.

The anchor terminal 11 may refer to a device used by an anchor user to publish live data, for example, the anchor terminal 11 may be used to shoot the anchor user to obtain video data, and publish the video data to a live platform; alternatively, the anchor terminal 11 may be configured to record the words spoken by the anchor user to obtain audio data, and distribute the audio data to the live platform. The viewer terminal 12 may refer to a device used by a viewer user to browse live data in a live platform. Meanwhile, the anchor terminal 11 and the audience terminal 12 can be used for realizing interaction between the anchor user and the audience user.

It is understood that the anchor terminal and the viewer terminal can both refer to devices with live processing capability, human-computer interaction capability and communication capability. The terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like, but is not limited thereto. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data and an artificial intelligence platform. Each user terminal and each server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.

It is to be understood that the target audience user in this application may refer to one or at least two audience users selected from the audience users of the virtual live broadcast room. For example, the target audience user may be an audience user who establishes a voice connection with the anchor user of the virtual live broadcast room, i.e., an audience user who is connected with the anchor. Alternatively, the target audience user may be an audience user whose attention to the anchor user is greater than an attention threshold, and so on. The attention of an audience user to the anchor user can be determined according to one or more of the time at which the audience user started following the anchor user, the duration of access to the anchor user's virtual live broadcast room, the duration of interaction with the anchor user, and the like.

It is understood that the drawing task in this application may refer to drawing an object, and the object may be a character, a building, a vehicle, an animal, a plant, equipment, and the like; the character may be a well-known figure such as a scientist, a celebrity, or a sports star, and the building may be a famous building. Vehicles may include automobiles, airplanes, buses, trucks, and rail cars, among others; animals include cats, dogs, birds, and so on; plants include flowers, trees, and so on; and equipment includes gaming equipment, outdoor equipment, and the like. Further, the object attribute of the object may include one or more of the name of the object, the category of the object, and the like, where the category of the object refers to the classification of the object; for example, if the object is a dog, the category of the object is animal; or, if the object is an airplane, the category of the object is vehicle.

For easy understanding, please refer to fig. 2a-2c, which are schematic views of a data interaction scenario provided in an embodiment of the present application.

As shown in fig. 2a, the anchor terminal may display a second live interactive interface 20, where the second live interactive interface 20 includes a drawing interaction option 21 (i.e., a drawing combination), and in response to a touch operation on the drawing interaction option, display the user information of each target audience user and the user information of the anchor user in the second live interactive interface 20; the user information here may include the user's name, avatar, account number, and the like. As shown in fig. 2a, there are four target audience users, namely audience user 1 to audience user 4; each target audience user is located below the anchor user, and the avatar of each target audience user and the avatar of the anchor user are displayed in the second live interactive interface 20. Meanwhile, the anchor terminal may display the plurality of drawing tasks in the second live interactive interface in an encrypted manner; as shown in fig. 2a, each drawing task is packaged as a card, and each card is displayed between the anchor user and the target audience users. The anchor terminal can start timing: if the anchor user selects one drawing task from the plurality of drawing tasks within a time period (e.g., 15s), the selected drawing task is taken as the target drawing task and the interactive activity is started; if the anchor user does not select a drawing task within that time period, the interactive activity is ended.
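The selection window described above lends itself to a small timer-driven handler on the anchor terminal. The following TypeScript sketch is illustrative only: the type and function names, the callback shapes, and the 15 s default are assumptions, not part of the patent text.

    // Hypothetical sketch of the anchor-side card selection window (names are assumed).
    interface DrawingTask {
      id: string;
      objectName: string; // e.g. "airplane"; kept hidden from audience users
    }

    function startTaskSelection(
      tasks: DrawingTask[],
      onSelected: (task: DrawingTask) => void,
      onTimeout: () => void,
      windowMs: number = 15_000, // assumed 15 s selection window, as in the example above
    ): (taskId: string) => void {
      // If no card is tapped before the window elapses, the interactive activity ends.
      const timer = setTimeout(onTimeout, windowMs);

      // The returned handler is wired to the card tap event on the anchor terminal.
      return (taskId: string) => {
        const task = tasks.find((t) => t.id === taskId);
        if (!task) return; // ignore taps on unknown cards
        clearTimeout(timer);
        onSelected(task); // this task becomes the target drawing task
      };
    }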

When the anchor user selects the target drawing task, the anchor terminal can send the target drawing task to the server, the server sends the target drawing task to the audience terminals, and the audience terminals corresponding to the remaining audience users in the virtual live broadcast room can display the target drawing task in an encrypted manner; the remaining audience users refer to the audience users in the virtual live broadcast room other than the target audience users. The audience terminal corresponding to a target audience user can display the target drawing task in the first live interactive interface 22; as shown in fig. 2b, task prompt information and the topic of the current round (an airplane) are displayed in the first live interactive interface 22. The target drawing task is to draw an airplane; the task prompt information is used to prompt each target audience user to draw a part of the airplane, and when the anchor user correctly guesses the object attribute of the target object, the live platform sends a gift to the anchor user; the target object refers to the drawing result of the target audience users for the target drawing task. The first live interactive interface 22 also displays the user information of each target audience user and the user information of the anchor user. As shown in fig. 2b, there are four target audience users, namely audience user 1 to audience user 4; each target audience user is located below the anchor user, and the first live interactive interface 22 displays the avatar of each target audience user and the avatar of the anchor user.

As in fig. 2b, the server may determine the drawing order of each target audience user according to the live attributes of each target audience user; the live attributes include the target audience user's attention to the anchor user, the time at which the target audience user establishes a voice connection with the anchor user, and so on. The server may send the drawing order and the drawing time of each target audience user to the audience terminal corresponding to each target audience user, and the audience terminal corresponding to each target audience user may display the drawing order of each target audience user in the first live interactive interface 22. The drawing time of each target audience user can be determined by the server according to the complexity of the target drawing task; for example, in the current round of interactive activity, the drawing time of each target audience user can be 30s.
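As a minimal sketch of how the server might derive the per-user drawing time from task complexity, consider the following; the complexity scale, the scaling factors, and the 30 s baseline are assumptions made for illustration.

    // Hypothetical: derive each target audience user's drawing time from task complexity.
    type Complexity = 'simple' | 'medium' | 'complex';

    function drawingTimeMs(complexity: Complexity): number {
      const baseMs = 30_000; // 30 s per user, as in the example round above
      const factor: Record<Complexity, number> = { simple: 1, medium: 1.5, complex: 2 };
      return baseMs * factor[complexity];
    }

    // The server would then push the drawing order together with this duration to the
    // audience terminal of every target audience user, e.g.:
    //   send({ order: ['user1', 'user2', 'user3', 'user4'], perUserMs: drawingTimeMs('medium') });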

When the audience terminal corresponding to each target audience user receives the drawing order and the drawing time, the target audience users may draw the target object in sequence in the first live interactive interface 22 based on the drawing order and the drawing time. As shown in fig. 2b, the drawing order of the target audience users, from first to last, is: audience user 1, audience user 2, audience user 3, audience user 4; the user information of a target audience user is highlighted in the first live interactive interface 22 while that target audience user is drawing. As in fig. 2b, the avatar frame of audience user 4 is shown in bold in the first live interactive interface 22, indicating that audience user 4 is drawing.

When the audience terminal obtains the target object, the target object may be sent to the anchor terminal, the anchor terminal may display the target object in the second live interactive interface 20, and the answer-guessing stage of the anchor user begins. The anchor user guesses the object attribute of the target object to obtain a predicted object attribute, and the anchor terminal verifies the validity of the predicted object attribute to obtain a verification result. If the verification result indicates that the predicted object attribute is invalid, the anchor user has not guessed the answer; if the verification result indicates that the predicted object attribute is valid, the anchor user has guessed the answer. As shown in fig. 2c, if the predicted object attribute given by the anchor user is an airplane, indicating that the anchor user has guessed the answer, a verification result indicating that the predicted object attribute is valid may be displayed in the second live interactive interface 20, that is, the verification result is: congratulations to the anchor user for answering correctly.
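A minimal sketch of the validity check is shown below, assuming a plain normalized string comparison between the anchor user's guess and the task's standard answer; the patent also contemplates feature-vector matching (see claim 14), which is not reproduced here.

    // Hypothetical validity check of the predicted object attribute against the standard answer.
    function verifyPrediction(predicted: string, standard: string): boolean {
      const norm = (s: string) => s.trim().toLowerCase();
      return norm(predicted) === norm(standard);
    }

    // Example: verifyPrediction('Airplane', 'airplane') returns true, so a verification
    // result indicating that the predicted object attribute is valid would be displayed.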

Further, the server may send a gift to users in the virtual live broadcast room according to the verification result. For example, if the anchor user guesses the wrong answer, the next round of interaction may be entered directly; after multiple rounds (e.g., 3 rounds) of interaction, the remaining audience users may score the drawing content of each target audience user to obtain the score of each target audience user, and a gift is sent to the audience users in the virtual live broadcast room in the name of the target audience user with the highest score. If the anchor user guesses the answer, a gift may be sent to the anchor user of the virtual live broadcast room.
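The reward routing just described can be summarized in a short sketch; the data shapes below (RoundResult, the score map) are assumptions for illustration.

    // Hypothetical post-round reward routing based on the verification result and scores.
    interface RoundResult {
      predictionValid: boolean;        // whether the anchor user guessed the answer
      scores: Map<string, number>;     // target audience user id -> score from remaining users
    }

    function chooseGiftRecipient(result: RoundResult, anchorId: string): string {
      if (result.predictionValid) return anchorId; // gift goes to the anchor user
      // Otherwise the gift is sent in the name of the highest-scoring target audience user.
      let bestId = '';                 // stays empty if no scores were recorded
      let bestScore = -Infinity;
      for (const [userId, score] of result.scores) {
        if (score > bestScore) { bestScore = score; bestId = userId; }
      }
      return bestId;
    }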

Therefore, having the target audience users draw in the live interactive interface realizes interaction between the anchor user and the target audience users, enhances the participation of the target audience users, and increases the target audience users' sense of closeness to the virtual live broadcast room. In addition, the drawing tasks selected by anchor users differ across virtual live broadcast rooms, so the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room can be improved.

Further, please refer to fig. 3, which is a flowchart illustrating a live data processing method according to an embodiment of the present application. The method may be performed by the audience terminal, or may be performed by both the audience terminal and the server in fig. 1, which is not limited in this application; in fig. 3, the method is illustrated as being performed by the audience terminal. The method may at least comprise the following S101-S104:

S101, displaying a target drawing task in a first live interactive interface of a virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live interactive interface.

In this application, the second live interactive interface includes a drawing interaction option. When the anchor user wants to interact with audience users, a touch operation can be performed on the drawing interaction option in the second live interactive interface; correspondingly, the anchor terminal detects the touch operation on the drawing interaction option and can display at least two drawing tasks in an encrypted manner in the second live interactive interface. The anchor user can randomly select one drawing task from the at least two drawing tasks, and the selected drawing task may be called the target drawing task; correspondingly, the anchor terminal can send the target drawing task to the audience terminal, and the audience terminal can display the target drawing task in the first live interactive interface of the virtual live broadcast room.

S102, in response to the drawing request for the target drawing task, displaying, in the first live interactive interface, a target object drawn by a target audience user of the virtual live broadcast room.

The first live interactive interface can include a drawing start option, and the target audience user can perform a touch operation on the drawing start option to initiate a drawing request for the target drawing task; alternatively, the target audience user may perform a touch operation on any area of the first live interactive interface to initiate a drawing request for the target drawing task. Accordingly, the audience terminal can respond to the drawing request for the target drawing task and display, in the first live interactive interface, the target object drawn by the target audience user of the virtual live broadcast room. It is understood that the number of target audience users may be one or at least two. When the number of target audience users is one, the target object may refer to the drawing content drawn by that target audience user. When the number of target audience users is at least two, the target object may be obtained by combining the drawing contents of each of the at least two target audience users, that is, each target audience user draws a part of the target object; for example, if the target object is an airplane, a first target audience user may draw a wing of the airplane, a second target audience user may draw the fuselage of the airplane, and so on. Alternatively, when the number of target audience users is at least two, the drawing content of each target audience user may each be referred to as a target object.
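Since each target audience user contributes only part of the target object, the target object can be represented as the union of the per-user drawing contents. The sketch below assumes a stroke-based representation; the Stroke and DrawnContent shapes are illustrative, not taken from the patent.

    // Hypothetical composition of the target object from each user's drawing content.
    interface Stroke {
      points: Array<[number, number]>; // polyline in interface coordinates
      color: string;
    }

    interface DrawnContent {
      userId: string;
      strokes: Stroke[];
    }

    // Layers are appended in drawing order, so later content is added in the area
    // where the earlier content is located.
    function composeTargetObject(layers: DrawnContent[]): Stroke[] {
      return layers.flatMap((layer) => layer.strokes);
    }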

Alternatively, when the number of target audience users is at least two, step S102 may include the following step S11.

s11, in response to the drawing request for the target drawing task, drawing the target object in the first live interactive interface according to the drawing order of each of the at least two target audience users.

In step s11, when the number of target audience users is at least two, the target drawing task may be completed jointly by the at least two target audience users, i.e., each target audience user draws a portion of the target object. Specifically, the audience terminal may respond to the drawing request for the target drawing task and draw the target object in the first live interactive interface in sequence, according to the drawing order of each of the at least two target audience users. By controlling each target audience user to draw for the target drawing task according to the drawing order of each of the at least two target audience users, the problem that the target drawing task cannot be completed due to repeated drawing content among target audience users can be effectively avoided, and the integrity of the target object can be effectively ensured; meanwhile, the drawing processing pressure on the audience terminal can be reduced.

Understandably, the at least two target audience users may refer to audience users of the virtual live broadcast room who establish a voice connection with the anchor user; the drawing order of each of the at least two target audience users is determined according to the time when each of the at least two target audience users establishes the voice connection with the anchor user. For example, if the at least two target audience users include a first target audience user and a second target audience user, and the time when the first target audience user establishes the voice connection with the anchor user is earlier than the time when the second target audience user establishes the voice connection with the anchor user, the drawing order of the first target audience user may be before the drawing order of the second target audience user.
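A minimal ordering rule based on the voice-connection time could look like the following; the field and function names are assumptions.

    // Hypothetical: an earlier voice connection with the anchor user draws earlier.
    interface TargetAudienceUser {
      userId: string;
      voiceConnectedAtMs: number; // time the voice connection with the anchor was established
    }

    function drawingOrder(users: TargetAudienceUser[]): string[] {
      return [...users]
        .sort((a, b) => a.voiceConnectedAtMs - b.voiceConnectedAtMs)
        .map((u) => u.userId);
    }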

Understandably, the at least two target audience users may refer to audience users of the virtual live broadcast room whose attention to the anchor user is greater than an attention threshold; the drawing order of each of the at least two target audience users is determined according to the attention of the at least two target audience users to the anchor user. For example, if the at least two target audience users include a first target audience user and a second target audience user, and the attention of the first target audience user to the anchor user is greater than the attention of the second target audience user to the anchor user, the drawing order of the first target audience user may be before the drawing order of the second target audience user.

It should be noted that, if the drawing order of the target audience users were determined by the audience terminals, the drawing orders determined by different audience terminals might be inconsistent, so that the interaction between the anchor user and the target audience users could not proceed normally. Therefore, the drawing order of the target audience users may be determined by the anchor terminal; of course, the drawing order of the target audience users may also be determined by a server of the live platform, which is not limited in this application. Determining the drawing order of each target audience user through the anchor terminal or the server ensures the uniqueness of the drawing order of each target audience user, so that the interaction between the anchor user and the target audience users can be executed normally.

Optionally, the target drawing task includes the target object to be drawn, as well as the drawing task and drawing order of each target audience user; in response to the drawing request for the target drawing task, the target object is drawn in the first live interactive interface based on the drawing task of each target audience user, according to the drawing order of each of the at least two target audience users.

Optionally, the at least two target audience users include a first target audience user and a second target audience user; the drawing order of the first target audience user is prior to the drawing order of the second target audience user; step s11 may include the following steps s21-s22.

s21, in response to the drawing request for the target drawing task, displaying a first drawn content drawn by the first target audience user in the first live interactive interface.

s22, when the time reaches the starting drawing time of the second target audience user, adding the second drawing content drawn by the second target audience user in the area of the first drawing content to obtain the target object.

In steps s21-s22, when the drawing order of the first target audience user is before the drawing order of the second target audience user, the audience terminal may control the first target audience user and the second target audience user to draw in sequence by means of timing. Specifically, in response to a drawing request for the target drawing task, timing is started and the first drawing content drawn by the first target audience user is displayed in the first live interactive interface; at this time, the second target audience user is not allowed to draw in the first live interactive interface. When the time reaches the end drawing time of the first target audience user, i.e., when the time reaches the start drawing time of the second target audience user, the second drawing content drawn by the second target audience user is added in the area where the first drawing content is located to obtain the target object; at this point, the first target audience user is no longer allowed to draw in the first live interactive interface. By controlling each target audience user, through timing, to draw for the target drawing task according to the drawing order, target audience users can be effectively prevented from waiting for a long time for other target audience users to finish drawing, and the efficiency of drawing the target object is improved.
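The timing-based turn control can be sketched as a set of per-user drawing windows derived from the drawing order and the per-user drawing time; the helper names below are assumptions.

    // Hypothetical turn scheduler: each target audience user may draw only inside
    // their own [start, end) window.
    function turnWindows(
      order: string[],
      perUserMs: number,
      startMs: number,
    ): Map<string, [number, number]> {
      const windows = new Map<string, [number, number]>();
      order.forEach((userId, i) => {
        const begin = startMs + i * perUserMs;           // start drawing time
        windows.set(userId, [begin, begin + perUserMs]); // end drawing time = next user's start
      });
      return windows;
    }

    function mayDraw(userId: string, nowMs: number, windows: Map<string, [number, number]>): boolean {
      const w = windows.get(userId);
      return w !== undefined && nowMs >= w[0] && nowMs < w[1];
    }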

Alternatively, step s21 may include the following steps s31-s32.

s31, highlighting user information of the first target audience user in the first live interactive interface in response to the draw request for the target drawing task.

s32, when the time reaches the end drawing time of the first target audience user, canceling the highlighting of the user information of the first target audience user and displaying the first drawing content drawn by the first target audience user in the first live interactive interface.

In steps s31-s32, during the drawing process of each target audience user, highlighting the user information of that target audience user may indicate that the target audience user is drawing. The user information may include at least one of the user's name, account number, avatar, etc., and the highlighting may be bold display, enlarged display, highlight display, dynamic display, and the like. Specifically, when the first target audience user is drawing, the audience terminal may highlight, in the first live interactive interface, the user information of the first target audience user in response to the drawing request for the target drawing task, so as to indicate that the first target audience user is drawing. When the time reaches the end drawing time of the first target audience user, the user information of the first target audience user is no longer highlighted in the first live interactive interface, indicating that the drawing of the first target audience user is completed, and the first drawing content drawn by the first target audience user is displayed in the first live interactive interface. By highlighting the user information of the first target audience user while the first target audience user is drawing, other audience users can clearly identify the drawing content of the first target audience user; moreover, the first target audience user can be notified when to start drawing and when drawing ends, so that the first target audience user can effectively grasp the drawing timing.

For example, as shown in fig. 4, when the time reaches the drawing time of audience user 4, the audience terminal may bold the avatar frame of audience user 4 in the first live interactive interface 22 and display prompt information indicating that audience user 4 is drawing, so that other audience users can perceive that a target audience user is currently drawing. When audience user 4 starts to draw, a countdown is started; when the time reaches the end drawing time of that audience user, the bolding of the avatar frame of audience user 4 can be canceled in the first live interactive interface 22, that is, the user information of audience user 4 is no longer highlighted in the first live interactive interface 22.

Alternatively, step s21 may include steps s 41-s 44 as follows.

s41, in response to the drawing request for the target drawing task, obtaining the initial drawing content drawn by the first target audience user and the first size information of the display interface of the audience terminal corresponding to the first target audience user.

And s42, acquiring second size information of the display interface of the audience terminal corresponding to the second target audience user.

s43, adjusting the initial drawing content according to the first size information and the second size information to obtain a first drawing content.

s44, displaying the first rendered content in the first live interactive interface.

In steps s41-s44, since the display interfaces of the audience terminals corresponding to different target audience users differ in size, the drawing content transmitted from another audience terminal needs to be converted before the audience terminal displays it. Specifically, in response to the drawing request for the target drawing task, the audience terminal may obtain the initial drawing content drawn by the first target audience user and the first size information of the display interface of the audience terminal corresponding to the first target audience user. Further, second size information of the display interface of the audience terminal corresponding to the second target audience user is obtained, and the size of the initial drawing content may then be adjusted according to the first size information and the second size information to obtain first drawing content suitable for display on the audience terminal corresponding to the second target audience user. Adjusting the size of the initial drawing content according to the first size information and the second size information improves the clarity of the displayed drawing content.

Alternatively, step s43 may include steps s 51-s 52 as follows.

s51, determining, according to the first size information and the second size information, a size ratio between a display interface of the audience terminal corresponding to the first target audience user and a display interface of the audience terminal corresponding to the second target audience user.

s52, adjusting the initial drawing content according to the size ratio to obtain a first drawing content.

In steps s51-s52, the audience terminal may determine, according to the first size information and the second size information, the size ratio between the display interface of the audience terminal corresponding to the first target audience user and the display interface of the audience terminal corresponding to the second target audience user. If the size ratio is larger than 1, the initial drawing content is reduced according to the size ratio to obtain the first drawing content; if the size ratio is less than 1, the initial drawing content is enlarged according to the size ratio to obtain the first drawing content; if the size ratio is equal to 1, the initial drawing content is taken as the first drawing content. Adjusting the initial drawing content according to the size ratio improves the clarity of the displayed drawing content.
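As a concrete illustration of the ratio-based adjustment, the following Swift sketch scales drawn points from the sender's display size to the receiver's display size. It assumes the drawing content is represented as CGPoint coordinates and uses a width-based ratio; both are simplifications for illustration rather than requirements of this application.

```swift
import CoreGraphics

/// Illustrative sketch: scale drawing points from the sender's display size
/// to the receiver's display size. The ratio is width-based here; a real
/// implementation might scale width and height independently.
func adjustDrawingPoints(_ points: [CGPoint],
                         senderSize: CGSize,
                         receiverSize: CGSize) -> [CGPoint] {
    guard senderSize.width > 0, receiverSize.width > 0 else { return points }
    let ratio = senderSize.width / receiverSize.width
    if ratio == 1 {
        return points                       // same size: use the content as-is
    }
    // ratio > 1: the sender's interface is larger, so the content is reduced;
    // ratio < 1: the sender's interface is smaller, so the content is enlarged.
    return points.map { CGPoint(x: $0.x / ratio, y: $0.y / ratio) }
}
```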

Alternatively, step s22 may include steps s 61-s 63 as follows.

s61, when the time reaches the drawing starting time of the second target audience user, calling a drawing detection function to detect the display interface of the audience terminal corresponding to the second target audience user, and obtaining a drawing array.

s62, calling an analysis function to analyze the drawing array to obtain drawing information; the drawing information includes a drawing line segment, position information of the drawing line segment, and color information.

s63, calling a drawing function to draw according to the drawing information to obtain the second drawing content, and adding the second drawing content in the area where the first drawing content is located to obtain the target object.

In steps s61-s63, the audience terminal may call functions to draw the target object. Specifically, when the time reaches the drawing start time of the second target audience user, the audience terminal may call a drawing detection function to detect the display interface of the audience terminal corresponding to the second target audience user, and when a touch operation on that audience terminal is detected, the touch information is added to the drawing array. Here, the touch operation may be one or more of a click operation, a slide operation, a press operation, and the like; the touch information includes touch position information, touch trajectory, and so on. Furthermore, an analysis function can be called to analyze the drawing array to obtain drawing information, that is, the touch information in the drawing array is converted into drawing information. Then, a drawing function can be called to draw according to the drawing information to obtain the second drawing content, and the second drawing content is added to the area where the first drawing content is located. Implementing the drawing of the target drawing task through these function calls can improve the drawing accuracy.
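The detect/parse/draw cycle can be illustrated with the following Swift sketch of a drawing canvas view. It is a minimal example built on the UIKit calls named in this embodiment (touchesBegan:withEvent:, setNeedsDisplay, and the draw callback); the DrawnSegment structure, line width, and other details are assumptions for illustration only.

```swift
import UIKit

/// Illustrative sketch of the detect/parse/draw cycle described above.
/// A line segment is stored as its points plus a color; details are assumptions.
struct DrawnSegment {
    var points: [CGPoint]
    var color: UIColor
}

final class DrawingCanvasView: UIView {
    private var segments: [DrawnSegment] = []   // the "drawing array"
    var currentColor: UIColor = .black

    // Drawing detection: start a new segment on the first touch.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        segments.append(DrawnSegment(points: [point], color: currentColor))
    }

    // Drawing detection: append touch positions while the finger moves.
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self),
              !segments.isEmpty else { return }
        segments[segments.count - 1].points.append(point)
        setNeedsDisplay()                       // schedules a callback to draw(_:)
    }

    // Drawing function: render every recorded segment.
    override func draw(_ rect: CGRect) {
        for segment in segments where segment.points.count > 1 {
            let path = UIBezierPath()
            path.move(to: segment.points[0])
            segment.points.dropFirst().forEach { path.addLine(to: $0) }
            segment.color.setStroke()
            path.lineWidth = 3
            path.stroke()
        }
    }
}
```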

For example, assume that the terminal corresponding to the first target audience user is the sending terminal and the terminal corresponding to the second target audience user is the receiving terminal. As shown in fig. 5, the specific process by which the sending terminal acquires the initial drawing content is as follows. First, the sending terminal creates a custom encapsulated Bezier curve UIBezierPath object in the drawing detection function (touchesBegan:withEvent:), records the related touch information in the UIBezierPath object, and adds the touch information to the drawing array. Then, when the drawing detection function detects a slide operation on the display interface of the sending terminal, indicating that the target audience user has started drawing, the information adding method can be called to add touch information to the drawing array. Meanwhile, the analysis function is called to analyze the drawing array to obtain drawing information, and a display configuration function (such as setNeedsDisplay) is used to call back the drawing function (drawRect:) to draw according to the drawing information. When the drawing detection function does not detect any touch operation on the display interface of the sending terminal within a period of time, the drawing is finished; the drawing information can then be packaged to obtain the initial drawing content, and the initial drawing content is sent to the receiving terminal through a WebSocket. WebSocket is a protocol for full-duplex communication over a single TCP connection; transmitting the initial drawing content through a WebSocket allows the sending terminal and the receiving terminal to establish a persistent connection and perform bidirectional data transmission after completing only one handshake, which makes data exchange between the sending terminal and the receiving terminal simpler and more convenient.

As shown in fig. 6, the receiving terminal receives the initial drawing content transmitted through the WebSocket and analyzes the initial drawing content to obtain the drawing information. The size information of the sending terminal and the size information of the receiving terminal are acquired, and the size ratio between the display interface of the sending terminal and the display interface of the receiving terminal is determined according to the two pieces of size information. The drawing information is then converted according to the size ratio to obtain converted drawing information, and drawing is performed in the first live interactive interface of the receiving terminal based on the converted drawing information to obtain the first drawing content.
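The transfer between the sending terminal and the receiving terminal can be sketched in Swift with URLSessionWebSocketTask as follows. The payload structure, the server URL, and the width-only scaling on receipt are assumptions made for illustration, not requirements of this application.

```swift
import Foundation
import CoreGraphics

/// Illustrative sketch: serialize the drawing information together with the
/// sender's display size, send it over a WebSocket, and scale it on receipt.
/// The payload shape and the server URL are assumptions.
struct DrawingPayload: Codable {
    var points: [[Double]]     // [[x, y], ...] of the drawn line segments
    var colorHex: String
    var senderWidth: Double
    var senderHeight: Double
}

let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/draw")!)
task.resume()

// Sender side: package the initial drawing content and transmit it.
func sendDrawing(_ payload: DrawingPayload) throws {
    let data = try JSONEncoder().encode(payload)
    task.send(.data(data)) { error in
        if let error = error { print("send failed: \(error)") }
    }
}

// Receiver side: parse the payload and convert it to the local display size.
func receiveDrawing(localWidth: Double, completion: @escaping ([CGPoint]) -> Void) {
    task.receive { result in
        guard case .success(.data(let data)) = result,
              let payload = try? JSONDecoder().decode(DrawingPayload.self, from: data),
              payload.senderWidth > 0, localWidth > 0 else { return }
        let ratio = payload.senderWidth / localWidth
        let converted = payload.points.map { CGPoint(x: $0[0] / ratio, y: $0[1] / ratio) }
        completion(converted)   // drawn in the first live interactive interface of the receiving terminal
    }
}
```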

S103, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in a second live broadcast interactive interface.

S104, displaying, in the first live interactive interface, a verification result for indicating the validity of the predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

In step S103 and step S104, after the audience terminal acquires the target object, the target object may be sent to the anchor terminal corresponding to the anchor user, and the anchor terminal may display the target object in the second live broadcast interactive interface. The anchor user can then predict the object attribute of the target object to obtain the predicted object attribute; correspondingly, the anchor terminal can display the predicted object attribute of the target object in the second live broadcast interactive interface and verify the validity of the predicted object attribute of the target object to obtain a verification result. Further, the verification result may be sent to the audience terminal; the verification result is used to indicate the validity of the predicted object attribute, that is, it indicates either that the predicted object attribute is valid or that it is invalid. A verification result indicating that the predicted object attribute is valid means that the anchor user predicted the object attribute of the target object correctly; a verification result indicating that the predicted object attribute is invalid means that the anchor user predicted the object attribute of the target object incorrectly. Upon receiving the verification result for the predicted object attribute, the audience terminal may display, in the first live interactive interface, the verification result for indicating the validity of the predicted object attribute.

Optionally, the method may further include: if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying the score of each target audience user in the first live interactive interface; the scores of the target audience users are obtained by the remaining audience users in the virtual live broadcast room scoring the drawn content of each target audience user, where the remaining audience users refer to the audience users in the virtual live broadcast room other than the at least two target audience users.

If the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, it indicates that the anchor user predicted the object attribute of the target object incorrectly, i.e. the anchor user guessed the object attribute of the target object wrong; the remaining audience users in the virtual live broadcast room can then score the drawn content of each target audience user to obtain the score of each target audience user. The audience terminal can display the score of each target audience user in the first live interactive interface, where the score of a target audience user reflects the accuracy of that user's drawn content: the higher the score, the higher the accuracy of the drawn content, and conversely, the lower the score, the lower the accuracy of the drawn content. Having the remaining audience users score the drawn content of the target audience users allows more audience users to participate in the interaction with the anchor user, thereby improving the interaction effect of the virtual live broadcast room.

For example, as shown in fig. 7, the audience terminal may display the voting options of the respective target audience users in the first live interactive interface 22, and the audience users in the virtual live room may vote for (i.e., score) the respective target audience users by performing a touch operation on their voting options. Voting prompt information for encouraging the audience users to actively participate in voting for the best drawing audience is also displayed in the first live interactive interface 22. Each audience user can vote for only one target audience user: once an audience user has voted for one target audience user, the voting options of the other target audience users are locked and that audience user is prohibited from voting for them. The server may also set a voting time, such as 30s; when the voting time reaches 30s, the voting options of each target audience user are locked and voting stops. As shown in fig. 7, after voting ends, the number of votes of each target audience user is counted, and the score of each target audience user is determined according to the number of votes received. As shown in fig. 7, audience users 1 to 4 receive 50, 80, 200 and 70 votes respectively, so audience user 3 has the most votes, i.e. the highest score, and audience user 3 may be designated the best drawing audience.
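A minimal Swift sketch of the vote tallying, using the sample numbers above; the one-vote-per-user restriction is assumed to be enforced elsewhere (e.g. by locking the voting options as described), and the function name is illustrative.

```swift
/// Illustrative sketch: tally the votes and pick the best drawing audience.
func tallyVotes(_ votes: [String: Int]) -> (winner: String, count: Int)? {
    votes.max { $0.value < $1.value }.map { ($0.key, $0.value) }
}

let votes = ["audience user 1": 50, "audience user 2": 80,
             "audience user 3": 200, "audience user 4": 70]
if let result = tallyVotes(votes) {
    print("\(result.winner) wins with \(result.count) votes")  // audience user 3 wins with 200 votes
}
```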

Optionally, the method may comprise the following steps s 71-s 72.

s71, if the verification result of the predicted object attribute indicates that the predicted object attribute is valid, displaying a first resource package in a resource display area of the first live interaction interface; resources in the first resource package are for allocation to the anchor user.

s72, if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying a second resource package in the resource display area of the first live interaction interface, and displaying user information of a specified audience user in the second resource package in a highlighted manner; the specified audience user refers to the target audience user with the highest score in the at least two target audience users, and the resources in the second resource packet are used for being distributed to the audience users in the virtual live broadcast room.

In steps s71-s72, if the verification result of the predicted object attribute indicates that the predicted object attribute is valid, it indicates that the anchor user predicted the object attribute of the target object correctly, i.e. the anchor user guessed the object attribute of the target object right; therefore, the audience terminal can display the first resource package in the resource display area of the first live interactive interface. If the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, it indicates that the anchor user did not predict the object attribute of the target object correctly, i.e. the anchor user guessed the object attribute of the target object wrong; therefore, the audience terminal can display the second resource package in the resource display area of the first live interactive interface and highlight the user information of the specified audience user in the second resource package, that is, the resources in the second resource package are distributed to the audience users of the virtual live broadcast room in the name of the specified audience user. Sending the gift either to the anchor user or to the audience users in the virtual live broadcast room according to the verification result increases the interest of the live interaction and mobilizes the enthusiasm of the target audience users for participating in the interaction.

For example, as shown in fig. 8, assume that the score of audience user 3 is the highest. When the verification result of the predicted object attribute indicates that the predicted object attribute is valid, indicating that the anchor user predicted the object attribute of the target object correctly, the first resource package 23 may be displayed in the first live interactive interface 22 together with prompt information that the anchor user answered correctly. If the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, indicating that the anchor user predicted the object attribute of the target object incorrectly, the second resource package 24 is displayed in the first live interactive interface 22, and the user information of audience user 3 is highlighted in the second resource package 24 (namely, a prompt that audience user 3 obtains the reward and the gifts are distributed in the live room). The resources in the first resource package 23 are allocated to the anchor user, and the resources in the second resource package are allocated to the audience users in the virtual live broadcast room; for example, if the second resource package contains 300 gifts, the audience users in the virtual live broadcast room can claim the gifts in the second resource package until they are completely claimed, after which the second resource package is removed from the first live interactive interface 22. The display forms of the first resource package and the second resource package in the first live interactive interface 22 may be the same or different, and the resources they contain may be the same or different; in both cases the resources come from the live platform. A resource may be referred to as a gift and may specifically be virtual currency (e.g., game coins), shopping tickets, electronic currency, game equipment, products (e.g., washing products, cosmetic products, daily necessities), and the like.
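The claim-until-empty behaviour of the second resource package can be sketched as follows; the class name and the gift count of 300 simply mirror the example above and are illustrative assumptions.

```swift
/// Illustrative sketch: a resource package that audience users claim from
/// until it is empty, after which it is removed from the interface.
final class ResourcePackage {
    private(set) var remainingGifts: Int
    init(giftCount: Int) { remainingGifts = giftCount }

    /// Returns true if a gift was handed out; false once the package is empty.
    func claim() -> Bool {
        guard remainingGifts > 0 else { return false }
        remainingGifts -= 1
        return true
    }
}

let secondPackage = ResourcePackage(giftCount: 300)
while secondPackage.claim() { /* distribute one gift to an audience user */ }
// remainingGifts == 0: remove the package from the first live interactive interface
```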

In the application, the anchor terminal can display a plurality of drawing tasks in the second live broadcast interactive interface in an encrypted manner and, in response to a selection operation on the plurality of drawing tasks, send the target drawing task selected by the anchor user to the audience terminal. The audience terminal can display the target drawing task selected by the anchor user in the first live broadcast interactive interface of the virtual live broadcast room and, in response to a drawing request for the target drawing task, display in the first live broadcast interactive interface the target object drawn by the target audience user of the virtual live broadcast room. Further, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal can display the target object in the second live broadcast interactive interface and the anchor user can predict the object attribute of the target object to obtain the predicted object attribute. The anchor terminal then verifies the validity of the predicted object attribute of the target object to obtain a verification result and sends the verification result to the audience terminal. After receiving the verification result, the audience terminal can display the verification result in the first live broadcast interactive interface. Having the target audience users draw in the live broadcast interaction interface realizes the interaction between the anchor user and the target audience users, enhances the participation of the target audience users, and increases their sense of closeness to the virtual live broadcast room. In addition, the drawing tasks selected by the anchor users of different virtual live broadcast rooms differ, so the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room is improved.

Further, please refer to fig. 9, which is a flowchart illustrating a live data processing method according to an embodiment of the present application. As shown in fig. 9, the method may be executed by the anchor terminal in fig. 1, or the method may be executed by both the anchor terminal and the server in fig. 1, which is not limited in this application, and the method is executed by the anchor terminal in fig. 9 as an example. Wherein the method may at least comprise the following S201-S204:

s201, responding to an interaction request aiming at the virtual live broadcast room, and displaying at least two drawing tasks in a second live broadcast interaction interface of the virtual live broadcast room in an encrypted mode.

In the application, the second live broadcast interactive interface includes an interaction option; the anchor user can perform a touch operation on the interaction option to initiate an interaction request for the virtual live broadcast room. Accordingly, the anchor terminal can respond to the interaction request for the virtual live broadcast room and display at least two drawing tasks in the second live broadcast interactive interface of the virtual live broadcast room in an encrypted manner. By displaying the plurality of drawing tasks in an encrypted manner, the anchor user cannot see the object attribute of the target object in advance.

It can be understood that each drawing task may include the standard object attribute of the object to be drawn, and displaying at least two drawing tasks in the second live interactive interface in an encrypted manner includes: hiding, in the second live interactive interface, the standard object attribute of the object to be drawn included in each drawing task; or packaging each drawing task into a card and displaying the card corresponding to each drawing task in the second live interactive interface.
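A minimal sketch of how a drawing task might be shown as a card with its standard object attribute hidden; the DrawingTask structure and the sample attributes are illustrative assumptions rather than part of this application.

```swift
/// Illustrative sketch: a drawing task whose standard object attribute is
/// hidden when shown to the anchor, e.g. as a card with only a generic title.
struct DrawingTask {
    let id: Int
    let standardObjectAttribute: String   // e.g. "rabbit"; not shown before selection

    /// What the second live interactive interface shows before selection.
    var cardTitle: String { "Drawing task #\(id)" }
}

let tasks = [DrawingTask(id: 1, standardObjectAttribute: "rabbit"),
             DrawingTask(id: 2, standardObjectAttribute: "car")]
tasks.forEach { print($0.cardTitle) }   // only the cards are displayed; attributes stay hidden
```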

S202, responding to a selection request for the at least two drawing tasks, and sending the target drawing task selected by the anchor user of the virtual live broadcast room to the audience terminal, so that the audience terminal displays the target drawing task in the first live broadcast interactive interface.

The anchor user can select any one drawing task from the at least two drawing tasks, and the selected drawing task may be referred to as the target drawing task. Correspondingly, the anchor terminal can send the target drawing task to the audience terminal, and the audience terminal can display the target drawing task in the first live broadcast interactive interface of the virtual live broadcast room, so that the target audience users can draw for the target drawing task to obtain the target object.

S203, displaying a target object in the second live broadcast interactive interface; the target object is drawn by the target audience user for the target drawing task.

S204, responding to a prediction request aiming at the object attribute of the target object, displaying the predicted object attribute predicted by the anchor user in the second live broadcast interactive interface, and displaying a verification result used for indicating the effectiveness of the predicted object attribute.

In step S203 and step S204, after the target audience user has drawn the target object, the audience terminal may send the target object to the anchor terminal, and the anchor terminal may display the target object in the second live broadcast interactive interface. Further, the anchor terminal can respond to a prediction request for the object attribute of the target object and display the predicted object attribute predicted by the anchor user in the second live broadcast interactive interface; then, the validity of the predicted object attribute may be verified to obtain a verification result indicating the validity of the predicted object attribute.

Optionally, the displaying, in response to the prediction request for the object attribute of the target object, the predicted object attribute predicted by the anchor user in the second live interactive interface includes the following steps s81 to s 83.

s81, in response to the prediction request for the object property of the target object, obtaining voice data input by the anchor user predicting the object property of the target object.

And s82, performing semantic extraction on the voice data to obtain semantic information of the voice data.

s83, determining the predicted object attribute of the target object according to the semantic information of the voice data, and displaying the predicted object attribute in the second live broadcast interactive interface.

In steps s81-s83, the anchor user may input the predicted object attribute of the target object by voice. Specifically, in response to the prediction request for the object attribute of the target object, the anchor terminal obtains the voice data input by the anchor user when predicting the object attribute of the target object, and performs semantic extraction on the voice data to obtain semantic information of the voice data. The semantic information of the voice data reflects the main content of the voice data, so the anchor terminal can determine the predicted object attribute of the target object according to the semantic information of the voice data and display the predicted object attribute in the second live broadcast interactive interface. The predicted object attribute may be in text format; allowing the anchor user to input the predicted object attribute by voice improves the convenience and efficiency of inputting the predicted object attribute.
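A minimal sketch of the last step of this flow, turning transcribed voice input into a predicted object attribute. A simple keyword match over candidate attributes stands in here for real semantic extraction; the transcript, the candidate list, and the function name are assumptions for illustration.

```swift
/// Illustrative sketch: derive a predicted object attribute from the transcribed
/// text of the anchor's voice input. Real semantic extraction would be richer;
/// a keyword match over candidate attributes stands in for it here.
func predictedAttribute(fromTranscript transcript: String,
                        candidates: [String]) -> String? {
    let lowered = transcript.lowercased()
    return candidates.first { lowered.contains($0.lowercased()) }
}

let transcript = "I think this drawing is a rabbit"   // assumed speech-to-text output
let candidates = ["rabbit", "car", "house"]
print(predictedAttribute(fromTranscript: transcript, candidates: candidates) ?? "no match")
```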

Optionally, the displaying, in the first live interactive interface, a verification result indicating validity of the predicted object attribute includes the following steps s84 to s 87.

s84, obtaining the standard object attribute of the object indicated by the target drawing task.

s85, determining the matching degree between the standard object attribute and the predicted object attribute.

s86, if the matching degree is larger than the matching threshold, displaying a verification result for indicating that the predicted object attribute is valid in the first live interactive interface.

s87, if the matching degree is less than or equal to the matching threshold, displaying a verification result for indicating that the predicted object attribute is invalid in the first live interactive interface.

In steps s 84-s 87, the anchor terminal may obtain the standard object attribute of the object indicated by the target drawing task, and calculate the matching degree between the standard object attribute and the predicted object attribute. If the matching degree is greater than the matching threshold, the predicted object attribute is the same as or similar to the standard object attribute, namely the object attribute of the target object is predicted correctly by the anchor user; therefore, the anchor terminal can display a verification result for indicating that the predicted object attribute is valid in the first live interactive interface. If the matching degree is smaller than or equal to the matching threshold, it is indicated that the difference between the predicted object attribute and the standard object attribute is relatively large, that is, the object attribute of the target object is predicted incorrectly by the anchor user, so that the anchor terminal can display a verification result for indicating that the predicted object attribute is invalid in the first live interaction interface. By calculating the matching degree between the standard object attribute and the predicted object attribute, the verification result is determined, and the accuracy of verifying the predicted object attribute can be improved.

Alternatively, step s85 may include steps s 88-s 90 as follows.

s88, extracting the characteristic of the standard object attribute to obtain the characteristic vector of the standard object attribute.

s89, extracting the characteristics of the predicted object attribute to obtain the characteristic vector of the predicted object attribute.

s90, determining the distance between the characteristic vector of the standard object attribute and the characteristic vector of the predicted object attribute, and determining the matching degree between the standard object attribute and the predicted object attribute according to the distance.

In steps s88-s90, the anchor terminal may perform feature extraction on the standard object attribute to obtain the feature vector of the standard object attribute, and perform feature extraction on the predicted object attribute to obtain the feature vector of the predicted object attribute. Further, a distance algorithm may be employed to calculate the distance between the feature vector of the standard object attribute and the feature vector of the predicted object attribute; the distance algorithm may be a Euclidean distance algorithm, a cosine distance algorithm, and so on. The matching degree between the standard object attribute and the predicted object attribute may then be determined according to the distance; the distance and the matching degree are inversely related, that is, the longer the distance, the larger the difference between the standard object attribute and the predicted object attribute and the lower the matching degree between them; conversely, the closer the distance, the smaller the difference between the standard object attribute and the predicted object attribute and the higher the matching degree between them.
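The matching-degree calculation can be sketched as follows. This sketch uses cosine similarity directly as the matching degree, so a higher value means a closer match (a distance-based variant would invert the relationship as described above); how the feature vectors are produced (for example by a text embedding model) and the sample values and threshold are assumptions.

```swift
import Foundation

/// Illustrative sketch: a cosine-similarity matching degree between the
/// standard and predicted object attribute feature vectors, compared with a
/// matching threshold to obtain the verification result.
func matchingDegree(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count && !a.isEmpty)
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let normA = sqrt(a.reduce(0) { $0 + $1 * $1 })
    let normB = sqrt(b.reduce(0) { $0 + $1 * $1 })
    guard normA > 0, normB > 0 else { return 0 }
    return dot / (normA * normB)
}

let standardVector = [0.8, 0.1, 0.3]     // feature vector of the standard object attribute (assumed)
let predictedVector = [0.7, 0.2, 0.3]    // feature vector of the predicted object attribute (assumed)
let matchThreshold = 0.9

let degree = matchingDegree(standardVector, predictedVector)
let verificationResult = degree > matchThreshold ? "valid" : "invalid"
print("matching degree \(degree): predicted object attribute is \(verificationResult)")
```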

It should be noted that, in order to ensure the uniqueness of the verification result, the process of verifying the validity of the predicted object attribute according to the standard object attribute may be performed by the anchor terminal; of course, this process may also be performed by a server of the live platform, which is not limited in this application, and the description here takes the anchor terminal as an example.

In the application, the anchor terminal can display a plurality of drawing tasks in the second live broadcast interactive interface in an encrypted manner and, in response to a selection operation on the plurality of drawing tasks, send the target drawing task selected by the anchor user to the audience terminal. The audience terminal can display the target drawing task selected by the anchor user in the first live broadcast interactive interface of the virtual live broadcast room and, in response to a drawing request for the target drawing task, display in the first live broadcast interactive interface the target object drawn by the target audience user of the virtual live broadcast room. Further, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal can display the target object in the second live broadcast interactive interface and the anchor user can predict the object attribute of the target object to obtain the predicted object attribute. The anchor terminal then verifies the validity of the predicted object attribute of the target object to obtain a verification result and sends the verification result to the audience terminal. After receiving the verification result, the audience terminal can display the verification result in the first live broadcast interactive interface. Having the target audience users draw in the live broadcast interaction interface realizes the interaction between the anchor user and the target audience users, enhances the participation of the target audience users, and increases their sense of closeness to the virtual live broadcast room. In addition, the drawing tasks selected by the anchor users of different virtual live broadcast rooms differ, so the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room is improved.

Fig. 10 is a schematic structural diagram of a live data processing apparatus 1 according to an embodiment of the present application. The live data processing apparatus 1 may be a computer program (including program code) running in a computer device, for example, the live data processing apparatus 1 is an application software; the apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. As shown in fig. 10, the live data processing apparatus 1 may include: a first display module 801, a second display module 802, a sending module 803, and a third display module 804.

The first display module is used for displaying a target drawing task in a first live broadcast interactive interface of the virtual live broadcast room; the target drawing task is selected from at least two drawing tasks displayed in a second live broadcast interactive interface in an encrypted manner by a main broadcast user of the virtual live broadcast room;

the second display module is used for responding to a drawing request aiming at the target drawing task and displaying a target object drawn by a target audience user in the virtual live broadcast room in the first live broadcast interactive interface;

the sending module is used for sending the target object to an anchor terminal corresponding to the anchor user so that the anchor terminal can display the target object in a second live broadcast interaction interface;

the third display module is used for displaying a verification result used for indicating the effectiveness of the attribute of the predicted object in the first direct-broadcasting interactive interface; the predicted object attribute is obtained by predicting the object attribute of the target object by the anchor user.

Optionally, the number of the target audience users is at least two, and the second display module, in response to a drawing request for the target drawing task, displays a target object drawn by the target audience user in the virtual live broadcast room in the first live broadcast interactive interface, including:

and responding to a drawing request aiming at the target drawing task, and drawing the target object in the first direct-playing interactive interface according to the drawing sequence of each target audience user in the at least two target audience users.

Optionally, the at least two target audience users include a first target audience user and a second target audience user; the drawing order of the first target audience user is prior to the drawing order of the second target audience user;

the second display module, in response to a drawing request for the target drawing task, draws the target object in the first direct-play interactive interface according to a drawing order of each of the at least two target audience users, and includes:

responding to a drawing request aiming at the target drawing task, and displaying first drawing content drawn by the first target audience user in the first direct-playing interactive interface;

and when the time reaches the drawing starting time of the second target audience user, adding second drawn content drawn by the second target audience user in the area where the first drawn content is located to obtain the target object.

Optionally, the displaying, by the second display module, in response to the drawing request for the target drawing task, a first drawing content drawn by the first target viewer user in the first live interactive interface includes:

in response to a drawing request for the target drawing task, highlighting user information of the first target audience user in the first live interactive interface;

and when the time reaches the end drawing time of the first target audience user, in the first direct-playing interactive interface, canceling and highlighting the user information of the first target audience user, and displaying the first drawing content drawn by the first target audience user.

Optionally, the at least two target audience users refer to audience users who establish a voice connection between the virtual live broadcast room and the anchor user; the drawing sequence of each of the at least two target audience users is determined according to the time when the at least two target audience users establish the voice connection with the anchor user.

Optionally, the displaying, by the second display module, in response to the drawing request for the target drawing task, a first drawing content drawn by the first target viewer user in the first live interactive interface includes:

responding to a drawing request aiming at the target drawing task, and acquiring initial drawing content drawn by the first target audience user and first size information of a display interface of an audience terminal corresponding to the first target audience user;

acquiring second size information of a display interface of the audience terminal corresponding to the second target audience user;

adjusting the initial drawing content according to the first size information and the second size information to obtain first drawing content;

and displaying the first drawing content in the first direct-playing interactive interface.

Optionally, the adjusting, by the second display module, the initial drawing content according to the first size information and the second size information to obtain a first drawing content includes:

determining, according to the first size information and the second size information, a size ratio between a display interface of an audience terminal corresponding to the first target audience user and a display interface of an audience terminal corresponding to the second target audience user;

and adjusting the initial drawing content according to the size ratio to obtain a first drawing content.

Optionally, when the time reaches the drawing start time of the second target audience user, the second display module adds, in the area where the first drawing content is located, second drawing content drawn by the second target audience user to obtain the target object, where the second drawing content includes:

when the time reaches the drawing starting time of the second target audience user, calling a drawing detection function to detect a display interface of an audience terminal corresponding to the second target audience user to obtain a drawing array;

calling an analysis function to analyze the drawing array to obtain drawing information; the drawing information comprises a drawing line segment, position information of the drawing line segment and color information;

and calling a drawing function to draw according to the drawing information to obtain the second drawing content, and adding the second drawing content in the area where the first drawing content is located to obtain the target object.

Optionally, the second display module may be further configured to:

if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying the scores of all target audience users in the first direct-playing interactive interface; and the scores of all target audience users are obtained by scoring the drawn contents of all target audience users by the rest audience users in the virtual live broadcast room, wherein the rest audience users refer to the audience users in the virtual live broadcast room except for the at least two target audience users.

Optionally, the second display module may be further configured to:

if the verification result of the predicted object attribute indicates that the predicted object attribute is valid, displaying a first resource package in a resource display area of the first direct-playing interactive interface; resources in the first resource package are for allocation to the anchor user;

if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying a second resource packet in the resource display area of the first direct-broadcasting interactive interface, and displaying user information of a specified audience user in the second resource packet in a highlighted mode; the specified audience user refers to the target audience user with the highest score in the at least two target audience users, and the resources in the second resource packet are used for being distributed to the audience users in the virtual live broadcast room.

According to an embodiment of the present application, the steps involved in the live data processing method shown in fig. 3 may be performed by respective modules in the live data processing apparatus shown in fig. 10. For example, step S101 shown in fig. 3 may be performed by the first display module 801 in fig. 10, and step S102 shown in fig. 3 may be performed by the second display module 802 in fig. 10; step S103 shown in fig. 3 may be performed by the sending module 803 in fig. 10; step S104 shown in fig. 3 may be performed by the third display module 804 in fig. 10.

According to an embodiment of the present application, each module in the live data processing apparatus shown in fig. 10 may be respectively or entirely combined into one or several units to form the live data processing apparatus, or some unit(s) may be further split into multiple sub-units with smaller functions, which may implement the same operation without affecting implementation of technical effects of the embodiment of the present application. The modules are divided based on logic functions, and in practical application, the functions of one module can be realized by a plurality of units, or the functions of a plurality of modules can be realized by one unit. In other embodiments of the present application, the live data processing apparatus may also include other units, and in practical applications, these functions may also be implemented by assistance of other units, and may be implemented by cooperation of multiple units.

According to an embodiment of the present application, a live data processing apparatus as shown in fig. 10 may be constructed by running a computer program (including program code) capable of executing the steps involved in the corresponding method shown in fig. 3 on a general-purpose computer device, such as a computer including processing elements such as a Central Processing Unit (CPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), and storage elements, thereby implementing the live data processing method of the embodiments of the present application. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the computing device via the computer-readable recording medium.

In the application, the anchor terminal can display a plurality of drawing tasks in the second live broadcast interactive interface in an encrypted manner and, in response to a selection operation on the plurality of drawing tasks, send the target drawing task selected by the anchor user to the audience terminal. The audience terminal can display the target drawing task selected by the anchor user in the first live broadcast interactive interface of the virtual live broadcast room and, in response to a drawing request for the target drawing task, display in the first live broadcast interactive interface the target object drawn by the target audience user of the virtual live broadcast room. Further, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal can display the target object in the second live broadcast interactive interface and the anchor user can predict the object attribute of the target object to obtain the predicted object attribute. The anchor terminal then verifies the validity of the predicted object attribute of the target object to obtain a verification result and sends the verification result to the audience terminal. After receiving the verification result, the audience terminal can display the verification result in the first live broadcast interactive interface. Having the target audience users draw in the live broadcast interaction interface realizes the interaction between the anchor user and the target audience users, enhances the participation of the target audience users, and increases their sense of closeness to the virtual live broadcast room. In addition, the drawing tasks selected by the anchor users of different virtual live broadcast rooms differ, so the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room is improved.

Fig. 11 is a schematic structural diagram of a live data processing apparatus 2 according to an embodiment of the present application. The live data processing apparatus 2 may be a computer program (including program code) running on a computer device; for example, the live data processing apparatus 2 is application software. The apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. As shown in fig. 11, the live data processing apparatus 2 may include: a first display module 901, a sending module 902, a second display module 903 and a third display module 904.

The first display module is used for responding to an interaction request for the virtual live broadcast room and displaying at least two drawing tasks in a second live broadcast interactive interface of the virtual live broadcast room in an encrypted manner;

the sending module is used for responding to a selection request for the at least two drawing tasks and sending the target drawing task selected by the anchor user of the virtual live broadcast room to the audience terminal, so that the audience terminal displays the target drawing task in a first live broadcast interactive interface;

the second display module is used for displaying the target object in the second live broadcast interactive interface; the target object is drawn by the target audience user for the target drawing task;

and the third display module is used for responding to a prediction request aiming at the object attribute of the target object, displaying the predicted object attribute predicted by the anchor user in the second live broadcast interactive interface, and displaying a verification result used for indicating the effectiveness of the predicted object attribute.

Optionally, the third display module, in response to a prediction request for the object attribute of the target object, displays, in the second live broadcast interactive interface, a predicted object attribute predicted by the anchor user, where the predicted object attribute includes:

responding to a prediction request aiming at the object attribute of the target object, and acquiring voice data input by the anchor user for predicting the object attribute of the target object;

performing semantic extraction on the voice data to obtain semantic information of the voice data;

and determining the predicted object attribute of the target object according to the semantic information of the voice data, and displaying the predicted object attribute in the second live broadcast interactive interface.

Optionally, the third display module displays, in the first live interactive interface, a verification result for indicating validity of the predicted object attribute, and the verification result includes:

acquiring standard object attributes of an object indicated by the target drawing task;

determining a degree of match between the standard object attribute and the predicted object attribute;

if the matching degree is larger than a matching threshold value, displaying a verification result for indicating that the attribute of the predicted object is valid in the first live interaction interface;

and if the matching degree is smaller than or equal to the matching threshold, displaying a verification result for indicating that the predicted object attribute is invalid in the first live interactive interface.

Optionally, the determining, by the third display module, a matching degree between the standard object attribute and the predicted object attribute includes:

performing feature extraction on the standard object attribute to obtain a feature vector of the standard object attribute;

performing feature extraction on the predicted object attribute to obtain a feature vector of the predicted object attribute;

and determining the distance between the feature vector of the standard object attribute and the feature vector of the predicted object attribute, and determining the matching degree between the standard object attribute and the predicted object attribute according to the distance.

According to an embodiment of the present application, the steps involved in the live data processing method shown in fig. 9 may be performed by respective modules in the live data processing apparatus shown in fig. 11. For example, step S201 shown in fig. 9 may be performed by the first display module 901 in fig. 11, and step S202 shown in fig. 9 may be performed by the sending module 902 in fig. 11; step S203 shown in fig. 9 may be performed by the second display module 903 in fig. 11; step S204 shown in fig. 9 may be performed by the third display module 904 in fig. 11.

According to an embodiment of the present application, the modules of the live data processing apparatus 2 shown in fig. 11 may be respectively or entirely combined into one or several units to form the apparatus, or some unit(s) may be further split into multiple sub-units with smaller functions, which can implement the same operations without affecting the realization of the technical effects of the embodiments of the present application. The modules are divided based on logical functions; in practical applications, the function of one module may be realized by multiple units, or the functions of multiple modules may be realized by one unit. In other embodiments of the present application, the live data processing apparatus may also include other units; in practical applications, these functions may also be implemented with the assistance of other units and may be implemented by multiple units in cooperation.

According to an embodiment of the present application, a live data processing apparatus as shown in fig. 11 may be constructed by running a computer program (including program code) capable of executing the steps involved in the corresponding method shown in fig. 9 on a general-purpose computer device, such as a computer including processing elements such as a Central Processing Unit (CPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), and storage elements, thereby implementing the live data processing method of the embodiments of the present application. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the computing device via the computer-readable recording medium.

In the application, the anchor terminal can display a plurality of drawing tasks in the second live broadcast interactive interface in an encrypted manner and, in response to a selection operation on the plurality of drawing tasks, send the target drawing task selected by the anchor user to the audience terminal. The audience terminal can display the target drawing task selected by the anchor user in the first live broadcast interactive interface of the virtual live broadcast room and, in response to a drawing request for the target drawing task, display in the first live broadcast interactive interface the target object drawn by the target audience user of the virtual live broadcast room. Further, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal can display the target object in the second live broadcast interactive interface and the anchor user can predict the object attribute of the target object to obtain the predicted object attribute. The anchor terminal then verifies the validity of the predicted object attribute of the target object to obtain a verification result and sends the verification result to the audience terminal. After receiving the verification result, the audience terminal can display the verification result in the first live broadcast interactive interface. Having the target audience users draw in the live broadcast interaction interface realizes the interaction between the anchor user and the target audience users, enhances the participation of the target audience users, and increases their sense of closeness to the virtual live broadcast room. In addition, the drawing tasks selected by the anchor users of different virtual live broadcast rooms differ, so the diversity of live interaction can be effectively improved, and the interaction effect of the virtual live broadcast room is improved.

Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 12, the computer apparatus 1000 may include: the processor 1001, the network interface 1004, and the memory 1005, and the computer apparatus 1000 may further include: a user interface 1003, and at least one communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display) and a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory). The memory 1005 may optionally be at least one memory device located remotely from the processor 1001. As shown in fig. 12, a memory 1005, which is a kind of computer-readable storage medium, may include therein an operating system, a network communication module, a user interface module, and a device control application program.

In the computer device 1000 shown in fig. 12, the network interface 1004 may provide a network communication function; the user interface 1003 is an interface for providing a user with input; and the processor 1001 may be used to invoke a device control application stored in the memory 1005 to implement:

displaying a target drawing task in a first live broadcast interactive interface of a virtual live broadcast room; the target drawing task is selected by an anchor user of the virtual live broadcast room from at least two drawing tasks displayed in an encrypted manner in a second live broadcast interactive interface;

in response to a drawing request for the target drawing task, displaying, in the first live broadcast interactive interface, a target object drawn by a target audience user of the virtual live broadcast room;

sending the target object to the anchor terminal corresponding to the anchor user, so that the anchor terminal displays the target object in the second live broadcast interactive interface;

displaying, in the first live broadcast interactive interface, a verification result indicating the validity of a predicted object attribute; the predicted object attribute is obtained by the anchor user predicting the object attribute of the target object.

Optionally, the number of target audience users is at least two, and the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement displaying, in the first live broadcast interactive interface, the target object drawn by the target audience users of the virtual live broadcast room in response to a drawing request for the target drawing task, which includes:

in response to a drawing request for the target drawing task, drawing the target object in the first live broadcast interactive interface according to the drawing order of each of the at least two target audience users.

Optionally, the at least two target audience users include a first target audience user and a second target audience user; the drawing order of the first target audience user is prior to the drawing order of the second target audience user;

the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement drawing the target object in the first live broadcast interactive interface according to the drawing order of each of the at least two target audience users in response to a drawing request for the target drawing task, which includes:

in response to a drawing request for the target drawing task, displaying, in the first live broadcast interactive interface, first drawing content drawn by the first target audience user;

and when the drawing start time of the second target audience user is reached, adding, in the area where the first drawing content is located, second drawing content drawn by the second target audience user to obtain the target object.
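As an illustration only, the turn-based drawing flow above can be sketched in TypeScript as follows. The type and function names (DrawingTurn, Stroke, scheduleTurns) and the use of a canvas context are assumptions made for the example and are not part of the disclosed embodiments.

```typescript
// Minimal sketch: render each target audience user's strokes onto a shared canvas
// once that user's drawing start time is reached, so the target object accumulates
// in drawing order. All names here are illustrative.
interface Stroke {
  points: { x: number; y: number }[];
  color: string;
}

interface DrawingTurn {
  userId: string;
  startTime: number;  // timestamp at which this user's turn begins
  strokes: Stroke[];  // content drawn by the user during the turn
}

function scheduleTurns(ctx: CanvasRenderingContext2D, turns: DrawingTurn[]): void {
  const ordered = [...turns].sort((a, b) => a.startTime - b.startTime);
  for (const turn of ordered) {
    const delay = Math.max(0, turn.startTime - Date.now());
    setTimeout(() => {
      for (const stroke of turn.strokes) {
        ctx.strokeStyle = stroke.color;
        ctx.beginPath();
        stroke.points.forEach((p, i) =>
          i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)
        );
        ctx.stroke();
      }
    }, delay);
  }
}
```

Sorting the turns by start time and scheduling each one independently keeps the accumulated picture consistent even if turn data arrives out of order.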

Optionally, the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement displaying, in the first live broadcast interactive interface, the first drawing content drawn by the first target audience user in response to a drawing request for the target drawing task, which includes:

in response to a drawing request for the target drawing task, highlighting user information of the first target audience user in the first live interactive interface;

and when the end drawing time of the first target audience user is reached, canceling the highlighting of the user information of the first target audience user in the first live broadcast interactive interface, and displaying the first drawing content drawn by the first target audience user.

Optionally, the at least two target audience users refer to audience users in the virtual live broadcast room who have established a voice connection with the anchor user; the drawing order of each of the at least two target audience users is determined according to the time at which each target audience user established the voice connection with the anchor user.

Optionally, the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement displaying, in the first live broadcast interactive interface, the first drawing content drawn by the first target audience user in response to a drawing request for the target drawing task, which includes:

in response to a drawing request for the target drawing task, acquiring initial drawing content drawn by the first target audience user and first size information of the display interface of the audience terminal corresponding to the first target audience user;

acquiring second size information of a display interface of the audience terminal corresponding to the second target audience user;

adjusting the initial drawing content according to the first size information and the second size information to obtain first drawing content;

and displaying the first drawing content in the first live broadcast interactive interface.

Optionally, the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement adjusting the initial drawing content according to the first size information and the second size information to obtain the first drawing content, which includes:

determining, according to the first size information and the second size information, a size ratio between a display interface of an audience terminal corresponding to the first target audience user and a display interface of an audience terminal corresponding to the second target audience user;

and adjusting the initial drawing content according to the size ratio to obtain the first drawing content.
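As an illustrative example of the size-based adjustment described above, the sketch below scales stroke coordinates captured on one terminal to another terminal's display size using the width and height ratios. The ScreenSize and Point types and the adjustDrawingContent name are assumptions for the example, not terms from this application.

```typescript
// Sketch: scale drawing content from the first terminal's display size to the
// second terminal's display size using the per-axis size ratio.
interface ScreenSize { width: number; height: number }
interface Point { x: number; y: number }

function adjustDrawingContent(
  strokes: Point[][],
  first: ScreenSize,   // display size of the first target audience user's terminal
  second: ScreenSize   // display size of the second target audience user's terminal
): Point[][] {
  const ratioX = second.width / first.width;
  const ratioY = second.height / first.height;
  return strokes.map(stroke =>
    stroke.map(p => ({ x: p.x * ratioX, y: p.y * ratioY }))
  );
}
```

If the two aspect ratios differ and the drawing must not be distorted, a single uniform ratio (for example, the smaller of the two) could be used instead of independent per-axis ratios.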

Optionally, the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement adding, in the area where the first drawing content is located, the second drawing content drawn by the second target audience user to obtain the target object when the drawing start time of the second target audience user is reached, which includes:

when the drawing start time of the second target audience user is reached, calling a drawing detection function to detect the display interface of the audience terminal corresponding to the second target audience user to obtain a drawing array;

calling an analysis function to analyze the drawing array to obtain drawing information; the drawing information comprises a drawing line segment, position information of the drawing line segment and color information;

and calling a drawing function to draw according to the drawing information to obtain the second drawing content, and adding the second drawing content in the area where the first drawing content is located to obtain the target object.
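The parse-then-draw step above can be pictured with the following TypeScript sketch. It assumes a hypothetical layout for the drawing array (a flat list of point records tagged with a segment id and a color); the actual drawing detection, analysis, and drawing functions are not specified by this application.

```typescript
// Assumed layout of one record in the drawing array; illustrative only.
interface DrawRecord { x: number; y: number; color: string; segmentId: number }

interface Segment { points: { x: number; y: number }[]; color: string }

// Analysis step: group the raw array into line segments with position and color info.
function parseDrawingArray(records: DrawRecord[]): Segment[] {
  const segments = new Map<number, Segment>();
  for (const r of records) {
    const seg = segments.get(r.segmentId) ?? { points: [], color: r.color };
    seg.points.push({ x: r.x, y: r.y });
    segments.set(r.segmentId, seg);
  }
  return [...segments.values()];
}

// Drawing step: render the parsed segments into the region occupied by the
// first drawing content, producing the combined target object.
function drawSegments(ctx: CanvasRenderingContext2D, segments: Segment[]): void {
  for (const seg of segments) {
    ctx.strokeStyle = seg.color;
    ctx.beginPath();
    seg.points.forEach((p, i) =>
      i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)
    );
    ctx.stroke();
  }
}
```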

Optionally, the processor 1001 may be configured to invoke a device control application stored in the memory 1005 to implement:

and if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying the scores of all target audience users in the first live broadcast interactive interface.

The scores of the target audience users are obtained by the remaining audience users in the virtual live broadcast room scoring the drawn content of each target audience user, where the remaining audience users refer to the audience users in the virtual live broadcast room other than the at least two target audience users.

Optionally, the processor 1001 may be configured to invoke a device control application stored in the memory 1005 to implement:

if the verification result of the predicted object attribute indicates that the predicted object attribute is valid, displaying a first resource package in a resource display area of the first live broadcast interactive interface; resources in the first resource package are to be allocated to the anchor user;

and if the verification result of the predicted object attribute indicates that the predicted object attribute is invalid, displaying a second resource package in the resource display area of the first live broadcast interactive interface, and highlighting user information of a specified audience user in the second resource package.

The specified audience user refers to the target audience user with the highest score among the at least two target audience users, and the resources in the second resource package are to be distributed to the audience users in the virtual live broadcast room.

Optionally, the processor 1001 may be configured to invoke a device control application stored in the memory 1005 to implement:

in response to an interaction request for a virtual live broadcast room, displaying at least two drawing tasks in an encrypted manner in a second live broadcast interactive interface of the virtual live broadcast room;

in response to a selection request for the at least two drawing tasks, sending a target drawing task selected by an anchor user of the virtual live broadcast room to an audience terminal, so that the audience terminal displays the target drawing task in a first live broadcast interactive interface;

displaying a target object in the second live interactive interface; the target object is drawn by the target audience user for the target drawing task;

in response to a prediction request for the object attribute of the target object, displaying a predicted object attribute predicted by the anchor user in the second live interactive interface, and displaying a verification result for indicating the validity of the predicted object attribute.

Optionally, the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement displaying, in the second live broadcast interactive interface, the predicted object attribute predicted by the anchor user in response to a prediction request for the object attribute of the target object, which includes:

in response to a prediction request for the object attribute of the target object, acquiring voice data input by the anchor user for predicting the object attribute of the target object;

performing semantic extraction on the voice data to obtain semantic information of the voice data;

and determining the predicted object attribute of the target object according to the semantic information of the voice data, and displaying the predicted object attribute in the second live broadcast interactive interface.
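A minimal sketch of this voice-based prediction step is given below. It assumes a placeholder transcribeSpeech helper standing in for whatever speech recognition service the anchor terminal uses, and a simple keyword match standing in for semantic extraction; none of these names come from the application itself.

```typescript
// Placeholder for an external speech-to-text service; not part of the disclosure.
declare function transcribeSpeech(audio: ArrayBuffer): Promise<string>;

// Very simple semantic-extraction stand-in: pick the candidate attribute
// (e.g. an object name) whose keyword appears in the transcript.
async function predictObjectAttribute(
  audio: ArrayBuffer,
  candidateAttributes: string[]
): Promise<string | null> {
  const transcript = (await transcribeSpeech(audio)).toLowerCase();
  return (
    candidateAttributes.find(attr => transcript.includes(attr.toLowerCase())) ?? null
  );
}
```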

Optionally, the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement displaying, in the first live broadcast interactive interface, a verification result indicating the validity of the predicted object attribute, which includes:

acquiring a standard object attribute of the object indicated by the target drawing task;

determining a degree of match between the standard object attribute and the predicted object attribute;

if the matching degree is greater than a matching threshold, displaying, in the first live broadcast interactive interface, a verification result indicating that the predicted object attribute is valid;

and if the matching degree is less than or equal to the matching threshold, displaying, in the first live broadcast interactive interface, a verification result indicating that the predicted object attribute is invalid.

Optionally, the processor 1001 may be configured to call a device control application stored in the memory 1005 to implement determining a matching degree between the standard object attribute and the predicted object attribute, including:

performing feature extraction on the standard object attribute to obtain a feature vector of the standard object attribute;

performing feature extraction on the predicted object attribute to obtain a feature vector of the predicted object attribute;

and determining the distance between the feature vector of the standard object attribute and the feature vector of the predicted object attribute, and determining the matching degree between the standard object attribute and the predicted object attribute according to the distance.
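To make the match-degree check concrete, here is a small sketch under assumed conventions: the attributes are embedded as numeric vectors by some unspecified feature extractor, the distance is Euclidean, the mapping from distance to matching degree is 1 / (1 + distance), and the threshold value 0.8 is purely illustrative.

```typescript
// Sketch only: the feature extractor, the distance metric, and the mapping from
// distance to matching degree are assumptions, since the application does not fix them.
function euclideanDistance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Map the distance into (0, 1]: identical vectors give a matching degree of 1.
function matchingDegree(standardFeature: number[], predictedFeature: number[]): number {
  return 1 / (1 + euclideanDistance(standardFeature, predictedFeature));
}

// Verification rule described above: valid if the matching degree exceeds the threshold.
function isPredictionValid(
  standardFeature: number[],
  predictedFeature: number[],
  matchThreshold = 0.8 // illustrative value; the application does not specify one
): boolean {
  return matchingDegree(standardFeature, predictedFeature) > matchThreshold;
}
```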

In the present application, the anchor terminal can display a plurality of drawing tasks in an encrypted manner in the second live broadcast interactive interface and, in response to a selection operation for the plurality of drawing tasks, send the target drawing task selected by the anchor user to the audience terminal. The audience terminal can display the target drawing task selected by the anchor user in the first live broadcast interactive interface of the virtual live broadcast room and, in response to a drawing request for the target drawing task, display the target object drawn by the target audience user of the virtual live broadcast room in the first live broadcast interactive interface. Further, the target object is sent to the anchor terminal corresponding to the anchor user, so that the anchor terminal can display the target object in the second live broadcast interactive interface and the anchor user can predict the object attribute of the target object to obtain a predicted object attribute. The anchor terminal then verifies the validity of the predicted object attribute of the target object to obtain a verification result and sends the verification result to the audience terminal. After receiving the verification result, the audience terminal can display it in the first live broadcast interactive interface. By having the target audience user draw in the live broadcast interactive interface, interaction between the anchor user and the target audience user is realized, the participation of the target audience user is enhanced, and the sense of closeness of the target audience user to the virtual live broadcast room is increased. In addition, since the drawing tasks selected by anchor users differ across different virtual live broadcast rooms, the diversity of live broadcast interaction can be effectively improved, thereby improving the interaction effect of the virtual live broadcast room.

It should be understood that the computer device 1000 described in this embodiment of the present application may perform the live data processing method described in the embodiments corresponding to Fig. 3 and Fig. 9, and may also perform the functions of the live data processing apparatus described in the embodiments corresponding to Fig. 10 and Fig. 11, which are not described herein again. In addition, the beneficial effects of the same method are not repeated here.

Further, it should be noted that an embodiment of the present application also provides a computer-readable storage medium, and the computer-readable storage medium stores the computer program executed by the aforementioned live data processing apparatus. The computer program includes program instructions, and when a processor executes the program instructions, the live data processing method described in the embodiments corresponding to Fig. 3 and Fig. 9 can be performed, which is therefore not described herein again. In addition, the beneficial effects of the same method are not repeated here. For technical details not disclosed in the embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of the method embodiments of the present application.

By way of example, the program instructions described above may be executed on one computer device, or on multiple computer devices located at one site, or distributed across multiple sites and interconnected by a communication network, which may comprise a blockchain network.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

The above disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the present application; the present application is not limited thereto, and equivalent variations and modifications made in accordance with the present application still fall within its scope.
