Live stream display method and device, electronic equipment and readable storage medium
Reading note: This technology, "Live stream display method and device, electronic equipment and readable storage medium" (直播流显示方法、装置、电子设备及可读存储介质), was designed and created by 邱俊琪 on 2019-11-07. Its main content includes: embodiments of the present application provide a live stream display method and device, electronic equipment, and a readable storage medium. When an Augmented Reality (AR) display instruction is detected, the method enters an AR recognition plane and generates a corresponding target model object in the AR recognition plane, and then renders the received live stream onto the target model object, so that the live stream is displayed on the target model object. In this way, the application of an Internet live stream in an AR real scene can be realized; audiences can watch the Internet live stream on a target model object rendered in the real scene, the live broadcast playability is improved, and the retention rate of users is effectively improved.
1. A live stream display method is applied to a live viewing terminal and comprises the following steps:
when an Augmented Reality (AR) display instruction is detected, entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane;
rendering the received live stream onto the target model object so that the live stream is displayed on the target model object.
2. The live streaming display method according to claim 1, wherein the step of entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane when detecting an augmented reality AR display instruction includes:
when an Augmented Reality (AR) display instruction is detected, determining a target model object to be generated according to the AR display instruction;
loading a model file of the target model object to obtain the target model object;
entering an AR identification plane, and judging the tracking state of the AR identification plane;
and when the tracking state of the AR identification plane is an online tracking state, generating a corresponding target model object in the AR identification plane.
3. The live-streaming display method according to claim 2, wherein the step of loading the model file of the target model object to obtain the target model object comprises:
importing the three-dimensional model of the target model object by using a preset model import plug-in to obtain an sfb format file corresponding to the target model object;
and loading the sfb format file through a preset rendering model to obtain the target model object.
4. The live-streaming display method according to claim 2, wherein the step of generating a corresponding target model object in the AR recognition plane comprises:
creating a tracing point on a preset point of the AR identification plane so as to fix the target model object on the preset point through the tracing point;
creating a corresponding display node at the position of the point, and creating a first child node inherited to the display node so as to adjust and display the target model object in the AR identification plane through the first child node;
creating a second child node inherited to the first child node to replace a bone adjustment node with the second child node upon detection of an addition request of the bone adjustment node, wherein the bone adjustment node is used to adjust a bone point of the target model object.
5. The live-streaming display method of claim 4, wherein the step of presenting the target model object in the AR recognition plane through the first child node comprises:
and calling a binding setting method of the first child node to bind the target model object to the first child node so as to finish the display of the target model object in the AR identification plane.
6. The live-streaming display method of claim 4, wherein the adjusting the target model object through the first child node comprises one or more of the following adjusting modes:
scaling the target model object;
translating the target model object;
rotating the target model object.
7. The live-stream display method according to any one of claims 1 to 6, wherein the step of rendering the received live stream onto the target model object to display the live stream on the target model object includes:
calling a Software Development Kit (SDK) to pull a live stream from a live server and creating an external texture of the live stream;
transmitting the texture of the live stream to a decoder of the SDK for rendering;
and after receiving the rendering starting state of the SDK decoder, calling an external texture setting method to render the external texture of the live stream to the target model object so as to display the live stream on the target model object.
8. The live-streaming display method according to claim 7, wherein the step of calling an external texture setting method to render an external texture of the live-streaming onto the target model object comprises:
traversing each region in the target model object, and determining at least one model rendering region available for rendering a live stream in the target model object;
and calling an external texture setting method to render the external texture of the live stream onto the at least one model rendering area.
9. A live stream display device, applied to a live viewing terminal, the device comprising:
the generating module is used for entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane when an augmented reality AR display instruction is detected;
and the display module is used for rendering the received live stream to the target model object so as to display the live stream on the target model object.
10. An electronic device comprising a machine-readable storage medium having stored thereon machine-executable instructions and a processor, wherein the processor, when executing the machine-executable instructions, implements the live stream display method of any one of claims 1-8.
11. A readable storage medium having stored therein machine executable instructions which when executed perform the live-stream display method of any one of claims 1-8.
Technical Field
The application relates to the technical field of internet live broadcast, in particular to a live broadcast stream display method and device, electronic equipment and a readable storage medium.
Background
Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds a corresponding image; its goal is to overlay the virtual world on the real world on a screen and enable interaction with it. Augmented reality technology not only presents information of the real world but also displays virtual information at the same time; the two kinds of information complement and superimpose each other, so that the real world and computer graphics are synthesized together and the real world can be seen surrounding the computer graphics.
Although AR technology is very widely applied, its application in Internet live broadcast is rare, and the application of Internet live broadcast in AR real scenes is lacking. As a result, live broadcast playability is not high, and it is difficult to effectively improve the retention rate of users.
Disclosure of Invention
In view of this, an object of the present application is to provide a live stream display method, an apparatus, an electronic device, and a readable storage medium, which can implement application of an internet live stream in an AR real scene, improve live playability, and further effectively improve a retention rate of a user.
According to an aspect of the present application, a live stream display method is provided, which is applied to a live viewing terminal, and the method includes:
when an Augmented Reality (AR) display instruction is detected, entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane;
rendering the received live stream onto the target model object so that the live stream is displayed on the target model object.
In a possible implementation, the step of entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane when the augmented reality AR display instruction is detected includes:
when an Augmented Reality (AR) display instruction is detected, determining a target model object to be generated according to the AR display instruction;
loading a model file of the target model object to obtain the target model object;
entering an AR identification plane, and judging the tracking state of the AR identification plane;
and when the tracking state of the AR identification plane is an online tracking state, generating a corresponding target model object in the AR identification plane.
In a possible implementation manner, the step of loading the model file of the target model object to obtain the target model object includes:
importing the three-dimensional model of the target model object by using a preset model import plug-in to obtain an sfb format file corresponding to the target model object;
and loading the sfb format file through a preset rendering model to obtain the target model object.
In a possible embodiment, the step of generating a corresponding target model object in the AR recognition plane includes:
creating a tracing point on a preset point of the AR identification plane so as to fix the target model object on the preset point through the tracing point;
creating a corresponding display node at the position of the point, and creating a first child node inherited to the display node so as to adjust and display the target model object in the AR identification plane through the first child node;
creating a second child node inherited to the first child node to replace a bone adjustment node with the second child node upon detection of an addition request of the bone adjustment node, wherein the bone adjustment node is used to adjust a bone point of the target model object.
In one possible embodiment, the step of presenting the target model object in the AR recognition plane by the first child node comprises:
and calling a binding setting method of the first child node to bind the target model object to the first child node so as to finish the display of the target model object in the AR identification plane.
In a possible embodiment, the adjusting the target model object by the first child node includes one or more of the following adjusting manners:
scaling the target model object;
translating the target model object;
rotating the target model object.
In a possible implementation, the step of rendering the received live stream onto the target model object to display the live stream on the target model object includes:
calling a Software Development Kit (SDK) to pull a live stream from a live server and creating an external texture of the live stream;
transmitting the texture of the live stream to a decoder of the SDK for rendering;
and after receiving the rendering starting state of the SDK decoder, calling an external texture setting method to render the external texture of the live stream to the target model object so as to display the live stream on the target model object.
In a possible implementation, the step of calling an external texture setting method to render an external texture of the live stream onto the target model object includes:
traversing each region in the target model object, and determining at least one model rendering region available for rendering a live stream in the target model object;
and calling an external texture setting method to render the external texture of the live stream onto the at least one model rendering area.
According to another aspect of the present application, there is provided a live stream display apparatus applied to a live viewing terminal, the apparatus including:
the generating module is used for entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane when an augmented reality AR display instruction is detected;
and the display module is used for rendering the received live stream to the target model object so as to display the live stream on the target model object.
According to another aspect of the present application, an electronic device is provided, which includes a machine-readable storage medium and a processor, where the machine-readable storage medium stores machine-executable instructions, and the processor, when executing the machine-executable instructions, implements the live stream display method described above.
According to another aspect of the present application, there is provided a readable storage medium having stored therein machine executable instructions which, when executed, implement the aforementioned live stream display method.
Based on any one of the above aspects, when an Augmented Reality (AR) display instruction is detected, the method enters an AR recognition plane and generates a corresponding target model object in the AR recognition plane, and then renders the received live stream to the target model object, so that the live stream is displayed on the target model object. Therefore, the application of the Internet live stream in the AR real scene can be realized, audiences can watch the Internet live stream on a target model object rendered in the real scene, the live broadcast playability is improved, and the retention rate of users is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic diagram of an interaction scene of a live broadcast system provided in an embodiment of the present application;
Fig. 2 is a flowchart of a live stream display method provided in an embodiment of the present application;
Fig. 3 is a flowchart of the sub-steps of step S110 shown in fig. 2;
Fig. 4 is a flowchart of the sub-steps of step S120 shown in fig. 2;
Fig. 5 is a schematic diagram illustrating a live stream provided in an embodiment of the present application not displayed on a target model object;
Fig. 6 is a schematic diagram illustrating a live stream displayed on a target model object according to an embodiment of the present application;
Fig. 7 is a schematic diagram of functional modules of a live stream display apparatus provided in an embodiment of the present application;
Fig. 8 is a schematic block diagram of a structure of an electronic device for implementing the live stream display method provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some of the embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
Referring to fig. 1, fig. 1 shows a schematic diagram of an interaction scene of a live broadcast system provided in an embodiment of the present application.
In order to implement the application of an Internet live stream in an AR real scene, improve live playability, and thereby effectively improve the retention rate of users, fig. 2 shows a flowchart of the live stream display method provided in an embodiment of the present application. In this embodiment, the live stream display method may be executed by the live viewing terminal 200.
It should be understood that, in other embodiments, the order of some steps in the live stream display method of this embodiment may be interchanged according to actual needs, or some steps may be omitted or deleted. The detailed steps of the live stream display method are described below.
And step S110, when the augmented reality AR display instruction is detected, entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane.
And step S120, rendering the received live stream to a target model object so as to display the live stream on the target model object.
In this embodiment, for step S110, when a viewer of the live viewing terminal 200 logs in to a live room to be viewed, the viewer may choose to display the live room in an AR manner, thereby triggering the augmented reality AR display instruction.
When the target model object is displayed in the AR recognition plane, the received live stream may then be rendered onto the target model object, so that the viewer can watch the live stream on the target model object in the real scene.
In a possible implementation manner, for step S110, in order to improve the stability of AR display and avoid display errors of the target model object caused by an abnormality of the AR recognition plane after entering it, step S110 may be implemented by the following sub-steps, as further shown in fig. 3:
and a substep S111, when detecting the augmented reality AR display instruction, determining the target model object to be generated according to the AR display instruction.
And a substep S112, loading the model file of the target model object to obtain the target model object.
And a substep S113 of entering the AR identification plane and judging the tracking state of the AR identification plane.
And a substep S114, when the tracking state of the AR identification plane is an online tracking state, generating a corresponding target model object in the AR identification plane.
In this embodiment, after entering the AR recognition plane, the tracking state of the AR recognition plane may be determined. For example, upon entering the AR recognition plane, an update listener (for example, addOnUpdateListener) may be registered, the currently recognized AR recognition plane may then be obtained through the ArFragment, and whether its tracking state is the online tracking state may be checked.
Therefore, by checking the tracking state of the AR recognition plane upon entering it before executing the next operation, the stability of AR display can be improved, and errors in displaying the target model object due to an abnormal AR recognition plane can be avoided.
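The gating described above — deferring model generation until the recognized plane reports an active tracking state — can be sketched SDK-free as follows. `TrackingState` and `PlaneGate` are simplified stand-ins invented for illustration; the real flow obtains the state from the plane recognized through the ArFragment:

```java
// Simplified stand-in for the AR SDK's plane tracking states.
enum TrackingState { TRACKING, PAUSED, STOPPED }

// Gate that defers target model generation until the plane is actively tracked.
class PlaneGate {
    private boolean modelGenerated = false;

    // Called from the (hypothetical) per-frame update listener with the plane's state.
    // Returns true exactly once, when generation is allowed to proceed.
    boolean onPlaneUpdate(TrackingState state) {
        if (modelGenerated || state != TrackingState.TRACKING) {
            return false; // plane lost or paused: do not place the model yet
        }
        modelGenerated = true; // generate the target model object exactly once
        return true;
    }
}
```

In the real flow the state would come from the recognized plane itself; here it is passed in directly to show only the gating logic.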
For sub-step S111, the target model object may be a three-dimensional AR model to be displayed in the AR recognition plane. The target model object may be selected by the viewer in advance, or may be selected by default by the live viewing terminal 200.
For sub-step S112, in a possible implementation manner, the model objects are generally not stored as files in a standard format, but in a format specified by the software development kit of the AR. To facilitate loading and format conversion of the model objects, this embodiment may use a preset model import plug-in to import the three-dimensional model of the target model object to obtain an sfb format file corresponding to the target model object, and then load the sfb format file through a preset rendering model to obtain the target model object. For example, taking ARCore as the software development kit of the AR, the FBX 3D model of the target model object may be imported by using the google-sceneform-tools plug-in to obtain the sfb format file corresponding to the target model object, and the sfb format file may then be loaded through a ModelRenderable model to obtain the target model object.
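The two-stage load above — an import step that converts the source model into an sfb asset, followed by a runtime load step that only accepts sfb — can be illustrated with a minimal, SDK-free sketch. The `ModelAsset` type and both method names are invented stand-ins; the real pipeline uses the import plug-in and a ModelRenderable:

```java
// Invented stand-in for a model asset; the real pipeline produces a .sfb file on disk.
class ModelAsset {
    final String name;
    final String format;
    ModelAsset(String name, String format) { this.name = name; this.format = format; }
}

class ModelLoader {
    // Stand-in for the import plug-in step: converts the source model (e.g. FBX) into sfb format.
    static ModelAsset importModel(ModelAsset source) {
        return new ModelAsset(source.name, "sfb");
    }

    // Stand-in for the runtime load step: only sfb assets are accepted, mirroring
    // the constraint that the renderable is built from the converted file.
    static String loadRenderable(ModelAsset asset) {
        if (!"sfb".equals(asset.format)) {
            throw new IllegalArgumentException("expected sfb, got " + asset.format);
        }
        return "renderable:" + asset.name;
    }
}
```

The point of the two stages is that format conversion happens once at import time, so the runtime loader only ever sees the converted asset.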
For sub-step S114, in a possible implementation, in order to ensure that the target model object does not subsequently move with the camera in the AR recognition plane, and to allow the target model object to be adjusted in response to user operations, the generation process of the target model object is described below with reference to a possible example.
First, an anchor point Anchor may be created on a preset point of the AR recognition plane, so as to fix the target model object on the preset point through the anchor point Anchor.
Then, a corresponding display node AnchorNode is created at the position of the anchor point Anchor, and a first child node TransformableNode inheriting from the display node AnchorNode is created, so that the target model object can be adjusted and displayed through the first child node TransformableNode. For example, the adjustment of the target model object through the first child node TransformableNode includes one or more of the following adjustment modes:
1) Scaling the target model object, for example, enlarging or reducing the entire target model object, or enlarging or reducing a part of the target model object.
2) Translating the target model object, for example, moving the target model object by a preset distance in any direction (left, right, up, down, or diagonally).
3) Rotating the target model object, for example, rotating the target model object clockwise or counterclockwise.
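The three adjustment modes above amount to plain vector operations on the model's coordinates. This sketch uses a hand-rolled 3-component vector rather than any scene-graph API, and restricts rotation to the vertical (y) axis for brevity:

```java
// Minimal 3-component vector for illustrating scale / translate / rotate adjustments.
class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    // 1) Scaling: uniform enlargement or reduction of the model.
    Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }

    // 2) Translation: moving the model by an offset in any direction.
    Vec3 translate(double dx, double dy, double dz) { return new Vec3(x + dx, y + dy, z + dz); }

    // 3) Rotation about the vertical axis by an angle in radians
    //    (counterclockwise when viewed from above).
    Vec3 rotateY(double a) {
        return new Vec3(x * Math.cos(a) + z * Math.sin(a), y, -x * Math.sin(a) + z * Math.cos(a));
    }
}
```

In the real node hierarchy these operations would be driven by the user's gestures on the first child node; here they are shown in isolation.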
Further, a binding setting method of the first child node TransformableNode may be invoked to bind the target model object to the first child node TransformableNode, so as to complete the display of the target model object in the AR recognition plane.
Then, a second child node inheriting from the first child node TransformableNode is created, so that when an addition request for a skeleton adjustment node SkeletonNode is detected, the SkeletonNode can replace the second child node. The target model object may generally have a plurality of skeleton points, and the skeleton adjustment node SkeletonNode may be used to adjust the skeleton points of the target model object.
In this way, during the process of generating the corresponding target model object in the AR recognition plane, the target model object is fixed on the preset point through the anchor point, which ensures that it does not subsequently move with the camera in the AR recognition plane. The target model object is adjusted and displayed through the first child node, so that it can be adjusted and displayed in real time in response to user operations. In addition, considering that a skeleton adjustment node may later be added to adjust the skeleton of the target model object, a second child node inheriting from the first child node is reserved, so that the skeleton adjustment node can replace the second child node when it is subsequently added.
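The node arrangement in this walkthrough — anchor point, display node, first (transformable) child, and a reserved second child that a skeleton adjustment node can later replace — can be sketched as a plain parent-linked tree. `Node` and `ModelPlacement` here are invented stand-ins, not the AR SDK's AnchorNode/TransformableNode/SkeletonNode classes:

```java
import java.util.ArrayList;
import java.util.List;

// Invented minimal scene-graph node with a parent link and a child list.
class Node {
    final String name;
    Node parent;
    final List<Node> children = new ArrayList<>();

    Node(String name) { this.name = name; }

    void setParent(Node p) {
        if (parent != null) parent.children.remove(this);
        parent = p;
        if (p != null) p.children.add(this);
    }
}

class ModelPlacement {
    final Node anchorNode = new Node("AnchorNode");        // fixed at the anchor point
    final Node firstChild = new Node("TransformableNode"); // adjusts and displays the model
    Node secondChild = new Node("placeholder");            // reserved slot under the first child

    ModelPlacement() {
        firstChild.setParent(anchorNode);
        secondChild.setParent(firstChild);
    }

    // When a skeleton adjustment node is requested, it takes the reserved slot.
    void addSkeletonNode(Node skeletonNode) {
        Node old = secondChild;
        skeletonNode.setParent(old.parent);
        old.setParent(null);
        secondChild = skeletonNode;
    }
}
```

The reserved slot is the design choice worth noticing: because the second child already exists in the tree, adding skeleton adjustment later is a swap rather than a restructuring of the hierarchy.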
Based on the above description, in a possible implementation manner, with respect to step S120, in order to improve the real scene experience after the live stream is rendered to the target model object, a possible example is given below in conjunction with fig. 4 to describe step S120 in detail. Referring to fig. 4, step S120 may be implemented by the following sub-steps:
In sub-step S121, a software development kit SDK is called to pull the live stream from the live server, and an external texture of the live stream is created.
And a substep S122, transmitting the texture of the live stream to a decoder of the SDK for rendering.
And a substep S123 of, after receiving the rendering start state of the SDK decoder, calling an external texture setting method to render the external texture of the live stream onto the target model object so as to display the live stream on the target model object.
In this embodiment, for example, when the live viewing terminal 200 runs the Android system, the software development kit may be a hySDK; that is, the live stream may be pulled from the live server through the hySDK.
For example, there may be a plurality of regions on a typical target model object; some of the regions may be used only for model presentation, and some may be used to display a related video stream or other information. On this basis, each region in the target model object may be traversed to determine at least one model rendering region in the target model object that is available for rendering the live stream, and the external texture setting method may then be called to render the external texture of the live stream onto the at least one model rendering region. Optionally, the viewer may determine what is displayed in each model rendering region through the live viewing terminal 200.
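The region traversal above amounts to filtering the model's regions down to those flagged as usable for video, before the external texture is applied to each. The `Region` type and its flag are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Invented stand-in: a named region of the model, flagged if it may carry a video stream.
class Region {
    final String name;
    final boolean acceptsVideo;
    Region(String name, boolean acceptsVideo) { this.name = name; this.acceptsVideo = acceptsVideo; }
}

class LiveStreamMapper {
    // Traverse every region and keep only those usable for rendering the live stream;
    // the caller would then set the live stream's external texture on each result.
    static List<Region> renderRegions(List<Region> all) {
        List<Region> out = new ArrayList<>();
        for (Region r : all) {
            if (r.acceptsVideo) out.add(r); // e.g. a screen panel, not a decorative surface
        }
        return out;
    }
}
```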
For ease of illustration, the following briefly describes the target model object with reference to fig. 5 and fig. 6, which respectively provide schematic diagrams of the live stream not being displayed on the target model object and of the live stream being displayed on the target model object.
Referring to fig. 5, an interface schematic diagram of an exemplary AR recognition plane entered by the live viewing terminal 200 is shown, in which the live stream is not displayed on the target model object.
Referring to fig. 6, an interface schematic diagram of an exemplary AR recognition plane entered by the live viewing terminal 200 is shown, in which the live stream is displayed on the target model object.
Therefore, for audiences, the Internet live broadcast stream can be watched on the target model object rendered in the real scene, the live broadcast playability is improved, and the retention rate of the user is effectively improved.
Based on the same inventive concept, please refer to fig. 7, which shows a schematic diagram of functional modules of the live stream display apparatus provided in an embodiment of the present application. The live stream display apparatus may be divided into functional modules according to the above method embodiment, and may include a generating module and a display module.
The generating module is used for entering an AR recognition plane and generating a corresponding target model object in the AR recognition plane when an augmented reality AR display instruction is detected.
The display module is used for rendering the received live stream onto the target model object so as to display the live stream on the target model object.
In one possible implementation, the generating module is specifically used for:
when an Augmented Reality (AR) display instruction is detected, determining a target model object to be generated according to the AR display instruction;
loading a model file of the target model object to obtain a target model object;
entering an AR identification plane, and judging the tracking state of the AR identification plane;
and when the tracking state of the AR identification plane is an online tracking state, generating a corresponding target model object in the AR identification plane.
In one possible implementation, the generating module is specifically used for:
importing a three-dimensional model of a target model object by using a preset model import plug-in to obtain an sfb format file corresponding to the target model object;
and loading the sfb format file through a preset rendering model to obtain a target model object.
In one possible implementation, the generating module is specifically used for:
creating a tracing point on a preset point of the AR identification plane so as to fix the target model object on the preset point through the tracing point;
creating a corresponding display node at the position of the drawing point, and creating a first child node inherited to the display node so as to adjust and display the target model object through the first child node;
and creating a second child node inherited to the first child node to replace the bone adjustment node with the second child node when the adding request of the bone adjustment node is detected, wherein the bone adjustment node is used for adjusting the bone point of the target model object.
In one possible implementation, the generating module is further used for:
and calling a binding setting method of the first child node to bind the target model object to the first child node so as to complete the display of the target model object in the AR identification plane.
In one possible embodiment, the adjustment of the target model object by the first child node may include one or more of the following adjustment modes:
scaling the target model object;
translating the target model object;
the target model object is rotated.
In one possible implementation, the display module is specifically used for:
calling a Software Development Kit (SDK) to pull the live stream from the live server and creating an external texture of the live stream;
transmitting the texture of the live stream to a decoder of the SDK for rendering;
and after receiving the rendering starting state of the SDK decoder, calling an external texture setting method to render the external texture of the live stream to the target model object so as to display the live stream on the target model object.
In one possible implementation, the display module is further used for:
traversing each region in the target model object, and determining at least one model rendering region which can be used for rendering the live stream in the target model object;
and calling an external texture setting method to render the external texture of the live stream onto at least one model rendering area.
Based on the same inventive concept, please refer to fig. 8, which shows a schematic block diagram of a structure of an electronic device for implementing the live stream display method provided in an embodiment of the present application. The electronic device may include a machine-readable storage medium and a processor.
In this embodiment, the machine-readable storage medium stores machine-executable instructions corresponding to the live stream display method, and the processor implements the live stream display method provided by the foregoing method embodiment when executing the machine-executable instructions.
Since the implementation principle of the electronic device is the same as that of the foregoing method embodiment, details are not repeated here.
Further, the present application also provides a readable storage medium containing computer executable instructions, and when executed, the computer executable instructions may be used to implement the live stream display method provided by the foregoing method embodiment.
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the above method operations, and may also perform related operations in the live stream display method provided in any embodiment of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.