Live broadcasting method and device

Document No.: 97953 · Published: 2021-10-12

Reading note: this technique, "Live broadcasting method and device", was designed and created by Li Aiwei, Liu Ping and Li Yujie on 2020-04-08. Abstract: the application provides a live broadcast method and device. The method comprises: receiving at least one document sent by a server, wherein each document comprises document information and a document time; downloading at least one audio-video segment from the server and playing the at least one audio-video segment, wherein the first of the at least one audio-video segment is a first audio-video segment; determining the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, wherein the slicing time is the time at which the server sliced the audio-video stream to obtain the first audio-video segment; and displaying the document information of each document according to the delayed display duration of each document. The method and device display the audio-video segments and the document information in synchronization, thereby improving the user experience of the live broadcast device.

1. A live broadcasting method is applied to live broadcasting equipment, and comprises the following steps:

receiving at least one document sent by a server, wherein each document comprises document information and document time;

downloading at least one audio-video segment from the server and playing the at least one audio-video segment, wherein the first of the at least one audio-video segment is a first audio-video segment;

determining the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, wherein the slicing time is the time when the server slices the audio-video stream to obtain the first audio-video segment;

and displaying the document information of each document according to the delayed display time length of each document.

2. The method of claim 1, wherein downloading at least one audio-video segment from the server comprises:

in response to a user's instruction to join the live broadcast, acquiring list information corresponding to the first audio-video segment from the server, wherein the list information comprises an identifier of the at least one audio-video segment;

and downloading the at least one audio-video segment from the server according to the identifier of the at least one audio-video segment in the list information.

3. The method according to claim 1 or 2, wherein determining the time length of the delayed display of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment comprises:

sending request information to the server, wherein the request information is used for requesting the slicing time of the first audio-video segment;

acquiring the current playing progress of the at least one audio-video segment;

and determining the time length of delayed display of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment.

4. The method of claim 3, wherein, for a first document of the at least one document, determining the delayed display duration of the first document according to the document time of the first document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment comprises:

determining the difference obtained by subtracting the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment from the document time of the first document as the delayed display duration of the first document.

5. The method of claim 4, wherein, for a first document of the at least one document, displaying the document information of the first document according to the delayed display duration of the first document comprises:

if the delayed display duration of the first document is less than or equal to a preset threshold, displaying the document information of the first document immediately;

and if the delayed display duration of the first document is greater than the preset threshold, displaying the document information of the first document after the delayed display duration elapses.

6. A live broadcast method is applied to a server, and comprises the following steps:

configuring document time for at least one document information from a publishing device;

determining at least one document according to each document information and the document time corresponding to each document information;

sending the at least one document to a live device, wherein each document comprises document information and document time;

and configuring a slicing time for the acquired at least one audio-video segment, wherein the slicing time is the time at which the server slices the audio-video stream from the publishing device to obtain the audio-video segment.

7. The method according to claim 6, wherein after configuring the slicing time for the acquired at least one audio video segment, further comprising:

and acquiring list information corresponding to a first audio-video segment, wherein the list information comprises an identifier of the at least one audio-video segment, the at least one audio-video segment comprises the first audio-video segment, and the first audio-video segment is the first audio-video segment displayed by the live broadcast device.

8. The method according to claim 7, after acquiring list information corresponding to a first audio/video segment displayed by a live device, further comprising:

receiving request information sent by the live broadcast device, wherein the request information is used to request the slicing time of the first audio-video segment, and the slicing time is the time at which the server slices the audio-video stream from the publishing device to obtain the first audio-video segment.

9. A live broadcasting apparatus, applied to a live broadcast device, the apparatus comprising: a receiving module, a downloading module, a display module and a determining module, wherein,

the receiving module is used for receiving at least one document sent by the server, and each document comprises document information and document time;

the downloading module is used for downloading at least one audio-video segment from the server;

the display module is used for playing the at least one audio-video segment, wherein the first of the at least one audio-video segment is a first audio-video segment;

the determining module is used for determining the time delay display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, wherein the slicing time is the time when the server slices the audio-video stream to obtain the first audio-video segment;

the display module is further used for displaying the document information of each document according to the delay display duration of each document.

10. A live broadcast device, applied to a server, the device comprising: a configuration module, a determination module, and a sending module, wherein,

the configuration module is used for configuring document time for at least one piece of document information from the publishing equipment;

the determining module is used for determining at least one document according to each document information and the document time corresponding to each document information;

the sending module is used for sending the at least one document to the live broadcast equipment, wherein each document comprises document information and document time;

the configuration module is further configured to configure a slicing time for the acquired at least one audio/video segment, where the slicing time is a time when the server slices the video stream from the distribution device to obtain the audio/video segment.

Technical Field

The embodiment of the invention relates to the field of audio and video live broadcast, in particular to a live broadcast method and device.

Background

A live broadcast system generally enables a user at the live broadcast viewing end to view the audio-video information stream and the document information stream published in real time by the user of a publishing device.

Currently, a live system includes a distribution device, a server, and a live device. The server receives a document information stream from the publishing device and provides the document information stream to the live broadcast device, and the live broadcast device receives the document information stream and displays the document information stream; the server receives the audio and video information stream from the publishing device, processes the audio and video information stream to obtain an audio and video information segment, provides the audio and video information segment for the live broadcast device, and decodes, renders and displays the audio and video information segment after the live broadcast device receives the audio and video information segment.

In this process, after receiving the audio-video segments and the document information stream, the live broadcast device displays them independently, so the audio-video segments and the document information stream cannot be displayed in synchronization.

Disclosure of Invention

The application provides a live broadcast method and device, which are used for synchronously displaying audio and video frequency segments and document information so as to improve the user experience of live broadcast equipment.

In a first aspect, the present application provides a live broadcast method, which is applied to a live broadcast device, and the method includes:

receiving at least one document sent by a server, wherein each document comprises document information and document time;

downloading at least one audio-video segment from the server and playing the at least one audio-video segment, wherein the first of the at least one audio-video segment is a first audio-video segment;

determining the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, wherein the slicing time is the time when the server slices the audio-video stream to obtain the first audio-video segment;

and displaying the document information of each document according to the delayed display time length of each document.

In one possible embodiment, downloading at least one audio-video segment from the server comprises:

in response to a user's instruction to join the live broadcast, acquiring list information corresponding to the first audio-video segment from the server, wherein the list information comprises an identifier of the at least one audio-video segment;

and downloading the at least one audio-video segment from the server according to the identifier of the at least one audio-video segment in the list information.

In a possible implementation manner, determining a time length of delayed display of each document according to a document time corresponding to each document, a slicing time of the first audio-video segment, and a current playing progress of the at least one audio-video segment includes:

sending request information to the server, wherein the request information is used for requesting the slicing time of the first audio-video segment;

acquiring the current playing progress of the at least one audio-video segment;

and determining the time length of delayed display of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment.

In one possible implementation, for a first document of the at least one document, determining the delayed display duration of the first document according to the document time of the first document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment includes:

determining the difference obtained by subtracting the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment from the document time of the first document as the delayed display duration of the first document.

In one possible implementation, for a first document of the at least one document, displaying the document information of the first document according to the delayed display duration of the first document includes:

if the delayed display duration of the first document is less than or equal to a preset threshold, displaying the document information of the first document immediately;

and if the delayed display duration of the first document is greater than the preset threshold, displaying the document information of the first document after the delayed display duration elapses.

In a second aspect, the present application provides a live broadcast method, which is applied to a server, and the method includes:

configuring document time for at least one document information from a publishing device;

determining at least one document according to each document information and the document time corresponding to each document information;

sending the at least one document to a live device, wherein each document comprises document information and document time;

and configuring a slicing time for the acquired at least one audio-video segment, wherein the slicing time is the time at which the server slices the audio-video stream from the publishing device to obtain the audio-video segment.

In a possible implementation, after configuring the slice time for the acquired at least one audio-video segment, the method further includes:

and acquiring list information corresponding to a first audio-video segment, wherein the list information comprises an identifier of the at least one audio-video segment, the at least one audio-video segment comprises the first audio-video segment, and the first audio-video segment is the first audio-video segment displayed by the live broadcast device.

In a possible implementation manner, after obtaining list information corresponding to a first audio/video segment displayed by a live device, the method further includes:

receiving request information sent by the live broadcast device, wherein the request information is used to request the slicing time of the first audio-video segment, and the slicing time is the time at which the server slices the audio-video stream from the publishing device to obtain the first audio-video segment.

In a third aspect, the present application provides a live broadcasting apparatus, which is applied to live broadcasting equipment, and the apparatus includes: a receiving module, a downloading module, a display module and a determining module, wherein,

the receiving module is used for receiving at least one document sent by the server, and each document comprises document information and document time;

the downloading module is used for downloading at least one audio-video segment from the server;

the display module is used for playing the at least one audio-video segment, wherein the first of the at least one audio-video segment is a first audio-video segment;

the determining module is used for determining the time delay display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, wherein the slicing time is the time when the server slices the audio-video stream to obtain the first audio-video segment;

the display module is further used for displaying the document information of each document according to the delay display duration of each document.

In a possible implementation manner, the downloading module is specifically configured to:

in response to a user's instruction to join the live broadcast, acquire list information corresponding to the first audio-video segment from the server, wherein the list information comprises an identifier of the at least one audio-video segment;

and downloading the at least one audio-video segment from the server according to the identifier of the at least one audio-video segment in the list information.

In a possible implementation, the determining module is specifically configured to:

sending request information to the server, wherein the request information is used for requesting the slicing time of the first audio-video segment;

acquiring the current playing progress of the at least one audio-video segment;

and determining the time length of delayed display of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment.

In one possible implementation, for a first document of the at least one document, the determining module is specifically configured to:

determine the difference obtained by subtracting the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment from the document time of the first document as the delayed display duration of the first document.

In one possible implementation, for a first document of the at least one document, the display module is specifically configured to:

display the document information of the first document if the delayed display duration of the first document is less than or equal to a preset threshold;

and display the document information of the first document after the delayed display duration elapses if the delayed display duration of the first document is greater than the preset threshold.

In a fourth aspect, the present application provides a live broadcast apparatus, which is applied to a server, the apparatus includes: a configuration module, a determination module, and a sending module, wherein,

the configuration module is used for configuring document time for at least one piece of document information from the publishing equipment;

the determining module is used for determining at least one document according to each document information and the document time corresponding to each document information;

the sending module is used for sending the at least one document to the live broadcast equipment, wherein each document comprises document information and document time;

the configuration module is further configured to configure a slicing time for the acquired at least one audio-video segment, where the slicing time is the time at which the server slices the audio-video stream from the publishing device to obtain the audio-video segment.

In one possible embodiment, the apparatus further comprises: an acquisition module, wherein,

the acquisition module is used for acquiring list information corresponding to a first audio-video segment after the slicing time is configured for the acquired at least one audio-video segment, wherein the list information comprises an identifier of the at least one audio-video segment, the at least one audio-video segment comprises the first audio-video segment, and the first audio-video segment is the first audio-video segment displayed by the live broadcast device.

In a possible embodiment, the apparatus further comprises: a receiving module, wherein,

the receiving module is used for receiving request information sent by the live broadcast device after the list information corresponding to the first audio-video segment displayed by the live broadcast device is acquired, wherein the request information is used to request the slicing time of the first audio-video segment, and the slicing time is the time at which the server slices the audio-video stream from the publishing device to obtain the first audio-video segment.

In a fifth aspect, the present application provides a live broadcast apparatus, including: at least one processor and memory;

the memory stores computer-executable instructions;

the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform a live method as in any one of the first aspects.

In a sixth aspect, the present application provides a live broadcast apparatus, including: at least one processor and memory;

the memory stores computer-executable instructions;

the at least one processor executes the computer-executable instructions stored in the memory to cause the at least one processor to perform a live broadcast method as in any one of the second aspects.

In a seventh aspect, the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and when the processor executes the computer-executable instructions, the live broadcast method according to any one of the first aspect is implemented.

In an eighth aspect, the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the processor executes the computer-executable instructions, the live broadcast method according to any one of the second aspect is implemented.

The application provides a live broadcast method and device, wherein the live broadcast method comprises the following steps: the server configures a document time for at least one piece of document information from the publishing device; the server determines at least one document according to each piece of document information and the document time corresponding to each piece of document information, wherein each document comprises the document information and the document time; the server sends the at least one document to the live broadcast device; the live broadcast device downloads at least one audio-video segment from the server and plays the at least one audio-video segment, wherein the first of the at least one audio-video segment is a first audio-video segment; the live broadcast device determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, wherein the slicing time is the time at which the server sliced the audio-video stream to obtain the first audio-video segment; and the live broadcast device displays the document information of each document according to the delayed display duration of each document.
In this method, while the live broadcast device plays the at least one audio-video segment, it determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, and displays the document information of each document according to that duration. The audio-video segments and the document information can therefore be displayed in synchronization even when the live broadcast device's network download is unstable or the server's slicing operation introduces a delay error, which improves the user experience of the live broadcast device.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.

Fig. 1 is a schematic view of an application scenario of a live broadcast method provided in the present application;

fig. 2 is a first schematic flow chart of a live broadcasting method provided in the present application;

fig. 3 is a schematic flow chart diagram ii of a live broadcasting method provided in the present application;

FIG. 4 is a schematic diagram of an information transfer process provided herein;

fig. 5 is a schematic structural diagram of a live broadcasting device provided in the present application;

fig. 6 is a first schematic structural diagram of another live broadcasting device provided in the present application;

fig. 7 is a schematic structural diagram of another live broadcasting device provided in the present application;

fig. 8 is a schematic hardware structure diagram of a live broadcast apparatus provided in the present application;

fig. 9 is a schematic hardware structure diagram of another live device provided in the present application.

Detailed Description

To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the present application will be clearly and completely described below with reference to the drawings in the present application, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Fig. 1 is a schematic view of an application scenario of a live broadcast method provided by the present application. As shown in fig. 1, includes: a live device 101, a server 102 and a publishing device 103. The live broadcast device 101 may interact with the server 102 through a wired network or a wireless network, and the distribution device 103 may interact with the server 102 through a wired network or a wireless network.

Optionally, the wired network may include coaxial cable, twisted pair, optical fiber, and the like, and the wireless network may be a 2G, 3G, 4G or 5G network, a Wireless Fidelity (Wi-Fi) network, and the like. The live broadcast device 101 and the publishing device 103 may be computers, tablet computers, mobile phones (or "cellular" phones), and the like; they may also be portable, pocket-sized, hand-held or computer-embedded mobile devices, which are not limited herein.

The publishing device 103 may send the document stream and the audio-video stream to the server 102. After the live broadcast device 101 joins the live broadcast, the server 102 sends at least one document to the live broadcast device 101 according to the document stream, and the live broadcast device 101 downloads at least one audio-video segment (determined according to the audio-video stream) from the server 102. After receiving the at least one audio-video segment, the live broadcast device 101 plays each audio-video segment; after receiving the at least one document, it determines the delayed display duration of each document and displays the document information in each document according to that duration. In this way, the live broadcast device 101 plays the at least one audio-video segment while displaying the document information in each document according to its delayed display duration, which avoids the problem that the audio-video segments and the document information cannot be displayed in synchronization.

The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.

Fig. 2 is a first flowchart of a live broadcasting method provided by the present application. As shown in fig. 2, the live broadcasting method provided by this embodiment includes:

s201: the server configures a document time for at least one document information from the publishing device.

The document information is information included in a document stream that is sent to the server by the publishing device through a Transmission Control Protocol (TCP). The document stream is a local document stream in the publishing device.

The document time is the current time of the clock acquired when the server receives the document information. Specifically, a live broadcast cloud platform is arranged in the server, and the live broadcast cloud platform is provided with the clock.

In practice, each time the server receives a piece of document information, it acquires the current time of the clock and takes that time as the document time corresponding to the document information, thereby configuring the document time for the document information.
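As an illustration of S201 (a minimal sketch; `stamp_document_info` and its dictionary layout are hypothetical helpers, not part of the patented system), the server-side pairing of each piece of document information with the cloud platform clock's current time could look like:

```python
import time

def stamp_document_info(doc_info, clock=time.time):
    """Pair an incoming piece of document information with the current
    time of the server clock, yielding a document that carries both the
    document information and its document time."""
    return {"info": doc_info, "doc_time": clock()}

# A fixed clock makes the stamping deterministic for illustration.
doc = stamp_document_info("slide 3", clock=lambda: 100.0)
```

In a real server the `clock` argument would be the live broadcast cloud platform's clock; the default `time.time` merely stands in for it.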

S202: the server determines at least one document according to each piece of document information and the document time corresponding to each piece of document information, wherein each document comprises the document information and the document time.

Specifically, the server may perform combination processing on the document information and the document time to obtain the document.

For example, the server performs the combination process on document information (i) and document time (i) to obtain document (i); see fig. 4. Note that document (i) itself is not shown in fig. 4; only document information (i) and document time (i) are shown.

Wherein i is the identifier of the document, and optionally, i may be 0, 1, 2, 3, etc. arranged in sequence.

S203: the server sends at least one document to the live device.

After the live device joins the live, the server may send at least one document to the live device.

S204: the live broadcast device downloads at least one audio-video segment from the server and plays the at least one audio-video segment, wherein the first of the at least one audio-video segment is the first audio-video segment.

After the live device joins the live, the live device may download at least one audio video segment from the server. And further, decoding, rendering and displaying the at least one audio-video segment, thereby realizing the playing of the at least one audio-video segment.
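The download step in S204 can be sketched as follows (all names hypothetical; `fetch` stands in for whatever HTTP layer the live broadcast device uses, which the patent does not prescribe). The device first fetches the list information, then downloads every segment whose identifier appears in it, matching the flow of claim 2:

```python
def download_segments(fetch, list_id):
    """Fetch the list information for the first audio-video segment,
    then download every segment whose identifier it contains."""
    identifiers = fetch(list_id)          # identifiers of the segments
    return [fetch(seg_id) for seg_id in identifiers]

# A dict stands in for the server: one list entry plus two segments.
server = {
    "list": ["seg-0", "seg-1"],
    "seg-0": b"audio+video 0",
    "seg-1": b"audio+video 1",
}
segments = download_segments(server.__getitem__, "list")
```

The downloaded segments would then be decoded, rendered and displayed as described above.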

S205: the live broadcast equipment determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of at least one audio-video segment, wherein the slicing time is the time when the server slices the audio-video stream to obtain the first audio-video segment.

Specifically, the delayed display duration of each document can be determined by (formula 1) as follows:

ti = Ti - t0 - tx    (formula 1)

where ti is the delayed display duration of the i-th document, Ti is the document time corresponding to the i-th document, t0 is the slicing time of the first audio-video segment, and tx is the current playing progress of the at least one audio-video segment.
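(Formula 1) can be expressed directly in code; the following Python sketch assumes all three quantities are given in the same unit (e.g., seconds):

```python
def delayed_display_duration(document_time, slice_time, play_progress):
    """ti = Ti - t0 - tx (formula 1).
    document_time: Ti, the document time of the i-th document.
    slice_time:    t0, the slicing time of the first audio-video segment.
    play_progress: tx, the current playing progress of the segments."""
    return document_time - slice_time - play_progress

# A document timestamped 12 s after the first slice, while 10 s of the
# audio-video has already played, is delayed by a further 2 s.
print(delayed_display_duration(112.0, 100.0, 10.0))  # -> 2.0
```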

Specifically, the audio-video stream is obtained by the publishing device collecting and encoding a local audio-video source and sending the result to the server via the Real-time Transport Protocol (RTP) or the Real-Time Messaging Protocol (RTMP).

In practical application, the server performs slicing operation on the received audio and video stream to obtain at least one audio and video segment, and records the slicing time corresponding to each audio and video segment.

For example, a first slicing operation is performed on the audio/video stream to obtain a first audio/video segment, a slicing time corresponding to the beginning of the first slicing operation is recorded, and the slicing time is determined as the slicing time of the first audio/video segment; and performing second slicing operation on the audio and video stream to obtain a second audio and video segment, recording the slicing time corresponding to the beginning of the second slicing operation, and determining the slicing time as the slicing time of the second audio and video segment.
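The slicing bookkeeping described above can be sketched as follows (a hypothetical helper, for illustration only; the real server slices a live RTP/RTMP stream rather than a list of in-memory chunks):

```python
import time

def slice_stream(av_chunks):
    """Slice an audio-video stream and record, for each slicing
    operation, the slicing time at which the operation begins."""
    segments = []
    for identifier, chunk in enumerate(av_chunks):
        slice_time = time.time()  # recorded at the start of the operation
        segments.append({"id": identifier, "data": chunk,
                         "slice_time": slice_time})
    return segments

segments = slice_stream([b"segment-1", b"segment-2"])
```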

S206: and displaying the document information of each document according to the delayed display time length of each document.

Specifically, when the delayed display duration of a document is greater than 0, the document information of the document is displayed after waiting for that duration. For example, if the delayed display duration of a document is 5 milliseconds, the document information of the document is displayed after a delay of 5 milliseconds.

Compared with the prior art: if network downloading is unstable, or the server's slicing operation introduces a delay error, then after the live broadcast device receives the audio-video segments and the document information stream, displaying the two independently means they cannot be displayed synchronously, which degrades the user experience of the live broadcast device. In the method of the present application, while the live broadcast device plays the at least one audio-video segment, it determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, and displays the document information of each document according to that delayed display duration. In this way, the audio-video segments and the document information can be displayed synchronously even when network downloading is unstable or the slicing operation introduces a delay error, thereby improving the user experience of the live broadcast device.

The live broadcast method provided by this embodiment includes: the server configures a document time for at least one piece of document information from the publishing device; the server determines at least one document according to each piece of document information and the document time corresponding to it, wherein each document includes the document information and the document time; the server sends the at least one document to the live broadcast device; the live broadcast device downloads at least one audio-video segment from the server and plays it, wherein the first of the at least one audio-video segment is the first audio-video segment; the live broadcast device determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, the slicing time being the time when the server slices the audio-video stream to obtain the first audio-video segment; and the live broadcast device displays the document information of each document according to the delayed display duration of each document.
According to the method, while the live broadcast device plays the at least one audio-video segment, it determines the delayed display duration of each document and displays the document information of each document accordingly, so that the audio-video segments and the document information can be displayed synchronously even when network downloading is unstable or the server's slicing operation introduces a delay error, thereby improving the user experience of the live broadcast device.

Based on the embodiment of fig. 2, the live broadcast method provided in the present application is described in detail below with reference to fig. 3.

Fig. 3 is a flowchart illustrating a second live broadcasting method provided by the present application. As shown in fig. 3, the live broadcasting method provided by this embodiment includes:

S301: the server configures a document time for at least one piece of document information from the publishing device.

S302: the server determines at least one document according to each piece of document information and the document time corresponding to each piece of document information, wherein each document comprises the document information and the document time.

S303: the server sends at least one document to the live device.

Specifically, the execution methods of S301 to S303 are the same as those of S201 to S203, and the execution processes of S301 to S303 are not described herein again.

S304: in response to a user's instruction to join the live broadcast, the live broadcast device acquires, from the server, list information corresponding to the first audio-video segment, wherein the list information includes an identifier of at least one audio-video segment, the at least one audio-video segment includes the first audio-video segment, and the first audio-video segment is the first audio-video segment to be displayed by the live broadcast device.

Specifically, a join-live control is displayed in the browser client provided in the live broadcast device, and the user can input the join-live instruction through this control.

After receiving the join-live instruction, the live broadcast device may respond to it and acquire, from the server, the list information (i.e., an m3u8 index file) corresponding to the first audio-video segment.

Alternatively, the identity of at least one audio video segment may be, for example, 0, 1, 2, 3, etc. in succession.

In practical application, the list information is generated in the process that the server slices the audio and video stream, and the list information can be updated along with the slicing processing of the server on the audio and video stream.

S305: and the live broadcast equipment downloads at least one audio-video segment from the server according to the identifier of at least one audio-video segment in the list information.

S306: the live broadcast device sends request information to the server, the request information being used to request the slicing time of the first audio-video segment, where the slicing time is the time when the server slices the audio-video stream from the publishing device to obtain the first audio-video segment.

Specifically, after the list information is acquired, it may be assigned to an HTML5 video tag in the browser client of the live broadcast device; the video tag monitors the ontimeupdate event, and when the ontimeupdate event is triggered for the first time, the request information is sent to the server.

S307: the server sends the slicing time of the first audio-video segment to the live broadcast equipment.

S308: the live broadcast equipment acquires the current playing progress of at least one audio-video segment.

Specifically, the video tag monitors the ontimeupdate event; each time the ontimeupdate event is triggered (the first time and each subsequent time), the browser client records the current playing progress (currentPlayTime), so that the live broadcast device can obtain the current playing progress directly.

S309: and the live broadcast equipment determines the time delay display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of at least one audio-video segment.

Specifically, the execution process of S309 is the same as the execution process of S205, and the execution method of S309 is not described herein again.

S310: and the live broadcast equipment displays the document information of each document according to the delayed display duration of each document.

In one possible implementation, for a first document of the at least one document, displaying the document information of the first document according to the delayed display duration of the first document includes:

if the delayed display duration of the first document is less than or equal to a preset threshold, displaying the document information of the first document; and

if the delayed display duration of the first document is greater than the preset threshold, displaying the document information of the first document after that delayed display duration.

Here, the preset threshold is 0. For example, if the delayed display duration of the first document is 1 millisecond, the document information of the first document is displayed after 1 millisecond; if the delayed display duration of the first document is -1 millisecond, the document information of the first document is displayed immediately.
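With the preset threshold of 0, the display decision reduces to the following sketch (the function name is illustrative, not from the patent):

```python
def display_wait(delayed_duration_ms, threshold_ms=0):
    """Return how long (in milliseconds) to wait before displaying a
    document's information: a duration at or below the preset threshold
    means the document is already due and is displayed immediately."""
    if delayed_duration_ms <= threshold_ms:
        return 0                    # display immediately
    return delayed_duration_ms      # display after the delayed duration

print(display_wait(1))   # -> 1 (display after 1 ms)
print(display_wait(-1))  # -> 0 (display immediately)
```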

The live broadcast method provided by this embodiment includes: the server configures a document time for at least one piece of document information from the publishing device; the server determines at least one document according to each piece of document information and the document time corresponding to it, wherein each document includes the document information and the document time; the server sends the at least one document to the live broadcast device; in response to a user's instruction to join the live broadcast, the live broadcast device acquires, from the server, list information corresponding to the first audio-video segment, wherein the list information includes an identifier of at least one audio-video segment, the at least one audio-video segment includes the first audio-video segment, and the first audio-video segment is the first audio-video segment to be displayed by the live broadcast device; the live broadcast device downloads the at least one audio-video segment from the server according to the identifier in the list information; the live broadcast device sends request information to the server to request the slicing time of the first audio-video segment, the slicing time being the time when the server slices the audio-video stream from the publishing device to obtain the first audio-video segment; the server sends the slicing time of the first audio-video segment to the live broadcast device; the live broadcast device acquires the current playing progress of the at least one audio-video segment; the live broadcast device determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment; and the live broadcast device displays the document information of each document according to the delayed display duration of each document.

According to the method, while the live broadcast device plays the at least one audio-video segment, it determines the delayed display duration of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment, and displays the document information of each document accordingly. The audio-video segments and the document information can therefore be displayed synchronously even when network downloading is unstable or the server's slicing operation introduces a delay error, thereby improving the user experience of the live broadcast device.

Fig. 4 is a schematic diagram of an information transmission process provided in the present application. As shown in fig. 4, the process involves: a publishing device 401, a server 402 and a live broadcast device 403. The publishing device 401 is provided with a live broadcast client, which has a local document stream and a local audio-video source; the server 402 is provided with a live broadcast cloud platform, which includes a clock; and the live broadcast device 403 is provided with a browser client.

The live broadcast client sends the local document stream (including at least one piece of document information) to the live broadcast cloud platform via the Transmission Control Protocol (TCP). The live broadcast client also collects and encodes the local audio-video source and then sends the audio-video stream to the live broadcast cloud platform via RTP or RTMP.

The live broadcast cloud platform configures a corresponding document time for each piece of document information according to the clock, determines at least one document from each piece of document information and its corresponding document time, and sends the at least one document to the live broadcast device that has joined the live broadcast cloud platform. The live broadcast cloud platform also slices the audio-video stream, configures a slicing time for each of the at least one audio-video segment obtained by slicing, and stores each audio-video segment together with its corresponding slicing time.

Each audio-video segment has an identifier (e.g., m and n in fig. 4, where m and n may be consecutive integers greater than or equal to 0, such as 0, 1, 2, 3, 4), and the slicing time corresponding to each audio-video segment carries the same identifier (e.g., m and n in fig. 4). Likewise, the document information and the document time in each document share the same identifier (e.g., i and j in fig. 4, where i and j may be consecutive integers greater than or equal to 0, such as 0, 1, 2, 3, 4).

After the browser client joins the live broadcast cloud platform, it acquires the slicing time of the first audio-video segment and the list information corresponding to the first audio-video segment, downloads at least one audio-video segment (e.g., the audio-video segment (2), the audio-video segment (3), and so on) from the server according to the list information, and plays the at least one audio-video segment. After receiving the at least one document, it determines the delayed display duration of each document, caches each document, and displays the document information of each document (e.g., the document information (1), the document information (2), and so on) according to the delayed display duration of each document.
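The browser-client flow of fig. 4 can be summarized in one sketch; `MockServer` is a stand-in for the live broadcast cloud platform and returns fixed values purely for illustration:

```python
class MockServer:
    """Illustrative stand-in for the live broadcast cloud platform."""
    def slice_time_of_first_segment(self):
        return 100.0                 # slicing time t0
    def list_info(self):
        return [2, 3]                # identifiers, e.g. segment (2), (3)
    def download_segment(self, seg_id):
        pass                         # then decode, render and display
    def current_play_progress(self):
        return 10.0                  # playing progress tx

def schedule_documents(server, documents):
    """Download and play the listed segments, then compute the delayed
    display duration of each cached document by (formula 1)."""
    t0 = server.slice_time_of_first_segment()
    for seg_id in server.list_info():
        server.download_segment(seg_id)
    tx = server.current_play_progress()
    return [doc["time"] - t0 - tx for doc in documents]

delays = schedule_documents(MockServer(), [{"time": 112.0}, {"time": 108.0}])
print(delays)  # -> [2.0, -2.0]
```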

Note that, as illustrated in fig. 4, the first audio-video segment is taken as the audio-video segment (2), and the slice time of the first audio-video segment is taken as the slice time (2).

Fig. 5 is a schematic structural diagram of a live broadcast apparatus provided in the present application. The live broadcast apparatus 10 is applied to a live broadcast device; optionally, the live broadcast apparatus 10 may be implemented by software and/or hardware. As shown in fig. 5, the live broadcast apparatus 10 includes: a receiving module 11, a downloading module 12, a display module 13 and a determining module 14, wherein,

the receiving module 11 is configured to receive at least one document sent by a server, where each document includes document information and document time;

the downloading module 12 is configured to download at least one audio-video segment from the server;

the display module 13 is configured to play the at least one audio-video segment, where the first of the at least one audio-video segment is the first audio-video segment;

the determining module 14 is configured to determine a time delay display duration of each document according to a document time corresponding to each document, a slicing time of the first audio-video segment, and a current playing progress of the at least one audio-video segment, where the slicing time is a time when the server slices an audio-video stream to obtain the first audio-video segment;

the display module 13 is further configured to display the document information of each document according to the time delay display duration of each document.

The live broadcast apparatus 10 provided in the present application may execute the technical solution that the live broadcast device may execute in the foregoing method embodiment, and the implementation principle and the beneficial effect thereof are similar, and details are not described here again.

In a possible implementation, the downloading module 12 is specifically configured to:

responding to a live broadcast adding instruction of a user, and acquiring list information corresponding to the first audio and video segment from the server, wherein the list information comprises an identifier of the at least one audio and video segment;

and downloading the at least one audio-video segment from the server according to the identifier of the at least one audio-video segment in the list information.

In a possible implementation, the determining module 14 is specifically configured to:

sending request information to the server, wherein the request information is used for requesting the slicing time of the first audio-video segment;

acquiring the current playing progress of the at least one audio-video segment;

and determining the time length of delayed display of each document according to the document time corresponding to each document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment.

In one possible implementation, for a first document of the at least one document; the determining module 14 is specifically configured to:

and determining the difference value of the document time of the first document, the slicing time of the first audio-video segment and the current playing progress of the at least one audio-video segment as the time length of the delayed display of the first document.

In one possible implementation, for a first document of the at least one document; the display module 13 is further specifically configured to:

if the time length of the delayed display of the first document is less than or equal to a preset threshold value, displaying the document information of the first document;

and if the time delay display duration of the first document is greater than the preset threshold, displaying the document information of the first document after the time delay display duration.

The live broadcast apparatus 10 provided in the present application may execute the technical solution that the live broadcast device may execute in the foregoing method embodiment, and the implementation principle and the beneficial effect thereof are similar, and details are not described here again.

Fig. 6 is a first schematic structural diagram of another live broadcast apparatus provided in the present application. The live broadcast apparatus 20 is applied to a server; optionally, the live broadcast apparatus 20 may be implemented by software and/or hardware. As shown in fig. 6, the live broadcast apparatus 20 includes: a configuration module 21, a determining module 22 and a sending module 23, wherein,

the configuration module 21 is configured to configure a document time for at least one document information from the publishing device;

the determining module 22 is configured to determine at least one document according to each piece of document information and a document time corresponding to each piece of document information;

the sending module 23 is configured to send the at least one document to a live device, where each document includes document information and a document time;

the configuration module 21 is further configured to configure a slicing time for each of the acquired at least one audio-video segment, where the slicing time is the time when the server slices the audio-video stream from the publishing device to obtain the audio-video segment.

The live broadcast apparatus 20 provided by the present application may execute the technical solution that the server may execute in the foregoing method embodiments, and the implementation principle and the beneficial effect thereof are similar, which are not described herein again.

Fig. 7 is a schematic structural diagram of another live broadcast apparatus provided in the present application. On the basis of fig. 6, as shown in fig. 7, the live broadcast apparatus 20 further includes: an obtaining module 24, wherein,

the obtaining module 24 is configured to, after a slicing time is configured for the acquired at least one audio-video segment, obtain list information corresponding to a first audio-video segment, where the list information includes an identifier of the at least one audio-video segment, the at least one audio-video segment includes the first audio-video segment, and the first audio-video segment is the first audio-video segment to be displayed by the live broadcast device.

The live broadcast apparatus 20 further includes: a receiving module 25, wherein,

the receiving module 25 is configured to receive request information sent by the live broadcast device after the list information corresponding to the first audio-video segment displayed by the live broadcast device is acquired, where the request information is used to request the slicing time of the first audio-video segment, and the slicing time is the time when the server slices the audio-video stream from the publishing device to obtain the first audio-video segment.

The live broadcast apparatus 20 provided by the present application may execute the technical solution that the server may execute in the foregoing method embodiments, and the implementation principle and the beneficial effect thereof are similar, which are not described herein again.

Fig. 8 is a schematic hardware structure diagram of a live broadcast apparatus provided in the present application. The live broadcast apparatus 30 is provided in a live broadcast device. As shown in fig. 8, the live broadcast apparatus 30 includes: at least one processor 31 and a memory 32. The processor 31 and the memory 32 are connected by a bus 33.

In particular implementations, at least one processor 31 executes computer-executable instructions stored by memory 32 to cause at least one processor 31 to perform a live method as may be performed by a live device as described above.

For a specific implementation process of the processor 31, reference may be made to a live broadcast method that can be executed by a live broadcast device in the foregoing method embodiment, and an implementation principle and a technical effect of the method are similar, which are not described herein again.

Fig. 9 is a schematic hardware structure diagram of another live broadcast apparatus provided in the present application. The live broadcast apparatus 40 is provided in a server. As shown in fig. 9, the live broadcast apparatus 40 includes: at least one processor 41 and a memory 42. The processor 41 and the memory 42 are connected by a bus 43.

In particular implementations, at least one processor 41 executes computer-executable instructions stored by memory 42 to cause at least one processor 41 to perform a live method as described above.

For a specific implementation process of the processor 41, reference may be made to a live broadcast method that can be executed by the server in the foregoing method embodiment, and the implementation principle and the technical effect are similar, which is not described herein again.

In the embodiments shown in figs. 8-9, it should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be executed directly by a hardware processor, or by a combination of hardware and software modules within the processor.

The memory may comprise high speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.

The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.

The present application also provides a computer-readable storage medium, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the live broadcast method executable by the live broadcast device is implemented.

The present application also provides another computer-readable storage medium, in which computer-executable instructions are stored, and when the processor executes the computer-executable instructions, the live broadcast method executable by the server is implemented.

The computer-readable storage medium may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.

An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.

The described division into units is merely a logical division; in actual implementation, there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in another form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on a plurality of block chain units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or portions thereof that substantially or partially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a block chain device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
