Video interaction method, device and system, live broadcast backpack and interaction receiver

Document No.: 107627    Publication date: 2021-10-15

Reading note: this technology, "Video interaction method, device and system, live broadcast backpack and interaction receiver", was designed and created by Liu Chao and Shi Lei on 2021-07-08. Its main content includes: after the first live broadcast backpack acquires a first video to be transmitted, it encodes the video to obtain a first encoded video to be transmitted and sends it to the interactive receiver or a second live broadcast backpack; the interactive receiver or the second live broadcast backpack likewise decodes it with an H.264/H.265 decoder to obtain the corresponding video to be played; the interactive receiver then sends the video to a preset display device through an SDI or HDMI interface for playback, while the second live broadcast backpack plays it on its own display. On this basis, the H.264/H.265 decoder effectively reduces the size of the video bitstream, improves the clarity and efficiency of interactive video transmission, and enables 4K ultra-high-definition interactive video transmission.

1. A video interaction method, applied to a first live broadcast backpack, characterized by comprising the following steps:

acquiring a first video to be transmitted, an interactive video sent by an interactive receiver and a second coded video to be transmitted sent by a second live broadcast backpack;

sending the interactive video or the second to-be-transmitted coded video to a preset H.264/H.265 decoder for decoding to obtain a to-be-played video, and playing the to-be-played video at the same time;

and coding the first to-be-transmitted video according to a preset coding mode to obtain a first to-be-transmitted coded video, and sending the first to-be-transmitted coded video to the interactive receiver or the second live broadcast backpack so that the interactive receiver or the second live broadcast backpack decodes and plays the first to-be-transmitted coded video.

2. The video interaction method of claim 1, wherein sending the first to-be-transmitted encoded video to the interactive receiver or the second live backpack comprises:

and sending the first to-be-transmitted coded video to a preset 5G module through a preset high-speed connector, so that the preset 5G module sends the first to-be-transmitted coded video to the interactive receiver or the second live broadcast backpack through a 5G channel.

3. A video interaction method, applied to an interactive receiver, characterized by comprising the following steps:

acquiring a first to-be-transmitted coded video sent by a first live broadcast backpack;

acquiring an interactive video, encoding the interactive video according to a preset encoding mode to obtain an encoded interactive video, and sending the encoded interactive video to the first live broadcast backpack for decoding and playing;

sending the first to-be-transmitted coded video to a preset H.264/H.265 decoder for decoding to obtain a to-be-played video;

and sending the video to be played to a preset display device through an SDI (Serial digital interface) or HDMI (high-definition multimedia interface) so that the preset display device plays the video to be played.

4. A video interaction device, comprising:

the first acquisition module is used for acquiring a video to be transmitted and an interactive video sent by an interactive receiver or a second live broadcast backpack;

the first decoding module is used for sending the interactive video to a preset H.264/H.265 decoder for decoding to obtain a video to be played and playing the video to be played simultaneously;

the first encoding module is used for encoding the video to be transmitted according to a preset encoding mode to obtain the encoded video to be transmitted, and sending the encoded video to be transmitted to the interactive receiver or the second live broadcast backpack so that the encoded video to be transmitted is decoded and played by the interactive receiver or the second live broadcast backpack.

5. A video interaction device, comprising:

the second acquisition module is used for acquiring the coded video to be transmitted sent by the first live broadcast backpack or the second live broadcast backpack;

the acquisition module is used for acquiring an interactive video, encoding the interactive video according to a preset encoding mode to obtain an encoded interactive video, and sending the encoded interactive video to the first live broadcast backpack or the second live broadcast backpack for decoding and playing;

the second decoding module is used for sending the coded video to be transmitted to a preset H.264/H.265 decoder for decoding to obtain a video to be played;

and the sending module is used for sending the video to be played to preset display equipment through an SDI (Serial digital interface) or HDMI (high-definition multimedia interface) so that the preset display equipment plays the video to be played.

6. A live broadcast backpack, comprising: a processor, a memory, a 4G/5G module, a display module and a coding and decoding module;

the memory is used for storing a video interaction program so that the processor executes the video interaction method according to claim 1 or 2 when the video interaction program is called;

the 4G/5G module is used for sending the first to-be-transmitted coded video to the interactive receiver, the display module is used for playing the to-be-played video, and the coding and decoding module is used for decoding the interactive video and coding the to-be-transmitted video according to a preset coding mode.

7. The live broadcast backpack of claim 6, wherein the processor is a BCM7252.

8. An interactive receiver, comprising: at least one first processor and a first memory;

the first processor is configured to execute the video interaction program stored in the first memory to implement the video interaction method of claim 3.

9. A video interaction system, comprising a plurality of live broadcast backpacks as claimed in claim 6 or 7 and an interaction receiver as claimed in claim 8.

10. A storage medium storing one or more programs executable by one or more processors to implement the video interaction method of any one of claims 1 to 3.

Technical Field

The present application relates to the field of video interaction technologies, and in particular, to a video interaction method, apparatus, system, live broadcast backpack, and interaction receiver.

Background

With the continuous development of network technology, the production and broadcasting of television programs has advanced rapidly, and with the continuous progress of display technology, the definition of television programs has also improved. For live interactive programs, the currently adopted schemes are generally software schemes, FPGA schemes or surveillance-grade encoding chip schemes; all of these are limited by their hardware and cannot meet the 4K ultra-high-definition requirements of live interactive programs.

Disclosure of Invention

To overcome at least some of the problems in the related art, the present application provides a video interaction method, apparatus, system, live backpack and interaction receiver.

According to a first aspect of the present application, there is provided a video interaction method applied to a first live backpack, the method comprising:

acquiring a video to be transmitted and an interactive video sent by an interactive receiver or a second live broadcast backpack;

sending the interactive video to a preset H.264/H.265 decoder for decoding to obtain a video to be played, and playing the video to be played simultaneously;

and coding the video to be transmitted according to a preset coding mode to obtain a coded video to be transmitted, and sending the coded video to be transmitted to the interactive receiver or a second live broadcast backpack so that the coded video to be transmitted is decoded and played by the interactive receiver or the second live broadcast backpack.

Optionally, sending the encoded video to be transmitted to the interactive receiver includes:

and sending the coded video to be transmitted to a preset 5G module through a preset high-speed connector, so that the preset 5G module sends the coded video to be transmitted to the interactive receiver or a second live broadcast backpack through a 5G channel.

According to a second aspect of the present application, there is provided a video interaction method, including:

acquiring coded video to be transmitted sent by a first live broadcast backpack or a second live broadcast backpack;

acquiring an interactive video, encoding the interactive video according to a preset encoding mode to obtain an encoded interactive video, and sending the encoded interactive video to the first live broadcast backpack or the second live broadcast backpack for decoding and playing;

sending the coded video to be transmitted to a preset H.264/H.265 decoder for decoding to obtain a video to be played;

and sending the video to be played to a preset display device through an SDI (Serial digital interface) or HDMI (high-definition multimedia interface) so that the preset display device plays the video to be played.

According to a third aspect of the present application, there is provided a video interaction device, comprising:

the first acquisition module is used for acquiring a video to be transmitted and an interactive video sent by an interactive receiver or a second live broadcast backpack;

the first decoding module is used for sending the interactive video to a preset H.264/H.265 decoder for decoding to obtain a video to be played and playing the video to be played simultaneously;

the first encoding module is used for encoding the video to be transmitted according to a preset encoding mode to obtain the encoded video to be transmitted, and sending the encoded video to be transmitted to the interactive receiver or the second live broadcast backpack so that the encoded video to be transmitted is decoded and played by the interactive receiver or the second live broadcast backpack.

According to a fourth aspect of the present application, there is provided a video interaction device, comprising:

the second acquisition module is used for acquiring the coded video to be transmitted sent by the first live broadcast backpack or the second live broadcast backpack;

the acquisition module is used for acquiring an interactive video, encoding the interactive video according to a preset encoding mode to obtain an encoded interactive video, and sending the encoded interactive video to the first live broadcast backpack or the second live broadcast backpack for decoding and playing;

the second decoding module is used for sending the coded video to be transmitted to a preset H.264/H.265 decoder for decoding to obtain a video to be played;

and the sending module is used for sending the video to be played to preset display equipment through an SDI (Serial digital interface) or HDMI (high-definition multimedia interface) so that the preset display equipment plays the video to be played.

According to a fifth aspect of the present application, there is provided a live broadcast backpack, comprising: a processor, a memory, a 4G/5G module, a display module and a coding and decoding module;

the memory is used for storing a video interaction program, so that the processor executes the video interaction method according to the first aspect of the application when the video interaction program is called;

the 4G/5G module is used for sending the coded video to be transmitted to the interactive receiver, the display module is used for playing the video to be played, and the coding and decoding module is used for decoding the interactive video and coding the video to be transmitted according to a preset coding mode.

Optionally, the processor is a BCM7252.

According to a sixth aspect of the present application, there is provided an interactive receiver comprising: at least one first processor and a first memory;

the first processor is configured to execute the video interaction program stored in the first memory to implement the video interaction method according to the second aspect of the present application.

According to a seventh aspect of the present application, there is provided a video interaction system comprising a plurality of live backpacks as defined in the fifth aspect of the present application and an interaction receiver as defined in the sixth aspect of the present application.

According to an eighth aspect of the present application, there is provided a storage medium storing one or more programs executable by one or more processors to implement the video interaction method of the first or second aspect of the present application.

In the scheme of the application, after the interactive video is collected by the interactive receiver or the second live broadcast backpack, the interactive video is coded and transmitted to the first live broadcast backpack, and the first live broadcast backpack decodes the interactive video by using a preset H.264/H.265 decoder to obtain the video to be played and further play the video to be played; in addition, after the first live broadcast backpack acquires the video to be transmitted, the video is encoded to obtain the encoded video to be transmitted, the encoded video to be transmitted is sent to the interactive receiver or the second live broadcast backpack, the interactive receiver or the second live broadcast backpack also decodes the encoded video by using an H.264/H.265 decoder to obtain the corresponding video to be played, and then the video is sent to the preset display device through an SDI (serial digital interface) or an HDMI (high-definition multimedia interface) interface to be played.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.

Fig. 1 is a flowchart illustrating a video interaction method according to an embodiment of the present application.

Fig. 2 is a flowchart illustrating a video interaction method according to another embodiment of the present application.

Fig. 3 is a flowchart illustrating a video interaction method according to another embodiment of the present application.

Fig. 4 is a schematic structural diagram of a video interaction apparatus according to another embodiment of the present application.

Fig. 5 is a schematic structural diagram of a video interaction apparatus according to another embodiment of the present application.

Fig. 6 is a schematic structural diagram of a first live broadcast backpack according to another embodiment of the present application.

Fig. 7 is a pin connection diagram of BCM7252 provided by the present application.

Fig. 8 is a schematic structural diagram of a 4G/5G module provided in the present application.

Fig. 9 is a schematic structural diagram of an interactive receiver according to another embodiment of the present application.

Fig. 10 is a schematic structural diagram of a video interaction system according to another embodiment of the present application.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus consistent with certain aspects of the present application, as detailed in the appended claims.

Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a video interaction method according to an embodiment of the present disclosure.

In this embodiment, taking the interaction between the first live broadcast backpack, the second live broadcast backpack, and the interaction receiver as an example for description, as shown in fig. 1, the video interaction method provided in this embodiment may include:

step S101, an interactive receiver collects an interactive video.

In this embodiment, the interactive receiver refers to a receiver with a stream aggregation (convergence) service. In general, the output of the first live broadcast backpack is a multilink IP video stream; after the multilink IP video stream passes through the interactive receiver, the interactive receiver forwards the IP video stream of each link to the corresponding service decoding card in a decoder, so as to ensure that each link is correctly decoded and the original video bitstream is obtained.
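Purely as an illustration of the link-to-decoding-card forwarding described above, the sketch below demultiplexes incoming packets by a link identifier. The 2-byte link-ID framing, the port number and the decoder_cards mapping are assumptions invented for this example; the application does not specify a packet format.

    import socket
    import struct

    # Hypothetical mapping from link ID to the service decoding card serving it.
    decoder_cards = {0: "card-0", 1: "card-1", 2: "card-2"}

    def forward_to_card(card, payload):
        # Stand-in for handing the payload to the corresponding decoding card.
        print(f"{card}: {len(payload)} bytes")

    def demux_multilink(listen_port=9000):
        # Receive the multilink IP video stream and forward each packet to the
        # decoding card responsible for its link, so every link is decoded
        # independently and the original video bitstream can be recovered.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", listen_port))
        while True:
            packet, _ = sock.recvfrom(65535)
            if len(packet) < 2:
                continue
            link_id = struct.unpack("!H", packet[:2])[0]   # assumed framing
            card = decoder_cards.get(link_id)
            if card is not None:
                forward_to_card(card, packet[2:])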

The interactive video in this step is video returned by a video acquisition device for the purpose of interaction, and it typically arises during a live broadcast: when the party watching the live broadcast needs to interact with the party performing the live broadcast, or when another party needs to rebroadcast the program, the watching party or the rebroadcasting party returns the video used for interaction to the interactive receiver, and that returned video is the interactive video in this step.

It should be noted that the acquisition in this step refers to obtaining the interactive video. In a typical case, a video shooting device connected to the interactive receiver captures the interactive video and then sends it to the interactive receiver; the video shooting device may be, but is not limited to, a mobile phone, a camera, and the like.

Step S102, the first live broadcast backpack acquires a first video to be transmitted.

It should be noted that, the first live broadcast backpack refers to an equipment terminal carried by a live broadcast party, and the first live broadcast backpack can acquire a video to be transmitted acquired by a camera device, upload the video to be transmitted to an interactive receiver through a 5G/4G module, and receive the interactive video returned by the interactive receiver.

In this step, the video to be transmitted acquired by the first live broadcast backpack may be a video captured by a camera device connected to the first live broadcast backpack, and the camera device may be, but is not limited to, a mobile intelligent terminal (a mobile phone, a tablet computer, etc.) having a camera function, a professional camera, etc.

In this embodiment, the first live broadcast backpack can accept a plurality of video formats; for example, an SDI capture card or an HDMI capture card may be used to connect the matching camera devices.

Step S103, the second live broadcast backpack acquires a second video to be transmitted.

In this step, the process of acquiring the second video to be transmitted by the second live broadcast backpack is similar to or the same as the process of acquiring the first video to be transmitted by the first live broadcast backpack, and for the detailed description of this step, reference may be made to the content in step S102, which is not described herein again.

Step S104, the interactive receiver encodes the interactive video according to a preset encoding mode to obtain the encoded interactive video.

In this embodiment, the preset encoding mode is the encoding mode corresponding to the decoding mode used in this embodiment, since encoding and decoding generally correspond to each other. In a specific example, encoding and decoding may adopt a high-performance multi-channel H.265/HEVC and/or H.264/AVC codec: a single card can provide 4K60P encoding and decoding capability, or the simultaneous encoding and decoding of four channels of 1080P60 high-definition signals. Based on this encoding and decoding mode, the encoding and decoding speed can be increased and the time delay greatly reduced.
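To make the encoding step concrete, the sketch below drives an external encoder from Python. The use of the ffmpeg command-line tool and the libx265/libx264 encoders is an assumption made for illustration only; the application specifies H.265/HEVC and/or H.264/AVC encoding but does not name a particular implementation, and the bitrate, resolution and frame rate shown are example values.

    import subprocess

    def encode_for_transmission(src, dst, codec="libx265", bitrate="20M"):
        # Sketch of the encode step: "libx265" gives H.265/HEVC,
        # "libx264" gives H.264/AVC. All numeric settings are examples,
        # not figures taken from the application.
        cmd = [
            "ffmpeg", "-y",
            "-i", src,              # captured video from the camera device
            "-c:v", codec,          # H.265/HEVC or H.264/AVC encoder
            "-b:v", bitrate,        # target bitrate for the wireless uplink
            "-s", "3840x2160",      # 4K frame size
            "-r", "60",             # 60 frames per second (4K60P)
            "-an",                  # video only in this sketch
            dst,
        ]
        subprocess.run(cmd, check=True)

    # Hypothetical usage: encode_for_transmission("capture.mp4", "to_transmit.ts")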

Meanwhile, together with this encoding and decoding mode, multi-camera-position PTP time synchronization can be used to effectively control the timing error between camera positions, making the time reference across positions more accurate, so that the videos can later be gathered and aligned more accurately.
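As a rough illustration of how PTP-disciplined timestamps can be used when the videos from several camera positions are gathered, the sketch below groups frames whose timestamps agree within one 60 fps frame period. The dictionary layout, the "cam0" reference stream and the tolerance value are assumptions made for the example, not details taken from the application.

    import bisect

    def align_frames(streams, tolerance_s=1 / 60):
        # streams: dict mapping a camera-position name to a list of
        # (timestamp_seconds, frame) pairs sorted by timestamp, where the
        # timestamps come from a PTP-synchronized clock.
        reference = streams["cam0"]          # assumed reference position
        groups = []
        for ts, frame in reference:
            group = {"cam0": frame}
            for name, frames in streams.items():
                if name == "cam0":
                    continue
                times = [t for t, _ in frames]
                i = bisect.bisect_left(times, ts)
                nearby = [j for j in (i - 1, i) if 0 <= j < len(frames)]
                if nearby:
                    j = min(nearby, key=lambda k: abs(times[k] - ts))
                    if abs(times[j] - ts) <= tolerance_s:
                        group[name] = frames[j][1]
            groups.append((ts, group))
        return groups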

Step S105, the first live broadcast backpack encodes the first video to be transmitted according to a preset coding mode to obtain the first encoded video to be transmitted.

In this embodiment, the preset encoding mode is the encoding mode corresponding to the decoding mode used in this embodiment, since encoding and decoding generally correspond to each other. In a specific example, encoding and decoding may adopt a high-performance multi-channel H.265/HEVC and/or H.264/AVC codec: a single card can provide 4K60P encoding and decoding capability, or the simultaneous encoding and decoding of four channels of 1080P60 high-definition signals. Based on this encoding and decoding mode, the encoding and decoding speed can be increased and the time delay greatly reduced.

Step S106, the second live broadcast backpack encodes the second video to be transmitted according to a preset coding mode to obtain the second coded video to be transmitted.

Step S107, the interactive receiver sends the encoded interactive video to the first live broadcast backpack.

When the interactive receiver sends the interactive video, the interactive receiver may rely on a variety of communication modes, for example, data transmission may be performed through WiFi, or data transmission may be performed through a 5G/4G module.

Step S108, the first live broadcast backpack receives the encoded interactive video.

In this step, the first live broadcast backpack may receive the encoded interactive video through wireless communication; for example, data may be transmitted through WiFi or through a 5G/4G module. The specific structure of the 5G/4G module or the WiFi module relied upon will be explained in a later embodiment and is not described here.

Step S109, the first live broadcast backpack sends the first encoded video to be transmitted to the interactive receiver or the second live broadcast backpack.

When the first live broadcast backpack transmits the first encoded video to be transmitted, it may rely on various communication modes; for example, data transmission may be performed through WiFi or through a 5G/4G module.

Step S110, the interactive receiver or the second live broadcast backpack receives the first to-be-transmitted encoded video.

In this step, the interactive receiver or the second live broadcast backpack may receive the first to-be-transmitted encoded video through wireless communication; for example, data may be transmitted through WiFi or through a 5G/4G module. The specific structure of the 5G/4G module or the WiFi module relied upon will be explained in a later embodiment and is not described here.

Step S111, the second live broadcast backpack sends the second coded video to be transmitted to the first live broadcast backpack.

Step S112, the first live broadcast backpack receives the second coded video to be transmitted.

Step S113, the first live broadcast backpack sends the interactive video or the second to-be-transmitted coded video to a preset H.264/H.265 decoder for decoding to obtain a to-be-played video, and plays the to-be-played video at the same time.

It should be noted that H.264 and H.265 are different highly compressed digital video codec standards. H.264 has a very high data compression ratio: at the same image quality, its compression ratio is more than 2 times that of MPEG-2 and 1.5-2 times that of MPEG-4. In a specific example, if the original file size is 88 GB, compressing it to 3.5 GB with the MPEG-2 standard gives a compression ratio of 25:1, while compressing it with the H.264 standard yields a file of only 879 MB, so the compression ratio of H.264 reaches a remarkable 102:1 (from 88 GB down to 879 MB). Using this compression coding standard, the size of the bitstream can be effectively reduced, so the same network channel can achieve a considerably higher transmission rate and efficiency during live broadcast.
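The quoted ratios can be checked with a few lines of arithmetic; the snippet below simply reproduces the figures above (88 GB taken as 88 x 1024 MB) and is not part of the method itself.

    original_mb = 88 * 1024     # 88 GB expressed in MB
    mpeg2_mb = 3.5 * 1024       # 3.5 GB after MPEG-2 compression
    h264_mb = 879               # 879 MB after H.264 compression

    print(original_mb / mpeg2_mb)   # ~25.1  -> roughly the 25:1 quoted above
    print(original_mb / h264_mb)    # ~102.5 -> roughly the 102:1 quoted above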

Compared with H.264, H.265 provides more tools for reducing the bit rate, with coding units ranging from a minimum of 8x8 to a maximum of 64x64. Regions carrying little information (where the color changes little, such as the red part of a vehicle body or the gray part of the ground) are divided into larger blocks that need fewer code words, while regions with more detail (such as the tires) are divided into correspondingly smaller blocks that need more code words; the image is therefore coded with emphasis where it matters, the overall bit rate is reduced, and coding efficiency is improved accordingly. Meanwhile, the intra prediction mode of H.265 supports 33 directions (H.264 supports only 8 directions), and H.265 provides better motion compensation and motion vector prediction methods. When the H.265 standard is adopted, the video bitstream is smaller than with H.264, with the exact figures depending on the quality-control method used. Data from subjective visual testing show that, at a bit rate 51-74% lower, H.265-encoded video can be of similar or even better quality than H.264-encoded video, which is better than would be expected from the peak signal-to-noise ratio (PSNR) alone.

In this step, an H.264/H.265 decoder is used, which supports both the H.264 and H.265 standards; for different image capturing devices (some of which, for example, do not support H.265), a matching standard can be adaptively selected for encoding and decoding.
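A minimal sketch of the adaptive selection mentioned above, choosing H.265 where the device supports it and falling back to H.264 otherwise; the capability set passed in is a hypothetical stand-in for whatever capability report a real capture or display device provides.

    def choose_codec(device_codecs):
        # device_codecs: set of codec names the connected device reports,
        # e.g. {"h264"} or {"h264", "h265"} (hypothetical representation).
        if "h265" in device_codecs:
            return "h265"        # prefer H.265 for its lower bitrate
        if "h264" in device_codecs:
            return "h264"        # fall back to H.264
        raise ValueError("device supports neither H.264 nor H.265")

    # choose_codec({"h264"})          -> "h264"
    # choose_codec({"h264", "h265"})  -> "h265"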

Step S114, the second live broadcast backpack sends the first to-be-transmitted coded video to a preset H.264/H.265 decoder for decoding to obtain a to-be-played video, and plays the to-be-played video at the same time.

Step S115, the interactive receiver sends the first to-be-transmitted coded video to a preset H.264/H.265 decoder for decoding to obtain the video to be played.

Similar to step S113, the H.264 and H.265 standards involved in this step, and the H.264/H.265 decoder that supports both of them, are the same as those described in step S113, and the related description is not repeated here.

Step S116, the interactive receiver sends the video to be played to a preset display device through an SDI or HDMI interface, so that the preset display device plays the video to be played.

The preset display device in step S116 refers to a device having a display function, such as a display, a television, a touch screen, or an intelligent mobile terminal. For the first live broadcast backpack, the preset display device may be a touch screen fixed on the surface of the first live broadcast backpack. In addition, the SDI or HDMI interface refers to a common video interface, and the specific transmission principle may refer to the prior art and is not described here again.

In addition, it should be noted that the second live broadcast backpack can interact not only with the first live broadcast backpack but also with the interactive receiver; that interaction is similar or even identical to the interaction between the first live broadcast backpack and the interactive receiver and is not repeated here.

In this embodiment, after acquiring the interactive video, the interactive receiver encodes it and transmits the encoded interactive video to the first live broadcast backpack, and the first live broadcast backpack decodes the interactive video with a preset H.264/H.265 decoder to obtain the video to be played and then plays it; in addition, after the first live broadcast backpack acquires the video to be transmitted, it encodes the video to obtain the encoded video to be transmitted and sends it to the interactive receiver, the interactive receiver likewise decodes it with an H.264/H.265 decoder to obtain the corresponding video to be played, and the video is then sent to the preset display device through an SDI (serial digital interface) or HDMI (high-definition multimedia interface) interface to be played.

Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a video interaction method according to another embodiment of the present application.

In this embodiment, the method as executed by the first live broadcast backpack is taken as an example for description. As shown in fig. 2, the video interaction method provided in this embodiment may include:

step S201, acquiring a first video to be transmitted, an interactive video sent by an interactive receiver and a second coded video to be transmitted sent by a second live broadcast backpack;

step S202, sending the interactive video or the second to-be-transmitted coded video to a preset H.264/H.265 decoder for decoding to obtain a to-be-played video, and playing the to-be-played video at the same time;

step S203, encoding the first to-be-transmitted video according to a preset encoding mode to obtain a first to-be-transmitted encoded video, and sending the first to-be-transmitted encoded video to the interactive receiver or the second live broadcast backpack, so that the interactive receiver or the second live broadcast backpack decodes and plays the first to-be-transmitted encoded video.

In this embodiment, after acquiring the interactive video, the interactive receiver encodes it and transmits the encoded interactive video to the first live broadcast backpack, and the first live broadcast backpack decodes the interactive video with a preset H.264/H.265 decoder to obtain the video to be played and then plays it; in addition, after the first live broadcast backpack acquires the video to be transmitted, it encodes the video to obtain the encoded video to be transmitted and sends it to the interactive receiver, the interactive receiver likewise decodes it with an H.264/H.265 decoder to obtain the corresponding video to be played, and the video is then sent to the preset display device through an SDI (serial digital interface) or HDMI (high-definition multimedia interface) interface to be played.
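As a purely illustrative, end-to-end sketch of steps S201 to S203 (not the claimed implementation), the functions below send an already-encoded file to a peer over a length-framed TCP connection and, in the other direction, receive an encoded video, decode it and play it. The TCP transport, the 4-byte length prefix, the file names and the use of ffplay as a stand-in for the backpack's own display are all assumptions made for this example.

    import socket
    import struct
    import subprocess

    def send_encoded(path, host, port):
        # Send an already-encoded video (e.g. the first encoded video to be
        # transmitted) to the interactive receiver or the second backpack.
        with open(path, "rb") as f, socket.create_connection((host, port)) as s:
            data = f.read()
            s.sendall(struct.pack("!I", len(data)) + data)

    def receive_decode_play(listen_port, out_path="received.ts"):
        # Accept one encoded video, store it, then decode and play it;
        # ffplay stands in for the backpack's built-in display module.
        with socket.socket() as srv:
            srv.bind(("0.0.0.0", listen_port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                size = struct.unpack("!I", conn.recv(4))[0]   # simplified framing
                buf = b""
                while len(buf) < size:
                    chunk = conn.recv(65536)
                    if not chunk:
                        break
                    buf += chunk
            with open(out_path, "wb") as f:
                f.write(buf)
        subprocess.run(["ffplay", "-autoexit", out_path], check=True)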

Further, in this embodiment, sending the first to-be-transmitted encoded video to the interactive receiver in step S203 may include: sending the first to-be-transmitted encoded video to a preset 5G module through a preset high-speed connector, so that the preset 5G module sends it to the interactive receiver through a 5G channel.

Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a video interaction method according to another embodiment of the present application.

In this embodiment, the execution of the interactive receiver side is taken as an example for explanation, as shown in fig. 3, the video interaction method provided in this embodiment may include:

s301, acquiring a coded video to be transmitted, which is sent by a first direct-broadcasting backpack;

step S302, acquiring an interactive video, encoding the interactive video according to a preset encoding mode to obtain an encoded interactive video, and sending the encoded interactive video to the first live broadcast backpack for decoding and playing;

step S303, sending the coded video to be transmitted to a preset H.264/H.265 decoder for decoding to obtain a video to be played;

step S304, sending the video to be played to a preset display device through an SDI or HDMI interface so that the preset display device can play the video to be played.

In this embodiment, after acquiring the interactive video, the interactive receiver encodes it and transmits the encoded interactive video to the first live broadcast backpack, and the first live broadcast backpack decodes the interactive video with a preset H.264/H.265 decoder to obtain the video to be played and then plays it; in addition, after the first live broadcast backpack acquires the video to be transmitted, it encodes the video to obtain the encoded video to be transmitted and sends it to the interactive receiver, the interactive receiver likewise decodes it with an H.264/H.265 decoder to obtain the corresponding video to be played, and the video is then sent to the preset display device through an SDI (serial digital interface) or HDMI (high-definition multimedia interface) interface to be played.
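A minimal sketch of steps S303 and S304 on the receiver side, assuming the ffmpeg CLI for decoding: the received stream is decoded to raw yuv420p frames and each frame is handed to the display path. output_frame() is a hypothetical placeholder; in practice the SDI or HDMI output is driven by dedicated interface hardware rather than application code.

    import subprocess

    def output_frame(frame):
        # Placeholder for handing a raw frame to the SDI/HDMI output hardware.
        pass

    def decode_to_display(encoded_path, width=3840, height=2160):
        frame_bytes = width * height * 3 // 2    # size of one yuv420p frame
        proc = subprocess.Popen(
            ["ffmpeg", "-i", encoded_path,
             "-f", "rawvideo", "-pix_fmt", "yuv420p", "-"],
            stdout=subprocess.PIPE)
        while True:
            frame = proc.stdout.read(frame_bytes)
            if len(frame) < frame_bytes:
                break
            output_frame(frame)
        proc.wait()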

Referring to fig. 4, fig. 4 is a schematic structural diagram of a video interaction device according to another embodiment of the present application.

As shown in fig. 4, the video interaction apparatus provided in this embodiment may include:

a first obtaining module 401, configured to obtain a video to be transmitted and an interactive video sent by an interactive receiver;

a first decoding module 402, configured to send the interactive video to a preset h.264/h.265 decoder for decoding, so as to obtain a video to be played, and play the video to be played at the same time;

the first encoding module 403 is configured to encode the video to be transmitted according to a preset encoding manner to obtain an encoded video to be transmitted, and send the encoded video to be transmitted to the interactive receiver, so that the interactive receiver decodes the encoded video to be transmitted and then plays the decoded video.

Referring to fig. 5, fig. 5 is a schematic structural diagram of a video interaction device according to another embodiment of the present application.

As shown in fig. 5, the video interaction apparatus provided in this embodiment may include:

a second obtaining module 501, configured to obtain a to-be-transmitted encoded video sent by the first live broadcast backpack;

the acquisition module 502 is configured to acquire an interactive video, encode the interactive video according to a preset encoding mode to obtain an encoded interactive video, and send the encoded interactive video to the first live backpack for decoding and playing;

the second decoding module 503 is configured to send the encoded video to be transmitted to a preset h.264/h.265 decoder for decoding, so as to obtain a video to be played;

the sending module 504 is configured to send the video to be played to a preset display device through an SDI or HDMI interface, so that the preset display device plays the video to be played.

Referring to fig. 6, fig. 6 is a schematic structural diagram of a first live broadcast backpack according to another embodiment of the present application.

As shown in fig. 6, the first live broadcast backpack provided by this embodiment may include: a processor 601, a memory 602, a 4G/5G module 603, a display module 604 and a codec module 605;

the memory is used for storing a video interaction program, so that, when the video interaction program is called, the processor executes the video interaction method performed on the first live broadcast backpack side as provided in the foregoing embodiment;

the 4G/5G module is used for sending the first to-be-transmitted coded video to the interactive receiver or the second live broadcast backpack, the display module is used for playing the to-be-played video in step S113, and the codec module is used for decoding the interactive video or the second to-be-transmitted coded video and for encoding the to-be-transmitted video according to a preset coding mode. The processor may be, but is not limited to, a BCM7252.

In this embodiment, in order to ensure stable and reliable transmission of the video bitstream, the 4G/5G module may adopt a configuration of 3 5G modules plus 3 4G modules. See fig. 7 and fig. 8 for details: fig. 7 is a pin connection schematic diagram of the BCM7252 provided by the present application, and fig. 8 is a structural schematic diagram of the 4G/5G module provided by the present application.

As shown in fig. 7, the 4G/5G module is connected to a USB3.0 pin of the chip, and the acquired video to be transmitted is encoded by the encoding board into a bitstream, which is input to the BCM7252. In addition, fig. 7 also shows the external adapter or battery section (including power management/charging, the internal battery, TPS65251, RT7299, TPS51116 and ADP2303), DDR, 16 GB eMMC, mSATA, HDMI 2.0, RJ45 gigabit Ethernet, USB3.0, MIC audio, AIC3106, a 5-inch LCD touch screen and TW8836; all of these are conventional technologies in the art or standard chip models, and their specific principles may refer to the prior art and are not described here again.

As shown in fig. 8, the 4G/5G module is connected through a high-speed connector to a WiFi module on the one hand and to a USB hub (USB HUB) on the other hand; the USB hub can connect 3 5G modules plus 3 4G modules, so as to guarantee reliable and stable transmission of the video bitstream. Specifically, the 4G modules are LTE modules with a Mini PCIe interface; the 5G modules follow the mainstream technical scheme, and Quectel or Huawei 5G modules may be selected.
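To illustrate how the encoded bitstream might be spread across the several 4G/5G links, the sketch below rotates fixed-size chunks over one UDP socket per modem and prefixes each chunk with a sequence number so the receiver can reorder them. Binding a socket to each modem's local address as the way of selecting a link, the chunk size and the 4-byte sequence header are all assumptions for this example; the application only states that multiple modules are used to keep transmission stable and reliable.

    import socket
    from itertools import cycle

    def send_bonded(data, local_addrs, remote, chunk_size=1200):
        # data: the encoded video bitstream (bytes)
        # local_addrs: one local IP address per 4G/5G modem
        # remote: (host, port) of the interactive receiver
        socks = []
        for addr in local_addrs:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind((addr, 0))        # route this socket via that modem's link
            socks.append(s)
        links = cycle(socks)
        for seq, off in enumerate(range(0, len(data), chunk_size)):
            payload = seq.to_bytes(4, "big") + data[off:off + chunk_size]
            next(links).sendto(payload, remote)
        for s in socks:
            s.close()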

Referring to fig. 9, fig. 9 is a schematic structural diagram of an interactive receiver according to another embodiment of the present application.

As shown in fig. 9, the computing device 900 provided in this embodiment may include: at least one processor 901, memory 902, at least one network interface 903, and other user interfaces 904. Various components in computing device 900 are coupled together by a bus system 905. It is understood that the bus system 905 is used to enable communications among the components. The bus system 905 includes a power bus, a control bus, and a status signal bus, in addition to a data bus. For clarity of illustration, however, the various buses are labeled in fig. 9 as bus system 905.

The user interface 904 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).

It is to be understood that the memory 902 in embodiments of the present invention may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 902 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.

In some embodiments, memory 902 stores the following elements, executable units or data structures, or a subset thereof, or an expanded set thereof: an operating system 9021 and a second application 9022.

The operating system 9021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is configured to implement various basic services and process hardware-based tasks. The second application 9022 includes various second applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. A program implementing the method of an embodiment of the present invention may be included in the second application 9022.

In the embodiment of the present invention, by calling a program or an instruction stored in the memory 902, specifically, a program or an instruction stored in the second application 9022, the processor 901 is configured to execute the method steps provided by the method embodiments, for example, including:

acquiring a coded video to be transmitted sent by a first live broadcast backpack;

acquiring an interactive video, encoding the interactive video according to a preset encoding mode to obtain an encoded interactive video, and sending the encoded interactive video to the first live broadcast backpack for decoding and playing;

sending the coded video to be transmitted to a preset H.264/H.265 decoder for decoding to obtain a video to be played;

and sending the video to be played to a preset display device through an SDI (Serial digital interface) or HDMI (high-definition multimedia interface) so that the preset display device plays the video to be played.

The method disclosed in the above embodiments of the present invention may be applied to the processor 901, or implemented by the processor 901. The processor 901 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or by instructions in the form of software in the processor 901. The processor 901 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software elements in the decoding processor. The software elements may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 902, and the processor 901 reads the information in the memory 902 and completes the steps of the above method in combination with its hardware.

It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions of the present Application, or a combination thereof.

For a software implementation, the techniques herein may be implemented by means of units performing the functions herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.

Referring to fig. 10, fig. 10 is a schematic structural diagram of a video interaction system according to another embodiment of the present application.

As shown in fig. 10, the video interaction system provided in this embodiment may include a first live broadcast backpack 1001 and an interactive receiver 1002 as provided in the above embodiments.

In addition, the present application also provides a storage medium, where one or more programs are stored, and the one or more programs are executable by one or more processors to implement the video interaction method provided by the foregoing embodiment. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.

It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.

Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.

It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.

It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware that is related to instructions of a program, and the program may be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.

In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.

In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
