Self-adaptive panoramic video transmission method and system based on reinforcement learning

Document No.: 833898    Publication date: 2021-03-30

Reading note: This technology, "Self-adaptive panoramic video transmission method and system based on reinforcement learning" (一种基于强化学习的自适应全景视频传输方法及系统), was designed and created by 潘宇轩, 胡欣珏, 刘雨 and 张琳 on 2020-11-24. Its main content is as follows: the invention discloses an adaptive panoramic video transmission method and system based on reinforcement learning. Before watching a panoramic video, the user downloads a deep learning model. During actual viewing, the client collects the user's viewing information, predicts the content the user will watch in the future, and uses a reinforcement learning model to flexibly and adaptively select, according to the available bandwidth, the quality level of the tiles containing the predicted content, preferentially obtaining higher quality for content in higher demand. After the corresponding encoded files are transmitted from the remote video server to the client where the user is located, the client decodes them and plays them to the user, satisfying the user's viewing needs.

1. An adaptive panoramic video transmission method based on reinforcement learning is characterized by comprising the following steps:

the remote video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality;

the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions;

the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector;

downloading and locally running a deep learning model by a client, collecting panoramic video watching information from user equipment, obtaining a tile range contained in a future watching view field region of a user through viewpoint prediction, and requesting and obtaining corresponding video content from a remote server according to a quality selection result of the deep learning model;

and after the client side obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user.

2. The reinforcement learning-based adaptive panoramic video transmission method according to claim 1, wherein the remote video server analyzes video content, obtains a motion speed and a depth of field of the video content by using an optical flow method, and obtains a numerical result of video quality, and specifically comprises:

the remote video server reads the stored video file and carries out segmentation processing according to the time sequence;

the remote video server calls an optical flow neural network, inputs the segmented video into the optical flow neural network and outputs an optical flow detection result;

and the remote video server obtains the relative motion speed and the relative depth of field of the video content according to the optical flow detection result, obtains the lowest perceivable loss through quantification, calculates the current video quality, and subtracts the lowest perceivable loss to obtain the actual numerical result of the video quality.

3. The reinforcement learning-based adaptive panoramic video transmission method according to claim 2, wherein the remote video server performs tile segmentation according to video quality, spatially divides a video into a predetermined number of tiles with different sizes by using a two-dimensional clustering algorithm, and performs coding of different quality levels on the tiles to obtain coding results of multiple quality versions, specifically comprising:

the remote video server divides the video segment into rectangular basic tiles with preset area sizes in space;

calculating the quality increase efficiency of each rectangular basic tile;

clustering the rectangular basic tiles, and synthesizing the adjacent rectangular basic tiles into a specified number of tiles which are finally required to be transmitted to users;

HEVC coding is carried out on the tiles obtained through segmentation, so that each tile is allocated with a plurality of coding qualities of different levels;

and recording the information of the encoding result by using the attribute file.

4. The reinforcement learning-based adaptive panoramic video transmission method according to claim 3, wherein the quality increase efficiency is obtained by dividing a difference between video quality values of basic tiles in the highest level coding and the lowest level coding by a difference between quantization parameters corresponding to the highest level coding and the lowest level coding.

5. The reinforcement learning-based adaptive panoramic video transmission method according to claim 3, wherein the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the quality versions, and uses the deep learning model as a tile adaptive quality selector, which specifically includes:

the remote video server initializes a deep learning network, assigns a random initial value to the network state and sets a feedback function to be 0;

reading bandwidth data in the network bandwidth data set and updating the network state;

the network decision device determines a decision for selecting the quality level of the tiles according to the network state and confirms which version of the tile coding file is transmitted to the user;

the network decision device calculates the value of the feedback function according to the selection of the network decision device and the network state;

judging whether the feedback function is larger than 0; if not, indicating that the decision is not suitable for the current network state, continuing to read the bandwidth data in the network bandwidth data set and updating the network state; and if so, indicating that the decision yields a gain for the current state;

if the feedback function is larger than 0, updating the network parameters of the network decision maker according to the feedback function value and decision back propagation, and adding one to the number of training rounds;

judging whether the number of training rounds reaches a preset value, if not, continuing to train the network, continuing to read the bandwidth data in the network bandwidth data set, and updating the network state;

and if the number of training rounds reaches a preset value, finishing training, and storing the parameters of the network decision maker as the model parameters for reinforcement learning.

6. The reinforcement learning-based adaptive panoramic video transmission method according to claim 5, wherein the client downloads and locally runs a deep learning model, collects panoramic video viewing information from the user equipment, obtains a tile range included in a future viewing field area of the user through viewpoint prediction, and requests and obtains corresponding video content from a remote server according to a selection result of the deep learning model on quality, and specifically comprises:

the client is connected to a remote video server through a network, downloads the deep learning model and preloads the deep learning model at the client;

the client collects user watching information;

according to the user viewing information, using a linear regression algorithm to predict the user viewpoint, and calculating the index and the lowest perceivable loss of the tile where the next viewing position of the user is located;

inputting a prediction result, current bandwidth data and the like into a deep learning model, and operating the model to determine the quality level of the tile data to be acquired;

and the client sends a corresponding data acquisition request in an HTTP format to the remote video server according to the quality level prediction result.

7. The reinforcement learning-based adaptive panoramic video transmission method according to claim 6, wherein after obtaining the video content, the client decodes, tiles and renders the video content, and presents the picture to the user, and specifically comprises:

the remote video server receives the response of the client and then sends video data, and the client receives a data packet;

the client sends the video coding file contained in the data packet into a system cache;

taking out the attribute file in the data packet, and analyzing the tile position information contained in the attribute file;

decoding the encoded data according to the information obtained in the attribute file to obtain an original tile data file;

and splicing the tiles to form a complete video picture, sending the video picture into viewing equipment for rendering, and presenting the panoramic video content for viewing to a user.

8. The reinforcement learning-based adaptive panoramic video transmission method according to claim 3, wherein the information of the encoding result comprises: quantization parameter, resolution and tile position in the whole picture.

9. The reinforcement learning-based adaptive panoramic video transmission method according to claim 6, wherein the user viewing information comprises: viewing location, network bandwidth, and video quality.

10. An adaptive panoramic video transmission system based on reinforcement learning, which is characterized by comprising:

the system comprises a remote video server and a client, wherein the client establishes a connection with the remote video server through a network;

the remote video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality;

the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions;

the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector;

downloading and locally running a deep learning model by a client, collecting panoramic video watching information from user equipment, obtaining a tile range contained in a future watching view field region of a user through viewpoint prediction, and requesting and obtaining corresponding video content from a remote server according to a quality selection result of the deep learning model;

and after the client side obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user.

Technical Field

The invention relates to the technical field of panoramic videos, in particular to a self-adaptive panoramic video transmission method and system based on reinforcement learning.

Background

A panoramic video is a video shot in all directions through 360 degrees with a 3D camera; when watching it, the user can freely adjust the viewing direction up, down, left and right.

A user is very sensitive to loss of video quality when watching a panoramic video, so the quality of experience (QoE) must be guaranteed while the user watches the panoramic video. In addition, transmitting a panoramic video itself requires a large amount of bandwidth; the viewing demand of users places a heavy burden on network resources and overloads the network. Under these circumstances, how to design a panoramic video transmission scheme that saves network resources while improving the user's QoE is an urgent problem to be solved.

Prior art schemes mainly improve transmission starting from objective video quality metrics and rarely consider how the user actually perceives the panoramic video, even though this subjective perception creates redundancy in the panoramic video content. By measuring the characteristics of the user's subjective perception, the panoramic video content can be processed in a targeted way: content redundancy is removed, a large amount of network bandwidth is saved, and the user's QoE is improved. Subjective quality perception of ordinary video, i.e. handling of the lowest perceivable loss in non-360-degree video, has been studied extensively, but for 360-degree panoramic video this work is still at a preliminary exploration stage. A relatively advanced scheme measures the user's subjective perception based on object detection, assuming that the user has higher requirements on the quality (such as sharpness) of objects in the center of the picture. After detecting the central objects, the 360-degree panoramic video is divided spatially into multiple tiles, and the detected objects and the remaining content are extracted separately. The tiles are encoded at multiple levels, and the terminal uses optimization algorithms such as linear programming to adaptively select the video quality level according to the current field of view (FoV) and the network bandwidth, so that higher-quality versions of the tiles containing detected objects are transmitted preferentially. This improves user QoE under limited network resources, but object detection can only extract common objects, is not suitable for all types of panoramic video, and cannot identify fast-moving objects in time, which weakens the effect of the algorithm.

Accordingly, the prior art is yet to be improved and developed.

Disclosure of Invention

The main objective of the invention is to provide an adaptive panoramic video transmission method and system based on reinforcement learning, so as to solve the problem that existing panoramic video transmission schemes cannot provide high-quality video for users to watch.

In order to achieve the above object, the present invention provides an adaptive panoramic video transmission method based on reinforcement learning, which includes the following steps:

the remote video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality;

the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions;

the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector;

downloading and locally running a deep learning model by a client, collecting panoramic video watching information from user equipment, obtaining a tile range contained in a future watching view field region of a user through viewpoint prediction, and requesting and obtaining corresponding video content from a remote server according to a quality selection result of the deep learning model;

and after the client side obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user.

Optionally, the method for transmitting an adaptive panoramic video based on reinforcement learning, where the remote video server analyzes the video content, obtains a motion speed and a depth of field of the video content by using an optical flow method, and obtains a numerical result of video quality, specifically includes:

the remote video server reads the stored video file and carries out segmentation processing according to the time sequence;

the remote video server calls an optical flow neural network, inputs the segmented video into the optical flow neural network and outputs an optical flow detection result;

and the remote video server obtains the relative motion speed and the relative depth of field of the video content according to the optical flow detection result, obtains the lowest perceivable loss through quantification, calculates the current video quality, and subtracts the lowest perceivable loss to obtain the actual numerical result of the video quality.

Optionally, the method for transmitting a self-adaptive panoramic video based on reinforcement learning, where the remote video server performs tile segmentation according to video quality, spatially divides a video into a predetermined number of tiles with different sizes by using a two-dimensional clustering algorithm, and performs coding on the tiles at different quality levels to obtain coding results of multiple quality versions, specifically including:

the remote video server divides the video segment into rectangular basic tiles with preset area sizes in space;

calculating the quality increase efficiency of each rectangular basic tile;

clustering the rectangular basic tiles, and synthesizing the adjacent rectangular basic tiles into a specified number of tiles which are finally required to be transmitted to users;

HEVC coding is carried out on the tiles obtained through segmentation, so that each tile is allocated with a plurality of coding qualities of different levels;

and recording the information of the encoding result by using the attribute file.

Optionally, in the method for transmitting adaptive panoramic video based on reinforcement learning, the quality increase efficiency is obtained by dividing a difference between video quality values of base tiles when the highest level coding and the lowest level coding are adopted by a difference between quantization parameters corresponding to the highest level coding and the lowest level coding.
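For clarity, the definition above can be written compactly as follows; the symbols are introduced here only for illustration (they do not appear in the original), and the absolute value in the denominator is an assumption reflecting that higher-quality HEVC coding normally corresponds to a lower quantization parameter:

$$E_{\text{tile}} = \frac{Q_{\text{highest}} - Q_{\text{lowest}}}{\left| QP_{\text{highest}} - QP_{\text{lowest}} \right|}$$

where $Q_{\text{highest}}$ and $Q_{\text{lowest}}$ are the video quality values of the basic tile under the highest-level and lowest-level coding, and $QP_{\text{highest}}$ and $QP_{\text{lowest}}$ are the corresponding quantization parameters.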

Optionally, the method for transmitting an adaptive panoramic video based on reinforcement learning, where the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of multiple quality versions, and uses the deep learning model as a tile adaptive quality selector, specifically includes:

the remote video server initializes a deep learning network, assigns a random initial value to the network state and sets a feedback function to be 0;

reading bandwidth data in the network bandwidth data set and updating the network state;

the network decision device determines a decision for selecting the quality level of the tiles according to the network state and confirms which version of the tile coding file is transmitted to the user;

the network decision device calculates the value of the feedback function according to the selection of the network decision device and the network state;

judging whether the feedback function is larger than 0; if not, indicating that the decision is not suitable for the current network state, continuing to read the bandwidth data in the network bandwidth data set and updating the network state; and if so, indicating that the decision yields a gain for the current state;

if the feedback function is larger than 0, updating the network parameters of the network decision maker according to the feedback function value and decision back propagation, and adding one to the number of training rounds;

judging whether the number of training rounds reaches a preset value, if not, continuing to train the network, continuing to read the bandwidth data in the network bandwidth data set, and updating the network state;

and if the number of training rounds reaches a preset value, finishing training, and storing the parameters of the network decision maker as the model parameters for reinforcement learning.

Optionally, the method for transmitting an adaptive panoramic video based on reinforcement learning, where the client downloads and runs a deep learning model locally, collects panoramic video viewing information from the user equipment, obtains a tile range included in a future viewing field area of the user through viewpoint prediction, and requests and obtains corresponding video content from a remote server according to a selection result of the deep learning model on quality, and specifically includes:

the client is connected to a remote video server through a network, downloads the deep learning model and preloads the deep learning model at the client;

the client collects user watching information;

according to the user viewing information, using a linear regression algorithm to predict the user viewpoint, and calculating the index and the lowest perceivable loss of the tile where the next viewing position of the user is located;

inputting a prediction result, current bandwidth data and the like into a deep learning model, and operating the model to determine the quality level of the tile data to be acquired;

and the client sends a corresponding data acquisition request in an HTTP format to the remote video server according to the quality level prediction result.

Optionally, the method for transmitting an adaptive panoramic video based on reinforcement learning, where after obtaining video content, the client decodes, tiles and renders the video content, and presents a picture to a user, specifically includes:

the remote video server receives the response of the client and then sends video data, and the client receives a data packet;

the client sends the video coding file contained in the data packet into a system cache;

taking out the attribute file in the data packet, and analyzing the tile position information contained in the attribute file;

decoding the encoded data according to the information obtained in the attribute file to obtain an original tile data file;

and splicing the tiles to form a complete video picture, sending the video picture into viewing equipment for rendering, and presenting the panoramic video content for viewing to a user.

Optionally, the method for transmitting an adaptive panoramic video based on reinforcement learning, wherein the information of the encoding result includes: quantization parameter, resolution and tile position in the whole picture.

Optionally, the reinforcement learning-based adaptive panoramic video transmission method, wherein the user viewing information includes: viewing location, network bandwidth, and video quality.

In addition, to achieve the above object, the present invention further provides an adaptive panoramic video transmission system based on reinforcement learning, wherein the adaptive panoramic video transmission system based on reinforcement learning includes:

the system comprises a remote video server and a client, wherein the client establishes a connection with the remote video server through a network;

the remote video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality;

the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions;

the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector;

downloading and locally running a deep learning model by a client, collecting panoramic video watching information from user equipment, obtaining a tile range contained in a future watching view field region of a user through viewpoint prediction, and requesting and obtaining corresponding video content from a remote server according to a quality selection result of the deep learning model;

and after the client side obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user.

The method comprises the steps of analyzing the video content through a remote video server, and acquiring the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality; the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions; the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector; the client downloads and locally runs the deep learning model, collects panoramic video watching information from the user equipment, obtains the tile range contained in the user's future viewing field region through viewpoint prediction, and requests and obtains the corresponding video content from the remote server according to the quality selection result of the deep learning model; and after the client obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user. According to the method, the content the user will watch in the future is predicted, the quality level of the tiles containing the predicted content is flexibly and adaptively selected according to the bandwidth, and higher quality is obtained preferentially for content in higher demand; after the corresponding encoded files are transmitted from the remote video server to the client where the user is located, the client decodes them and plays them to the user, satisfying the user's viewing needs.

Drawings

FIG. 1 is a flow chart of a preferred embodiment of the adaptive panoramic video transmission method based on reinforcement learning according to the present invention;

FIG. 2 is a flowchart of step s101 in the preferred embodiment of the adaptive panoramic video transmission method based on reinforcement learning according to the present invention;

FIG. 3 is a flowchart of step s102 in the preferred embodiment of the adaptive panoramic video transmission method based on reinforcement learning according to the present invention;

FIG. 4 is a flowchart of step s103 in the preferred embodiment of the adaptive panoramic video transmission method based on reinforcement learning according to the present invention;

FIG. 5 is a flowchart of step s104 in the preferred embodiment of the adaptive panoramic video transmission method based on reinforcement learning according to the present invention;

FIG. 6 is a flowchart of step s105 in the preferred embodiment of the adaptive panoramic video transmission method based on reinforcement learning according to the present invention;

FIG. 7 is a schematic diagram of an adaptive panoramic video transmission system based on reinforcement learning according to a preferred embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

As shown in fig. 1, the reinforcement learning-based adaptive panoramic video transmission method according to the preferred embodiment of the present invention includes the following steps:

and step s101, the far-end video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality.

Fig. 2 is a flowchart of step s101 in the reinforcement learning-based adaptive panoramic video transmission method according to the present invention.

As shown in fig. 2, the step s101 includes:

s201, the remote video server reads the stored video file and segments it into chunks in temporal order, with a default length of 1 second per segment;

s202, the remote video server calls an optical flow neural network, inputs the segmented video into the optical flow neural network, and outputs an optical flow detection result;

and s203, the remote video server obtains the relative motion speed and the relative depth of field of the video content according to the optical flow detection result, quantifies them to obtain the lowest perceivable loss (OJND), calculates the current video quality, and subtracts the lowest perceivable loss to obtain the numerical result of the actual video quality.

Here, optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observed imaging plane; the optical flow method uses the temporal changes of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby calculates the motion information of objects between adjacent frames.

The optical flow neural network is a convolutional neural network (CNN), a class of feed-forward neural networks with convolution operations and a deep structure, and one of the representative algorithms of deep learning; a CNN has representation learning capability and can perform translation-invariant classification of the input information according to its hierarchical structure.
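As an illustration of how steps s201–s203 could extract per-tile motion statistics, the sketch below substitutes a classical dense optical flow estimator from OpenCV for the optical flow neural network described above; this substitution, the 12 × 24 base-tile grid and the averaging of flow magnitude into a relative motion speed are assumptions made for brevity, not the patented design.

```python
# Minimal sketch: estimate relative motion speed for the base tiles of one
# 1-second chunk. A classical Farneback optical flow estimator stands in for
# the CNN-based optical flow network; tile aggregation is illustrative.
import cv2
import numpy as np

def chunk_motion_speed(frames):
    """frames: list of BGR images belonging to one 1-second chunk."""
    speeds = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # flow[y, x] = (dx, dy) displacement of each pixel between frames
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        speeds.append(np.linalg.norm(flow, axis=2))   # pixels per frame
        prev = curr
    return np.mean(speeds, axis=0)                    # per-pixel motion speed

def tile_motion(speed_map, rows=12, cols=24):
    """Aggregate per-pixel speeds into a rows x cols grid of base tiles."""
    h, w = speed_map.shape
    th, tw = h // rows, w // cols
    return np.array([[speed_map[r*th:(r+1)*th, c*tw:(c+1)*tw].mean()
                      for c in range(cols)] for r in range(rows)])
```

The per-tile motion speed (together with a depth estimate) would then be quantified into the lowest perceivable loss as in step s203.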

Step s102, the remote video server performs tile segmentation according to the video quality, spatially divides the video into a specified number of tiles with different sizes by adopting a two-dimensional clustering algorithm, and encodes the tiles at different quality levels to obtain encoding results of multiple quality versions.

Please refer to fig. 3, which is a flowchart of step s102 in the reinforcement learning-based adaptive panoramic video transmission method according to the present invention.

As shown in fig. 3, the step s102 includes:

s301, the remote video server spatially divides the video segment into rectangular basic tiles of a preset, relatively small size (for example, a 12 × 24 grid of basic tiles);

s302, calculating the quality increase efficiency of each rectangular basic tile, where the quality increase efficiency is the difference between the video quality values of the basic tile under the highest-level coding and the lowest-level coding divided by the difference between the quantization parameters (QPs) corresponding to the highest-level and lowest-level coding (a sketch of steps s302–s303 follows this list);

s303, clustering the rectangular basic tiles and merging adjacent basic tiles into the specified number of tiles that are finally transmitted to the user, where the clustering algorithm minimizes the sum, over all tiles, of the variance of the quality increase efficiency of the basic tiles contained in each tile;

s304, performing HEVC (High Efficiency Video Coding, a video compression standard) encoding on the tiles obtained by the segmentation, so that each tile is given several encoded versions at different quality levels to meet the needs of the client;

s305, recording information of the encoding result, such as quantization parameter, resolution, position of tile in the whole picture, etc., using the attribute file.
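A minimal sketch of steps s302–s303 under assumptions not stated in the original: the per-base-tile quality values and quantization parameters are given as arrays, and the greedy row-major merge below is only one simple way of keeping the within-tile variance of quality increase efficiency low, not the patented two-dimensional clustering algorithm.

```python
# Sketch of s302-s303: per-base-tile quality increase efficiency and a greedy
# merge of adjacent base tiles. The variance threshold and the row-major scan
# are illustrative simplifications of the two-dimensional clustering step.
import numpy as np

def quality_increase_efficiency(q_high, q_low, qp_high, qp_low):
    """s302: quality gained between lowest- and highest-level coding per
    unit of quantization-parameter difference (arrays of base tiles)."""
    return (q_high - q_low) / np.abs(qp_high - qp_low)

def clustering_cost(efficiency, labels):
    """Objective of s303: sum over merged tiles of the variance of the
    quality increase efficiency of the base tiles assigned to each tile."""
    return sum(np.var(efficiency[labels == k]) for k in np.unique(labels))

def greedy_merge(efficiency, target_tiles, var_threshold=0.05):
    """Assign each base tile (row-major scan) to a merged tile, starting a
    new tile when the size budget is reached or the variance would grow
    beyond var_threshold."""
    rows, cols = efficiency.shape
    labels = np.zeros((rows, cols), dtype=int)
    budget = max(1, (rows * cols) // target_tiles)
    label, members = 0, []
    for r in range(rows):
        for c in range(cols):
            if members and (len(members) >= budget
                            or np.var(members + [efficiency[r, c]]) > var_threshold):
                label, members = label + 1, []
            labels[r, c] = label
            members.append(efficiency[r, c])
    return labels
```

For example, with a 12 × 24 efficiency map and target_tiles = 16, greedy_merge returns a label map whose clustering_cost can be compared against alternative segmentations.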

Step s103, the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector.

Please refer to fig. 4, which is a flowchart of step s103 in the reinforcement learning-based adaptive panoramic video transmission method according to the present invention.

As shown in fig. 4, the step s103 includes:

s401, the remote video server initializes the deep learning network, assigns a random initial value to the network state, and sets the feedback function to 0;

s402, reading the bandwidth data in the network bandwidth data set, and updating the network state;

s403, the network decider determines a decision for selecting the tile quality level according to the network status, i.e. determining what version(s) of the tile code file to transmit to the user;

s404, the network decider calculates the value of the feedback function according to its own selection and the network state;

s405, judging whether the feedback function is greater than 0; if not, the decision is not suitable for the current network state and the process returns to step s402; if yes, the decision yields a gain for the current state and the process continues with step s406;

s406, updating the network parameters of the network decider by back propagation according to the feedback function value and the decision, and incrementing the number of training rounds by one;

s407, judging whether the number of training rounds has reached the preset value; if not, the network needs to continue training and the process returns to step s402; if yes, the process continues with step s408;

and s408, when the number of training rounds reaches the preset value, training is finished and the parameters of the network decider are saved as the reinforcement learning model parameters (a minimal sketch of this training loop is given after this list).
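The following sketch of the loop in steps s401–s408 assumes a small policy network, a synthetic bandwidth dataset, a fixed set of five quality levels and an illustrative reward that rewards higher quality while penalizing bitrates above the available bandwidth; the network architecture, reward shape, hyper-parameters and use of PyTorch are assumptions, not the patented design.

```python
# Sketch of s401-s408: a tiny policy network selects a tile quality level
# from the current bandwidth state; its parameters are updated only when the
# feedback (reward) is positive, mirroring the decision branch above.
import torch
import torch.nn as nn

NUM_LEVELS = 5                       # quality levels per tile (assumed)
STATE_DIM = 4                        # recent bandwidth samples used as state

decider = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                        nn.Linear(32, NUM_LEVELS))
optimizer = torch.optim.Adam(decider.parameters(), lr=1e-3)

bitrate_of_level = [1.0, 2.5, 5.0, 8.0, 16.0]          # Mbps, assumed
bandwidth_dataset = 2.0 + 14.0 * torch.rand(10_000)    # synthetic Mbps trace

state = torch.rand(STATE_DIM)        # s401: random initial network state
rounds, target_rounds, i = 0, 2_000, 0

while rounds < target_rounds:
    # s402: read the next bandwidth sample and update the network state
    bw = bandwidth_dataset[i % len(bandwidth_dataset)]
    i += 1
    state = torch.cat([state[1:], bw.view(1)])

    # s403: the decider chooses which quality version to transmit
    dist = torch.distributions.Categorical(logits=decider(state))
    level = dist.sample()

    # s404: feedback = quality gained minus an over-bandwidth penalty
    overshoot = max(0.0, bitrate_of_level[level.item()] - bw.item())
    reward = level.item() - 4.0 * overshoot

    # s405-s406: only a positive feedback updates the decider's parameters
    if reward > 0:
        loss = -reward * dist.log_prob(level)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        rounds += 1                  # s407: count completed training rounds

# s408: store the decider's parameters as the reinforcement learning model
torch.save(decider.state_dict(), "tile_quality_decider.pt")
```

In deployment this saved model plays the role of the tile adaptive quality selector downloaded by the client in step s104.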

Step s104, the client downloads and locally runs the deep learning model, collects panoramic video viewing information from the user equipment, obtains the tile range contained in the user's future viewing field region through viewpoint prediction, and requests and obtains the corresponding video content from the remote server according to the quality selection result of the deep learning model.

Please refer to fig. 5, which is a flowchart of step s104 in the reinforcement learning-based adaptive panoramic video transmission method according to the present invention.

As shown in fig. 5, the step s104 includes:

s501, the client (the viewing device used by the user) connects to the remote video server through the network, downloads the deep learning model, and preloads it on the client;

s502, the client collects the user viewing information, such as the viewing position, the network bandwidth and the video quality;

s503, according to the user viewing information, predicting the user viewpoint with a linear regression algorithm (linear regression describes the relationship between data points with a straight line, so that a value can be predicted when new data appear), and calculating the index and the lowest perceivable loss of the tile where the user's next viewing position will be located;

s504, inputting the prediction result, the current bandwidth data and the like into a deep learning model, and operating the model to determine the quality level of the tile data to be acquired;

and s505, the client sends a corresponding data acquisition request in HTTP format to the remote video server according to the quality level selection result (a sketch of steps s503–s505 follows this list).
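A minimal sketch of steps s503–s505, assuming the viewpoint is represented by yaw/pitch angles, the tile index is derived from a fixed equirectangular grid, and the request follows a hypothetical URL pattern; none of these details are specified in the original.

```python
# Sketch of s503-s505: linear-regression viewpoint prediction followed by an
# HTTP request for the selected quality level. The URL layout, tile grid and
# angle-based viewpoint representation are illustrative assumptions.
import numpy as np
import requests

def predict_viewpoint(timestamps, yaws, pitches, t_next):
    """s503: fit straight lines to recent head orientations and extrapolate."""
    yaw_fit = np.polyfit(timestamps, yaws, deg=1)
    pitch_fit = np.polyfit(timestamps, pitches, deg=1)
    return np.polyval(yaw_fit, t_next), np.polyval(pitch_fit, t_next)

def tile_index(yaw, pitch, rows=6, cols=12):
    """Map a predicted (yaw, pitch) in degrees to a tile index on an
    assumed equirectangular tile grid."""
    col = int(((yaw % 360.0) / 360.0) * cols) % cols
    row = min(rows - 1, int(((pitch + 90.0) / 180.0) * rows))
    return row * cols + col

def request_tile(server, chunk_id, tile_id, quality_level):
    """s505: fetch the encoded tile file over HTTP (hypothetical URL)."""
    url = f"{server}/video/chunk_{chunk_id}/tile_{tile_id}_q{quality_level}.bin"
    response = requests.get(url, timeout=5)
    response.raise_for_status()
    return response.content

# Example usage with made-up viewing samples:
# yaw, pitch = predict_viewpoint([0.0, 0.1, 0.2], [10, 12, 15], [0, 1, 2], 0.3)
# data = request_tile("http://example-server", 42, tile_index(yaw, pitch), 3)
```

In step s504 the predicted tile index, its lowest perceivable loss and the current bandwidth would be fed to the downloaded deep learning model, which returns the quality_level passed to request_tile.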

Step s105, after the client obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user.

Please refer to fig. 6, which is a flowchart of step s105 in the reinforcement learning-based adaptive panoramic video transmission method according to the present invention.

As shown in fig. 6, the step s105 includes:

s601, the remote video server responds to the client and sends the video data, and the client receives the data packet;

s602, the client sends the video coding file contained in the data packet into a system cache;

s603, taking out the attribute file in the data packet, and analyzing the tile position information contained in the attribute file;

s604, decoding the coded data according to the information obtained from the attribute file to obtain an original tile data file;

and s605, splicing the tiles into a complete video picture, sending it to the viewing device for rendering, and presenting the panoramic video content to the user for viewing (a sketch of the splicing step follows this list).
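A minimal sketch of steps s603–s605, assuming the attribute file is JSON with per-tile position and size fields and that each tile has already been decoded into an RGB array; the file format and field names are assumptions, not specified in the original.

```python
# Sketch of s603-s605: parse tile positions from an attribute file and paste
# the decoded tiles back into the full panoramic frame. The JSON field names
# ("id", "x", "y", "width", "height") are assumed for illustration.
import json
import numpy as np

def splice_frame(attribute_json, decoded_tiles, frame_height, frame_width):
    """decoded_tiles: dict mapping tile id -> (H, W, 3) uint8 array."""
    layout = json.loads(attribute_json)           # s603: parse tile positions
    frame = np.zeros((frame_height, frame_width, 3), dtype=np.uint8)
    for tile in layout["tiles"]:
        x, y = tile["x"], tile["y"]
        w, h = tile["width"], tile["height"]
        frame[y:y + h, x:x + w] = decoded_tiles[tile["id"]]   # s605: splice
    return frame                                  # ready for the renderer

# Example usage with a single dummy tile covering the whole frame:
# attrs = '{"tiles": [{"id": 0, "x": 0, "y": 0, "width": 64, "height": 32}]}'
# frame = splice_frame(attrs, {0: np.zeros((32, 64, 3), np.uint8)}, 32, 64)
```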

In the invention, the user's subjective perception is modeled. It is found that the user's region of interest (RoI) in a 360-degree panoramic video covers only a small part of the whole picture, and that the user has higher quality requirements for moving content and foreground content.

On this basis, the invention builds a system framework for panoramic video transmission based on reinforcement learning. First, the remote video server analyzes the panoramic video with optical flow detection, extracts the motion speed and depth of field of the video content, quantifies them with the lowest perceivable loss, and computes a video evaluation metric that accounts for the user's subjective quality experience. Using this metric as the reference for tile segmentation, the 360-degree panoramic video to be transmitted is segmented so that content with higher user demand and the remaining content are placed in separate tiles, and each video tile is compressed by an HEVC encoder into video files of different quality levels. A deep learning model is trained on the remote video server for the stored videos and used to realize adaptive tile quality selection. Before the user watches the video, the deep learning model is downloaded. During actual viewing, the client head-mounted device (or similar equipment) collects the user's viewing information and predicts the content the user will watch next; through the deep learning model, the quality level of the tiles containing the predicted content is selected flexibly and adaptively according to the bandwidth, with higher quality obtained preferentially for content in higher demand. After the corresponding encoded files are transmitted from the server to the client where the user is located, the client decodes them and plays them to the user, satisfying the user's viewing needs.

Further, as shown in fig. 7, based on the above adaptive panoramic video transmission method based on reinforcement learning, the present invention also provides an adaptive panoramic video transmission system based on reinforcement learning, wherein the adaptive panoramic video transmission system based on reinforcement learning includes:

the system comprises a remote server and a client, wherein the client establishes connection with the remote video server through a network; the remote video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality; the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions; the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector; downloading and locally running a deep learning model by a client, collecting panoramic video watching information from user equipment, obtaining a tile range contained in a future watching view field region of a user through viewpoint prediction, and requesting and obtaining corresponding video content from a remote server according to a quality selection result of the deep learning model; and after the client side obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user.

The invention establishes a panoramic video transmission framework compatible with the DASH (Dynamic Adaptive Streaming over HTTP) protocol and edge computing, adopts a convolutional neural network (CNN) based optical flow estimation method to accurately extract the relative speed and depth of field of each pixel, quantifies the degree to which the user perceives playback quality distortion of the panoramic video, proposes a 360-degree video quality evaluation metric that accounts for the user's subjective perception, proposes a multi-size tile segmentation scheme that assigns content of similar quality to the same tile, and uses a reinforcement learning network as the tool for tile-level adaptive bitrate (ABR) selection.

In summary, the present invention provides an adaptive panoramic video transmission method and system based on reinforcement learning, wherein the method comprises: the remote video server analyzes the video content, and obtains the motion speed and the depth of field of the video content by using an optical flow method to obtain a numerical result of the video quality; the remote video server carries out tile segmentation according to the video quality, adopts a two-dimensional clustering algorithm to spatially divide the video into a specified number of tiles with different sizes, and carries out coding of different quality grades on the tiles to obtain coding results of a plurality of quality versions; the remote video server trains a deep learning model according to the stored bandwidth data and the coding results of the multiple quality versions, and the deep learning model is used as a tile self-adaptive quality selector; the client downloads and locally runs the deep learning model, collects panoramic video watching information from the user equipment, obtains the tile range contained in the user's future viewing field region through viewpoint prediction, and requests and obtains the corresponding video content from the remote server according to the quality selection result of the deep learning model; and after the client obtains the video content, decoding, tile splicing and rendering are carried out on the video content, and the picture is presented to the user. According to the method, the content the user will watch in the future is predicted, the quality level of the tiles containing the predicted content is flexibly and adaptively selected according to the bandwidth, and higher quality is obtained preferentially for content in higher demand; after the corresponding encoded files are transmitted from the remote video server to the client where the user is located, the client decodes them and plays them to the user, satisfying the user's viewing needs.

Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware (such as a processor, a controller, etc.), and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.

It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
