Video file restoration method and device, computer equipment and storage medium

Document No.: 38505  Publication date: 2021-09-24

Reading note: This technology, Video file restoration method and device, computer equipment and storage medium, was designed and created by 韩大炜, 刘立, 李开科 and 孙浩 on 2021-05-28. Its main content is as follows: The application relates to a video file restoration method, a video file restoration device, computer equipment and a storage medium. The method comprises the following steps: acquiring a video stream corresponding to a target video; extracting feature information of an uplink data packet in the video stream, and extracting the video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet meets a preset feature rule; and extracting the video data in the downlink data packet to generate a fragmented video file of the video data fragment length. By adopting this method, the video data in the video stream can be restored into a playable video file for monitoring the video content.

1. A method for restoring a video file, the method comprising:

acquiring a video stream corresponding to a target video;

extracting feature information of an uplink data packet in the video stream, and extracting video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

and extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

2. The method of claim 1, further comprising:

acquiring a mirror image video stream, performing order preserving processing on a data packet of the mirror image video stream, and storing the data packet of the mirror image video stream after the order preserving processing to a transmission control protocol stream table;

the acquiring of the video stream corresponding to the target video includes:

and extracting and identifying a flow identifier in the mirror image video stream in the transmission control protocol stream table, and acquiring the video stream corresponding to the target video after the order preserving processing according to the flow identifier.

3. The method of claim 1, wherein the extracting the feature information of the upstream data packet in the video stream comprises:

identifying a target identification field of an uplink data packet in the video stream;

and extracting attribute features contained in the target identification field, and determining whether an uplink data packet in the video stream meets a preset feature rule according to preset target attribute features.

4. The method of claim 3, further comprising:

and extracting the playing resource name of the target field from the uplink data packet meeting the preset characteristic rule to be used as the file name of the fragmented video file.

5. The method of claim 1, wherein prior to said extracting video data from said downstream packet, said method further comprises:

identifying whether the length of video data to be processed in a downlink data packet of the video stream is zero or not;

and if the length of the video data to be processed is zero and the status code of the first response data packet in the downlink data packet does not satisfy the target status code, ending the processing flow of the downlink data packet.

6. The method of claim 1, wherein the extracting the video data slice length included in the downstream packet of the video stream comprises:

extracting a status code of a first response data packet in a downlink data packet of the video stream, and judging, according to a target status code, that the downlink data packet of the video stream satisfies a fragmentation processing state;

and extracting data type information in the downlink data packet, and extracting the video data fragment length contained in the video data of the downlink data packet when the data type information is determined to contain video data keywords.

7. The method according to claim 1, wherein said extracting video data from said downstream data packet and generating a fragmented video file of the fragmented length of said video data comprises:

extracting and processing the video data in the downlink data packet, and recording the length of the processed video data;

and determining the length of the video data to be processed according to the video data fragment length and the length of the processed video data, and generating a fragment video file of the video data fragment length when the length of the video data to be processed is zero.

8. An apparatus for restoring a video file, the apparatus comprising:

the acquisition module is used for acquiring a video stream corresponding to a target video;

the extraction module is used for extracting the characteristic information of the uplink data packet in the video stream, and extracting the video data fragment length contained in the downlink data packet of the video stream when the characteristic information in the uplink data packet in the video stream meets a preset characteristic rule;

and the generating module is used for extracting the video data in the downlink data packet and generating a fragment video file with the fragment length of the video data.

9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.

10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.

Technical Field

The present application relates to the field of traffic processing technologies, and in particular, to a video file restoration method and apparatus, a computer device, and a storage medium.

Background

HLS (HTTP Live Streaming) is a streaming media network transport protocol based on HTTP. Its working principle is to divide the whole stream into small HTTP-based files for download, only a few of which are downloaded at a time. This processing mode is called the HLS fragmentation principle, and HLS realizes audio and video data communication based on it.

In traditional audio and video data communication, data is generally exchanged between a data requesting end and a data sending end, and both parties to the transmission can be identified end to end based on the address information of the requesting end, which enables data analysis. As the amount of audio and video data gradually increases, a large amount of video content is uploaded to the network for transmission and sharing. Illegal content may exist in some of these videos, and the audio and video content needs to be monitored and reviewed in order to keep the network environment safe and healthy.

However, if a bypass data communication scenario is added on top of end-to-end data communication, that is, a bypass server is added and a network device (such as a gateway) mirrors the video stream to that server so it can supervise the video data, the mirrored traffic does not satisfy the address rules of end-to-end transmission. The mirrored audio/video data acquired by the bypass server therefore cannot be restored into a video file, and supervision of the audio/video content cannot be realized.

Disclosure of Invention

In view of the foregoing, it is desirable to provide a video file restoration method, apparatus, computer device and storage medium.

A method of video file restoration, the method comprising:

acquiring a video stream corresponding to a target video;

extracting feature information of an uplink data packet in the video stream, and extracting video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

By adopting this method, a server attached to the transmission channel can extract and restore the video data in the acquired video stream to generate a playable fragmented video file, thereby realizing the restoration and supervision of the video content.

In one embodiment, the method further comprises:

acquiring a mirror image video stream, performing order preserving processing on a data packet of the mirror image video stream, and storing the data packet of the mirror image video stream after the order preserving processing to a transmission control protocol stream table;

the acquiring of the video stream corresponding to the target video includes:

and extracting and identifying a flow identifier in the mirror image video stream in the transmission control protocol stream table, and acquiring the video stream corresponding to the target video after the order preserving processing according to the flow identifier.

In this embodiment, the TCP flow table is created to implement order preserving processing of the data packets in the video stream, and the server may obtain the video stream of the target video in the TCP flow table, thereby ensuring that the transmission sequence of the data packets in the video stream is more stable in the transmission process.

In one embodiment, the extracting the feature information of the upstream data packet in the video stream includes:

identifying a target identification field of an uplink data packet in the video stream;

and extracting attribute features contained in the target identification field, and determining whether an uplink data packet in the video stream meets a preset feature rule according to preset target attribute features.

In this embodiment, the server identifies and extracts the attribute features of the uplink data packet in the video stream, and determines that the uplink data packet in the video stream satisfies the preset feature rule, thereby determining that the server can execute the subsequent processing logic of the video stream.

In one embodiment, the method further comprises:

and extracting the playing resource name of the target field from the uplink data packet meeting the preset characteristic rule to be used as the file name of the fragmented video file.

Based on the processing, the server can display the name information of the restored video file to the user, so that technicians can conveniently inquire the specific name of the target video.

In one embodiment, before the extracting the video data in the downlink data packet, the method further includes:

identifying whether the length of video data to be processed in a downlink data packet of the video stream is zero or not;

and if the length of the video data to be processed is zero and the status code of the first response data packet in the downlink data packet does not satisfy the target status code, ending the processing flow of the downlink data packet.

In this embodiment, the current video stream processing state is determined from the length of the video data to be processed in the video stream and the status code information contained in the video stream. If the current video stream does not satisfy the video file restoration state, the processing flow is ended in time, which improves video file processing efficiency.

In one embodiment, the extracting the video data slice length included in the downlink data packet of the video stream includes:

extracting a status code of a first response data packet in a downlink data packet of the video stream, and judging, according to a target status code, that the downlink data packet of the video stream satisfies a fragmentation processing state;

and extracting data type information in the downlink data packet, and extracting the video data fragment length contained in the video data of the downlink data packet when the data type information is determined to contain video data keywords.

In this embodiment, when the status code in the video stream satisfies the preset target status code, it is determined that the downlink data packet of the video stream satisfies the fragmentation processing status, and then the fragmentation length of the video data is extracted from the downlink data packet of the video stream, and according to the fragmentation length information, the writing and restoring of the fragmented video file can be completed.

In one embodiment, the extracting video data in the downlink data packet and generating a fragment video file with a fragment length of the video data includes:

extracting and processing the video data in the downlink data packet, and recording the length of the processed video data;

and determining the length of the video data to be processed according to the video data fragment length and the length of the processed video data, and generating a fragment video file of the video data fragment length when the length of the video data to be processed is zero.

In this embodiment, the length of the video data to be processed in the video stream is determined, so as to clarify the correspondence between the length of the processed video data in the video stream and the length of the fragmented video data, thereby realizing the automatic generation of the fragmented video file and improving the video file restoration efficiency.

A video file restoration apparatus, the apparatus comprising:

the acquisition module is used for acquiring a video stream corresponding to a target video;

the extraction module is used for extracting the characteristic information of the uplink data packet in the video stream, and extracting the video data fragment length contained in the downlink data packet of the video stream when the characteristic information in the uplink data packet in the video stream meets a preset characteristic rule;

and the generating module is used for extracting the video data in the downlink data packet and generating a fragment video file with the fragment length of the video data.

A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:

acquiring a video stream corresponding to a target video;

extracting feature information of an uplink data packet in the video stream, and extracting video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

and extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:

acquiring a video stream corresponding to a target video;

extracting feature information of an uplink data packet in the video stream, and extracting video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

and extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

According to the video file restoration method, apparatus, computer device, and storage medium, the server obtains the video stream corresponding to the target video, extracts the feature information of the uplink data packet in the video stream, and, when the feature information in the uplink data packet satisfies the preset feature rule, extracts the video data fragment length contained in the downlink data packet of the video stream; it then extracts the video data in the downlink data packet to generate a fragmented video file of the video data fragment length. By adopting this method, a server attached to the transmission channel can extract and restore the video data in the acquired video stream to generate a playable fragmented video file, thereby realizing the restoration and supervision of the video content.

Drawings

FIG. 1 is a schematic flow chart diagram illustrating a video restoration method according to an embodiment;

FIG. 2 is a flowchart illustrating the steps of creating a TCP flow table and performing packet ordering in one embodiment;

FIG. 3 is a flowchart illustrating the steps of determining attributes of a video stream according to one embodiment;

FIG. 4 is a flowchart illustrating the step of extracting the name of the resource to be played in one embodiment;

FIG. 5 is a flow diagram illustrating a method for determining a processing state of a video stream according to one embodiment;

FIG. 6 is a flowchart illustrating the step of extracting the slice length of the video data in one embodiment;

FIG. 7 is a flowchart illustrating a method for generating a fragmented video file, under an embodiment;

FIG. 8 is a block diagram showing the structure of a video file restoration apparatus according to an embodiment;

FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

First, before describing the technical solution of the embodiments of the present application in detail, the technical background or technical evolution on which the embodiments are based is described. In the field of traffic processing, the current technical background is generally as follows: video traffic between terminals is processed, and the video stream is identified and analyzed based on the address information exchanged between the terminals. However, the video stream content between the terminals needs to be monitored by a video monitoring device (generally a server) connected in bypass, and since the address information of the video monitoring device does not conform to the address rules between the terminals, the monitored mirror video traffic cannot be played; the mirror video traffic therefore cannot be restored and played back. Based on this background, the applicant found, through long-term model simulation research and development as well as the collection, demonstration, and verification of experimental data, that the monitored mirror traffic can be restored, and that videos can be played based on the restored video files, so that monitoring of video content is realized. How to restore the video file has therefore become a difficult problem to be solved urgently. In addition, it should be noted that the applicant has expended considerable creative effort in discovering the technical problems of the present application and the technical solutions described in the following embodiments.

In the embodiments of the present application, the format of the video data is the Flash Video (FLV) format, that is, a video file in the FLV format can be read to play the video.

In an embodiment, as shown in fig. 1, a video file restoration method is provided. This embodiment is illustrated by applying the method to a server; it is to be understood that the method may also be applied to a terminal, or to a system including the terminal and the server, where it is implemented through interaction between the terminal and the server. The method comprises the following steps:

step 101, obtaining a video stream corresponding to a target video.

In implementation, when video transmission is performed between terminals, a video stream sent by a video sending terminal is transmitted to a video requesting terminal through a network device. After receiving the video stream, the network device (e.g., a gateway) mirrors the video stream and sends it to a server attached to the transmission path, and the server receives the mirrored video stream sent by the network device. The video stream may be the video stream of any video (referred to as the target video) transmitted by the network device. For example, a user of video requesting terminal A sends a video request to video sending terminal B; video sending terminal B sends the target video stream to video requesting terminal A in response to the request, and at the same time the video stream is also mirrored through the network device to a video monitoring end (server C) for video monitoring.

And 102, extracting the characteristic information of the uplink data packet in the video stream, and extracting the video data fragment length contained in the downlink data packet of the video stream when the characteristic information in the uplink data packet in the video stream meets a preset characteristic rule.

In implementation, the server extracts the feature information contained in a specific field of an upstream packet in the video stream and determines, according to this feature information, whether the data in the upstream packet satisfies a preset feature rule. For example, the feature information in the upstream packet may be the feature code of a URL (uniform resource locator) field: if the feature code carries ts or m3u8, the video stream is judged to belong to HLS (HTTP Live Streaming). Based on the sliced transmission mode of the HLS protocol, if the feature information in the upstream packet satisfies the preset feature rule, the downstream packet processing logic of the video stream can be entered, that is, the video data slice length (Content-Length) of the video stream belonging to the HLS protocol is extracted from the downstream packet.
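As an illustrative sketch (not the patented implementation), the feature-rule check described above can be expressed as a simple URL match; the function name and regular expression here are assumptions for illustration only.

```python
import re

# Feature codes named in the text: "ts" and "m3u8" in the request URL.
HLS_FEATURE_RE = re.compile(r"\.(ts|m3u8)(\?|$)")

def matches_hls_feature_rule(request_line: str) -> bool:
    """Return True if an upstream HTTP request line carries an HLS feature code."""
    parts = request_line.split()
    if len(parts) < 2 or parts[0] != "GET":
        return False
    return HLS_FEATURE_RE.search(parts[1]) is not None
```

A stream whose upstream requests match this rule would be handed to the downstream packet processing logic; all others are ignored.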

And 103, extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

In implementation, the server extracts the video data in the downlink data packet according to the obtained video data fragment length (Content-Length) and writes video data of that length into the created video file to obtain a fragmented video file (i.e., a fragmented FLV file).
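A minimal sketch of this write-until-complete logic, assuming the fragment length has already been read from Content-Length (the class and method names are hypothetical):

```python
# Accumulate downstream payload bytes until the announced fragment
# length has been written, then emit the completed fragment.
class FragmentWriter:
    def __init__(self, fragment_length):
        self.remaining = fragment_length   # video data still to be processed
        self.chunks = []

    def feed(self, payload):
        """Consume payload bytes; return the full fragment once complete."""
        take = min(len(payload), self.remaining)
        self.chunks.append(payload[:take])
        self.remaining -= take
        if self.remaining == 0:
            return b"".join(self.chunks)
        return None
```

In a real deployment the completed bytes would be written to an FLV file on disk rather than returned in memory.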

According to the video file restoration method, a server acquires a video stream corresponding to a target video, extracts feature information of an uplink data packet in the video stream, and extracts video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule; and extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data. By adopting the method, the server which is connected with the transmission channel can extract and restore the video data in the acquired video stream to generate the fragment video file which can be played, thereby realizing the restoration and supervision of the video content.

In an alternative embodiment, as shown in fig. 2, the method further comprises:

step 201, obtaining a mirror image video stream, performing order preserving processing on a data packet of the mirror image video stream, and storing the data packet of the mirror image video stream after the order preserving processing to a transmission control protocol stream table.

In implementation, the network device may mirror all video streams in the network to the server, and the server may perform order preservation on data packets in the mirrored video streams when receiving the mirrored video streams, and sequentially store the data packets in the mirrored video streams after the order preservation in a Transmission Control Protocol (TCP) stream table.
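The order-preserving step can be sketched as sorting each flow's segments by TCP sequence number; the flow table below is an illustrative assumption, not the patented data structure:

```python
from collections import defaultdict

class TcpFlowTable:
    """Toy TCP flow table: group mirrored packets by connection key
    and restore payload order using the TCP sequence number."""
    def __init__(self):
        self._flows = defaultdict(list)      # key -> [(seq, payload), ...]

    def add(self, key, seq, payload):
        self._flows[key].append((seq, payload))

    def reassemble(self, key):
        segments = sorted(self._flows[key])  # the order-preserving step
        return b"".join(payload for _, payload in segments)
```

A production reassembler would also handle retransmissions, overlapping segments, and sequence-number wraparound, which this sketch omits.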

The specific processing procedure by which the server acquires the video stream of the target video in step 101 includes the following steps:

step 202, extracting and identifying a flow identifier in the mirror image video stream in the transmission control protocol flow table, and acquiring a video stream corresponding to the target video after the order preserving processing according to the flow identifier.

In implementation, in a transmission control protocol flow table (TCP flow table), a server extracts and identifies a flow identifier carried by a mirror image video stream, and can determine whether each video stream can enter a subsequent processing flow according to the flow identifier, for example, if the flow identifier of a certain video stream meets a preset target flow identifier, it is determined and obtained that the video stream after order preservation is a video stream corresponding to a target video.

In this embodiment, the TCP flow table is created to implement order preserving processing of the data packets in the video stream, and the server may obtain the video stream of the target video in the TCP flow table, thereby ensuring that the transmission sequence of the data packets in the video stream is more stable in the transmission process.

In an alternative embodiment, as shown in fig. 3, the specific step of extracting the feature information of the uplink data packet in the video stream in step 102 includes:

step 301, identifying a target identification field of an uplink data packet in a video stream.

In implementation, the server identifies a corresponding target identification field in the upstream packet data of each video stream according to a preset field position of the target identification field.

Step 302, extracting the attribute features contained in the target identification field, and determining whether the uplink data packet in the video stream meets the preset feature rule according to the preset target attribute features.

In implementation, since each video stream is transmitted based on one transport protocol, and different transport protocols correspond to different attribute features, after the server identifies the location of the target identification field in the uplink data packet, it extracts the specific attribute feature information contained in that field and can determine the transport protocol to which the corresponding video stream belongs. For example, suppose the preset target attribute feature is a feature code unique to the HLS transport protocol (ts or m3u8). Then, according to whether the uplink data packet carries the target feature code (target attribute feature), it can be determined whether the uplink data packet in the video stream satisfies the preset feature rule: if the uplink data packet carries the feature code ts or m3u8, the packet is processed further; if it does not, it is determined that the video stream does not belong to the HLS protocol, and subsequent packets in the video stream are not processed.

In this embodiment, the server identifies and extracts the attribute features of the uplink data packet in the video stream, and determines that the uplink data packet in the video stream satisfies the preset feature rule, thereby determining that the server can execute the subsequent processing logic of the video stream.

In an optional embodiment, the method further comprises: and extracting the playing resource name of the target field from the uplink data packet meeting the preset characteristic rule to be used as the file name of the fragmented video file.

In implementation, after determining that the attribute features of the upstream data packet in the video stream satisfy the preset feature rule (that is, when the protocol type of the video stream is the HLS protocol), the server may locate a preset character string in the upstream data packet through character matching (that is, determine the target field), extract the playing resource name information from the target field, and use it as the file name of the fragmented video file (fragmented FLV file).
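A hedged sketch of this character-matching step, assuming the resource name is the last path segment of the request URL (the helper name is hypothetical):

```python
def extract_resource_name(url):
    """Use the last path segment of the URL as the fragment file name."""
    path = url.split("?", 1)[0]        # drop any query string
    return path.rsplit("/", 1)[-1]     # e.g. a name such as "seg001.ts"
```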

Based on the processing, the server can display the name information of the restored video file to the user, so that technicians can conveniently inquire the specific name of the target video.

In an alternative embodiment, as shown in fig. 4, before step 103, the method further comprises:

step 401, identify whether the length of the video data to be processed in the downlink data packet of the video stream is zero.

In implementation, before processing a downstream packet in a video stream, a server identifies in advance whether a length of video data to be processed in the downstream packet of the video stream is zero, so as to determine a current processing state of the downstream packet of the video stream.

Optionally, in the transmission control protocol flow table (TCP flow table), for a video stream after order preserving processing, the length of the video data to be processed in a downlink data packet of the video stream can be identified; the length of the video data to be processed may therefore also be called the length of the video data not yet processed in the TCP flow table.

Step 402, if the length of the video data to be processed is zero and the status code of the first response data packet in the downlink data packet does not satisfy the target status code, ending the processing flow of the downlink data packet.

In implementation, if the length of the video data to be processed is 0, the server further checks the status code contained in the first response packet (also referred to as the HTTP response packet) in the downlink data packets. If this status code does not satisfy the target status code either, indicating that the downlink data packets of the current video stream do not satisfy the processing state, the server ends the processing flow of the downlink data packets of the video stream.

Optionally, if the status code in the first response data packet satisfies the target status code, the data type of the downlink data packet needs to be further determined. The process of determining the data type after the status code satisfies the target status code is described in detail later and is not repeated in this embodiment.

The target status code, i.e., HTTP 200 or HTTP 206, indicates that the data sending end has successfully responded to the request of the requesting end.
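Assuming the target status codes HTTP 200 and 206 named above, the early-exit check of steps 401 and 402 might look like the following sketch (the function name is an illustrative assumption):

```python
TARGET_STATUS_CODES = {200, 206}

def should_continue(pending_length, status_code):
    """End the downstream processing flow when no video data is pending
    and the first response's status code is not a target code."""
    return not (pending_length == 0 and status_code not in TARGET_STATUS_CODES)
```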

In this embodiment, the current video stream processing state is determined from the length of the video data to be processed in the video stream and the status code information contained in the video stream. If the current video stream does not satisfy the video file restoration state, the processing flow is ended in time, which improves video file processing efficiency.

In an alternative embodiment, as shown in fig. 5, the specific process of extracting the video data slice length included in the downlink data packet of the video stream in step 102 includes the following steps:

step 501, extracting the status code of the first response data packet in the downlink data packet of the video stream, and determining that the downlink data packet of the video stream satisfies the fragmentation processing status according to the target status code.

In implementation, the server extracts the status code of the first response packet (i.e., the HTTP response packet) in the downlink data packets of the video stream, compares that status code with the target status code (HTTP 200 or HTTP 206) to determine the processing state of the downlink data packets, and, if the status code equals the target status code, determines that the downlink data packets of the video stream satisfy the fragmentation processing state of the video data.

Step 502, extracting data type information in the downlink data packet, and when determining that the data type information contains video data keywords, extracting the video data fragment length contained in the video data of the downlink data packet.

In implementation, the server extracts the data Type information in the downlink data packet, determines whether Content-Type information in the downlink data packet contains a keyword of video or audio, and when it is determined that the Content-Type information contains the keyword of video data, the server extracts the video data slice length (Content-length) contained in the video data in the downlink data packet.

The video data fragment length follows from the principle of the HLS protocol: the whole video stream is divided into HTTP fragment files, and the fragment length is the file length of one such fragment file.
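A hedged sketch of this header inspection (the function name and return convention are assumptions): the Content-Type field is scanned for a video/audio keyword, and the Content-Length field supplies the fragment length when the keyword is present.

```python
def extract_fragment_info(header_bytes: bytes):
    """Parse an HTTP response header block; return the Content-Length
    (the video data fragment length) when Content-Type contains a
    video or audio keyword, otherwise None."""
    content_type, content_length = None, None
    for line in header_bytes.decode("iso-8859-1").split("\r\n"):
        name, _, value = line.partition(":")
        if name.strip().lower() == "content-type":
            content_type = value.strip().lower()
        elif name.strip().lower() == "content-length":
            content_length = int(value.strip())
    if content_type and ("video" in content_type or "audio" in content_type):
        return content_length
    return None
```

For a typical HLS ts fragment the Content-Type would be something like `video/mp2t`, which matches the `video` keyword.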

In this embodiment, when the status code in the video stream satisfies the preset target status code, it is determined that the downlink data packet of the video stream satisfies the fragmentation processing status, and then the fragmentation length of the video data is extracted from the downlink data packet of the video stream, and according to the fragmentation length information, the writing and restoring of the fragmented video file can be completed.

In an alternative embodiment, as shown in fig. 6, the specific processing procedure of step 103 includes the following steps:

step 601, extracting and processing the video data in the downlink data packet, and recording the length of the processed video data.

In implementation, the server extracts and processes video data in the downlink data packet, determines that the video data format is the FLV format, writes the video data in the processed downlink data packet into a created video file (FLV file), and records the length of the processed video data in the video file (WriteDoneSize).

Step 602, determining the length of the video data to be processed according to the length of the video data fragment and the length of the processed video data, and generating a fragment video file with the fragment length of the video data when the length of the video data to be processed is zero.

In implementation, the server determines the length of the video data to be processed (UndoneSize) according to the video data fragment length (Content-Length) and the length of the processed video data (WriteDoneSize); specifically, the length of the video data to be processed is computed as: UndoneSize = Content-Length - WriteDoneSize. When the length of the video data to be processed is zero, the current video data has been completely written, that is, a fragment video file of the video data fragment length has been generated.

Optionally, the video stream may be correspondingly restored to at least one fragmented video file, and the server may play the video according to the restored fragmented video file, and may also monitor the video content of the played video.

In this embodiment, the length of the video data to be processed in the video stream is tracked, which makes explicit the correspondence between the length of the processed video data and the video data fragment length; this enables automatic generation of the fragment video file and improves video file restoration efficiency.

In an alternative embodiment, an example of a video restoring method is provided, as shown in fig. 7, the specific steps are as follows:

step 701, receiving a video stream sent by a network device.

Step 702, establishing a TCP flow table, and storing the data packet contained in the video flow into the TCP flow table after performing order preserving processing.

Step 703, determining a flow identifier carried by each video stream in the TCP flow table; if the flow identifier is true, executing step 704; if the flow identifier is false, ending the video stream processing flow.

Step 704, identifying whether the video data packet of each video stream in the TCP stream table is an uplink data packet, if yes, executing step 705-step 706; if not, step 707-step 709 are executed.

Step 705, extracting the URL field (i.e., the HTTP GET URL field) in an uplink packet of the video stream and determining whether the URL field contains a ts or m3u8 feature code; if yes, performing step 706; if not, ending the uplink data packet processing flow of the video stream and adding a not-processed flag to the video stream, indicating that the video stream is not to be processed.

Step 706, extracting the playing resource name of the target field in the uplink data packet, and using the playing resource name as the file name of the restored video file.

Step 707, determining whether the video data length of the downlink data packet to be processed included in the video stream of the target video in the TCP stream table is greater than 0, if so, executing step 708; if not, the video stream processing flow is ended.

Step 708, determining a status code included in the HTTP response packet in the downlink data packet, and if the status code is not any of HTTP 200 or HTTP 206, ending the video stream processing flow; if the status code is HTTP 200 or HTTP 206, step 709 is executed.

Step 709, extracting keywords in a data Type field Content-Type in an HTTP Header of the downlink data packet, and if the keywords comprise keywords of a video or audio, executing step 710; and if the keywords do not include the keywords of the video or audio, ending the video stream processing flow.

Step 710, storing Content-Length (video data fragment Length) in HTTP Header of the extracted downlink data packet, creating a fragment video file, and using the extracted playing resource name as the file name of the fragment video file.

Step 711, writing the video data in the downlink data packet into the fragmented video file, and recording the length WriteDoneSize of the processed video data written into the fragmented video file.

Step 712, calculating the length of the video data to be processed in the video stream, i.e. UndoneSize, according to Content-Length and WriteDoneSize (UndoneSize = Content-Length - WriteDoneSize).

Step 713, determining whether the length of the video data to be processed (UndoneSize) is greater than 0; if so, executing step 708; if not, generating a fragment video file of the video data fragment length and ending the video stream processing flow.
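Steps 705-706 above can be sketched as follows (the regular expression and helper names are assumptions, not from the source): the HTTP GET URL is tested for a ts or m3u8 feature code, and the play-resource name is taken as the last path segment with any query string stripped.

```python
import re

TS_M3U8_PATTERN = re.compile(r"\.(ts|m3u8)(\?|$)")

def is_hls_request(get_url: str) -> bool:
    """Step 705 sketch: does the HTTP GET URL carry a ts or m3u8 feature code?"""
    return bool(TS_M3U8_PATTERN.search(get_url))

def resource_name(get_url: str) -> str:
    """Step 706 sketch: the play-resource name used as the restored file name."""
    return get_url.rsplit("/", 1)[-1].split("?", 1)[0]
```

For example, a GET for `/live/seg-001.ts?token=abc` would pass the feature-code check and yield `seg-001.ts` as the fragment file name.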

It should be understood that although the various steps in the flowcharts of fig. 1-7 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 1-7 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.

In one embodiment, as shown in fig. 8, there is provided a video file restoring apparatus 800, including: an obtaining module 810, an extracting module 820 and a generating module 830, wherein:

an obtaining module 810, configured to obtain a video stream corresponding to a target video;

an extracting module 820, configured to extract feature information of an uplink data packet in a video stream, and extract a video data fragment length included in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

the generating module 830 is configured to extract video data in the downlink data packet, and generate a fragment video file with a fragment length of the video data.

In an alternative embodiment, the apparatus 800 further comprises:

the order preserving module is used for acquiring the mirror image video stream, carrying out order preserving processing on a data packet of the mirror image video stream, and storing the data packet of the mirror image video stream after the order preserving processing to the transmission control protocol stream table;

the obtaining module 810 is specifically configured to extract and identify a flow identifier in the mirror image video stream in the tcp flow table, and obtain a video stream corresponding to the target video after the order preserving processing according to the flow identifier.
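The order-preserving step can be illustrated with a minimal reordering buffer for one TCP flow (the class and field names are assumptions, not from the source): out-of-order segments are held keyed by sequence number and flushed into the flow-table entry only once they are contiguous.

```python
class TcpReorderBuffer:
    """Minimal order-preserving buffer for one TCP flow (sketch only)."""

    def __init__(self, initial_seq: int):
        self.next_seq = initial_seq   # next expected TCP sequence number
        self.pending = {}             # out-of-order segments: seq -> payload
        self.ordered = []             # in-order payloads, as stored in the flow table

    def add(self, seq: int, payload: bytes) -> None:
        self.pending[seq] = payload
        # Flush every segment that has become contiguous with the stream so far.
        while self.next_seq in self.pending:
            data = self.pending.pop(self.next_seq)
            self.ordered.append(data)
            self.next_seq += len(data)
```

A segment arriving ahead of a gap simply waits in `pending` until the earlier bytes arrive, after which the whole contiguous run is released in order.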

In an optional embodiment, the extracting module 820 is specifically configured to identify a target identification field of an upstream packet in a video stream;

and extracting attribute features contained in the target identification field, and determining whether an uplink data packet in the video stream meets a preset feature rule or not according to preset target attribute features.

In an alternative embodiment, the apparatus 800 further comprises:

and the naming module is used for extracting the playing resource name of the target field from the uplink data packet meeting the preset characteristic rule to be used as the file name of the fragmented video file.

In an alternative embodiment, the apparatus 800 further comprises:

the identification module is used for identifying whether the length of video data to be processed in a downlink data packet of the video stream is zero or not;

and the judging module is used for ending the processing flow of the downlink data packet if the length of the video data to be processed is zero and the state code of the first response data packet in the downlink data packet does not meet the target state code.

In an alternative embodiment, the apparatus 800 further comprises:

the first extraction module is used for extracting the state code of a first response data packet in the downlink data packet of the video stream and judging that the downlink data packet of the video stream meets the fragmentation processing state according to the target state code;

and the second extraction module is used for extracting the data type information in the downlink data packet, and extracting the video data fragment length contained in the video data of the downlink data packet when the data type information is determined to contain the video data keyword.

In an alternative embodiment, the generating module 830 is specifically configured to extract and process video data in the downlink data packet, and record the length of the processed video data;

and determining the length of the video data to be processed according to the video data fragment length and the length of the processed video data, and generating a fragment video file with the video data fragment length when the length of the video data to be processed is zero.

For specific limitations of the video file restoration apparatus, reference may be made to the above limitations of the video file restoration method, which are not described herein again. The modules in the video file restoration apparatus can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.

In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data in upstream data packets and downstream data packets in the video stream. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a video file restoration method.

Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.

In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:

acquiring a video stream corresponding to a target video;

extracting feature information of an uplink data packet in a video stream, and extracting video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

and extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

In one embodiment, the processor, when executing the computer program, further performs the steps of:

acquiring a mirror image video stream, performing order preserving processing on a data packet of the mirror image video stream, and storing the data packet of the mirror image video stream after the order preserving processing to a transmission control protocol stream table;

and extracting and identifying a flow identifier in the mirror image video stream in the transmission control protocol flow table, and acquiring the video stream corresponding to the target video after the order preserving processing according to the flow identifier.

In one embodiment, the processor, when executing the computer program, further performs the steps of:

identifying a target identification field of an uplink data packet in a video stream;

and extracting attribute features contained in the target identification field, and determining whether an uplink data packet in the video stream meets a preset feature rule or not according to preset target attribute features.

In one embodiment, the processor, when executing the computer program, further performs the steps of:

and extracting the playing resource name of the target field from the uplink data packet meeting the preset characteristic rule to be used as the file name of the fragmented video file.

In one embodiment, the processor, when executing the computer program, further performs the steps of:

identifying whether the length of video data to be processed in a downlink data packet of a video stream is zero or not;

and if the length of the video data to be processed is zero and the state code of the first response data packet in the downlink data packet does not meet the target state code, ending the processing flow of the downlink data packet.

In one embodiment, the processor, when executing the computer program, further performs the steps of:

extracting a state code of a first response data packet in a downlink data packet of the video stream, and judging that the downlink data packet of the video stream meets a fragmentation processing state according to a target state code;

and extracting data type information in the downlink data packet, and extracting the video data fragment length contained in the video data of the downlink data packet when the data type information is determined to contain the video data keyword.

In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:

acquiring a video stream corresponding to a target video;

extracting feature information of an uplink data packet in a video stream, and extracting video data fragment length contained in a downlink data packet of the video stream when the feature information in the uplink data packet in the video stream meets a preset feature rule;

and extracting the video data in the downlink data packet to generate a fragment video file with the fragment length of the video data.

In one embodiment, the computer program when executed by the processor further performs the steps of:

acquiring a mirror image video stream, performing order preserving processing on a data packet of the mirror image video stream, and storing the data packet of the mirror image video stream after the order preserving processing to a transmission control protocol stream table;

and extracting and identifying a flow identifier in the mirror image video stream in the transmission control protocol flow table, and acquiring the video stream corresponding to the target video after the order preserving processing according to the flow identifier.

In one embodiment, the computer program when executed by the processor further performs the steps of:

identifying a target identification field of an uplink data packet in a video stream;

and extracting attribute features contained in the target identification field, and determining whether an uplink data packet in the video stream meets a preset feature rule or not according to preset target attribute features.

In one embodiment, the computer program when executed by the processor further performs the steps of:

and extracting the playing resource name of the target field from the uplink data packet meeting the preset characteristic rule to be used as the file name of the fragmented video file.

In one embodiment, the computer program when executed by the processor further performs the steps of:

identifying whether the length of video data to be processed in a downlink data packet of a video stream is zero or not;

and if the length of the video data to be processed is zero and the state code of the first response data packet in the downlink data packet does not meet the target state code, ending the processing flow of the downlink data packet.

In one embodiment, the computer program when executed by the processor further performs the steps of:

extracting a state code of a first response data packet in a downlink data packet of the video stream, and judging that the downlink data packet of the video stream meets a fragmentation processing state according to a target state code;

and extracting data type information in the downlink data packet, and extracting the video data fragment length contained in the video data of the downlink data packet when the data type information is determined to contain the video data keyword.

In one embodiment, the computer program when executed by the processor further performs the steps of:

extracting and processing video data in the downlink data packet, and recording the length of the processed video data;

and determining the length of the video data to be processed according to the video data fragment length and the length of the processed video data, and generating a fragment video file with the video data fragment length when the length of the video data to be processed is zero.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.

The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.

The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
