Audio and video signal synchronization method and device

Document No. 1342038, published 2020-07-17.

Note: this technology, "Audio and video signal synchronization method and device" (一种音视频信号同步方法和装置), was designed and created by 胡建成 on 2019-04-29. Abstract: The present application provides an audio and video signal synchronization method and device. The method comprises: buffering audio data and video data in an audio buffer and a video buffer, respectively; when the difference between the first playing duration of the video data buffered in the video buffer and the second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shortening the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer; and fetching audio data and video data of equal duration from the audio buffer and the video buffer, respectively, for output. With the method and device, audio and video synchronization is achieved without the participation of technicians and with a small workload.

1. An audio and video signal synchronization method, characterized in that the method comprises:

buffering audio data and video data in an audio buffer and a video buffer, respectively;

when the difference between a first playing duration of the video data buffered in the video buffer and a second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shortening the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer;

and fetching audio data and video data of equal duration from the audio buffer and the video buffer, respectively, for output.

2. The method according to claim 1, wherein the shortening the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer comprises:

when the second playing duration is longer than the first playing duration, performing frame supplementing processing on the video data buffered in the video buffer;

and when the second playing duration is shorter than the first playing duration, performing frame extraction processing on the video data buffered in the video buffer.

3. The method according to claim 2, wherein the performing frame extraction processing on the video data buffered in the video buffer comprises:

deleting, from the video buffer, the first target frame video data that was buffered earliest in the video buffer.

4. The method according to claim 3, wherein after the deleting the first target frame video data from the video buffer, the method further comprises:

when the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is not less than the specified value, performing frame extraction processing on the video data remaining in the video buffer.

5. The method according to claim 4, wherein the performing frame extraction processing on the video data remaining in the video buffer comprises:

when the difference between the first playing duration and the second playing duration is greater than a preset threshold, deleting video data at intervals, in order from earliest to latest buffering time, starting from the second frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value;

and when the difference between the first playing duration and the second playing duration is not greater than the preset threshold, deleting video data one frame after another, in order from earliest to latest buffering time, starting from the first frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value.

6. The method according to claim 2, wherein the performing frame supplementing processing on the video data buffered in the video buffer comprises:

copying the second target frame video data buffered most recently in the video buffer to obtain copied frame video data, and inserting the copied frame video data after the second target frame video data to obtain the processed video data after frame supplementing.

7. The method according to claim 6, wherein after the obtaining the processed video data after frame supplementing, the method further comprises:

when the difference between the playing duration of the processed video data and the second playing duration is not less than the specified value, performing frame supplementing processing on the processed video data.

8. The method according to claim 7, wherein the performing frame supplementing processing on the processed video data comprises:

copying video data at intervals, in order from latest to earliest buffering time, starting from third target frame video data that is separated from the second target frame video data by one frame, to obtain copied frame video data, and inserting each copied frame after the frame it was copied from, until the difference between the playing duration of the video data finally stored in the video buffer and the second playing duration is smaller than the specified value.

9. An audio and video signal synchronization device, characterized by comprising a buffering module, a processing module, and an output module, wherein

the buffering module is configured to buffer audio data and video data in an audio buffer and a video buffer, respectively;

the processing module is configured to, when the difference between a first playing duration of the video data buffered in the video buffer and a second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shorten the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer;

and the output module is configured to fetch audio data and video data of equal duration from the audio buffer and the video buffer, respectively, and output them.

10. The device according to claim 9, wherein the processing module comprises a frame supplementing module and a frame extraction module, wherein

the frame supplementing module is configured to perform frame supplementing processing on the video data buffered in the video buffer when the second playing duration is longer than the first playing duration;

and the frame extraction module is configured to perform frame extraction processing on the video data buffered in the video buffer when the second playing duration is shorter than the first playing duration.

Technical Field

The present application relates to the field of audio and video technologies, and in particular, to an audio and video signal synchronization method and apparatus.

Background

When an audio and video device plays audio and video data, audio and video synchronization must be ensured. Synchronization is handled mainly at the encoding end and the playing end: the encoding end attaches a timestamp to the audio and video data during encoding and packaging, and the decoding end plays the audio and video data according to that timestamp. For this method to work, however, the audio and video signals must already be synchronized before encoding and packaging.

At present, audio and video devices vary in many ways, and synchronized audio and video signals are difficult to acquire. A technician therefore often sets an audio and video delay manually to ensure that the signals are synchronized before encoding and packaging, so that the timestamps are correct and audio and video synchronization is achieved during playback. Realizing synchronization this way, however, requires the participation of technicians and involves a large workload.

Disclosure of Invention

In view of this, the present application provides an audio and video signal synchronization method and device to reduce the workload of audio and video synchronization.

A first aspect of the present application provides an audio and video signal synchronization method, the method comprising:

buffering audio data and video data in an audio buffer and a video buffer, respectively;

when the difference between a first playing duration of the video data buffered in the video buffer and a second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shortening the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer;

and fetching audio data and video data of equal duration from the audio buffer and the video buffer, respectively, for output.

A second aspect of the present application provides an audio and video signal synchronization device, which comprises a buffering module, a processing module, and an output module, wherein

the buffering module is configured to buffer audio data and video data in an audio buffer and a video buffer, respectively;

the processing module is configured to, when the difference between a first playing duration of the video data buffered in the video buffer and a second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shorten the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer;

and the output module is configured to fetch audio data and video data of equal duration from the audio buffer and the video buffer, respectively, and output them.

According to the audio and video signal synchronization method and device provided by the present application, audio data and video data are first buffered in an audio buffer and a video buffer, respectively; when the difference between the first playing duration of the video data buffered in the video buffer and the second playing duration of the audio data buffered in the audio buffer is not less than a specified value, that difference is shortened by processing the video data buffered in the video buffer; and audio data and video data of equal duration are then fetched from the audio buffer and the video buffer, respectively, for output. The differences between audio and video devices are thereby eliminated and audio and video synchronization is achieved without the participation of technicians, with a small workload.

Drawings

Fig. 1 is a flowchart of a first embodiment of an audio/video signal synchronization method provided in the present application;

fig. 2 is a flowchart of a second embodiment of an audio/video signal synchronization method provided in the present application;

fig. 3 is a flowchart of a third embodiment of an audio/video signal synchronization method provided in the present application;

fig. 4 is a flowchart of a fourth embodiment of an audio/video signal synchronization method provided in the present application;

fig. 5 is a flowchart of a fifth embodiment of an audio/video signal synchronization method provided in the present application;

fig. 6 is a flowchart of a sixth embodiment of an audio/video signal synchronization method provided in the present application;

fig. 7 is a hardware structure diagram of a computer device where an audio/video signal synchronization apparatus according to an exemplary embodiment of the present application is located;

fig. 8 is a schematic structural diagram of a first embodiment of an audio/video signal synchronization apparatus provided in the present application;

fig. 9 is a schematic structural diagram of a second embodiment of the audio/video signal synchronization apparatus provided in the present application.

Detailed Description

Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with certain aspects of the present application, as detailed in the appended claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.

The application provides an audio and video signal synchronization method and device, which aim to reduce the workload of audio and video synchronization.

Several specific embodiments are given below to describe the technical solution of the present application in detail. These embodiments may be combined with one another, and details of the same or similar concepts or processes may not be repeated in every embodiment.

Fig. 1 is a flowchart of a first embodiment of an audio and video signal synchronization method provided in the present application. Referring to fig. 1, the method provided in this embodiment may include:

s101, respectively caching audio data and video data through an audio cache region and a video cache region.

Specifically, the method provided by the present application is applied to a computer device. In this step, the computer device may obtain the audio data from an audio device and the video data from a video device, and buffer them in the audio buffer and the video buffer, respectively.
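The patent itself contains no reference code. Purely as an illustrative sketch, the two buffers might be modeled in Python as queues whose playing durations are derived from an assumed frame rate and sampling rate; all names and values below are assumptions, not part of the original disclosure:

```python
from collections import deque

VIDEO_FPS = 25        # assumed video frame rate (frames per second)
AUDIO_RATE = 16_000   # assumed audio sampling rate (samples per second)

video_buffer = deque()   # one entry per buffered video frame
audio_buffer = deque()   # one entry per buffered audio sample

def first_playing_duration(video_buf):
    """Playing duration, in seconds, of the video data in the video buffer."""
    return len(video_buf) / VIDEO_FPS

def second_playing_duration(audio_buf):
    """Playing duration, in seconds, of the audio data in the audio buffer."""
    return len(audio_buf) / AUDIO_RATE
```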

S102, when the difference between the first playing duration of the video data buffered in the video buffer and the second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shortening the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer.

Specifically, the specified value is set according to actual needs; this embodiment does not limit its specific value. For example, in one embodiment, the specified value equals the playing duration of one frame of video data buffered in the video buffer.

It should be noted that when the difference between the first playing duration and the second playing duration is not less than the specified value, the amount of audio data buffered in the audio buffer is considered unequal to the amount of video data buffered in the video buffer. For example, when the second playing duration is longer than the first playing duration, the amount of audio data exceeds that of video data, so the audio and video are out of sync; specifically, the sound is heard before the picture is seen. Conversely, when the second playing duration is shorter than the first playing duration, the amount of audio data is smaller than that of video data, and the picture is seen before the sound is heard. This step therefore shortens the difference between the first playing duration and the second playing duration to achieve audio and video synchronization.

In a specific implementation, the video data buffered in the video buffer can be processed to shorten the difference between the first playing duration and the second playing duration.
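A minimal sketch of this decision, under the same illustrative assumptions as above (one queue entry per frame, integer-millisecond durations, the specified value taken as one frame's playing duration), might look as follows; the simple duplicate-newest and drop-oldest operations stand in for the frame supplementing and frame extraction that the later embodiments refine:

```python
from collections import deque

FRAME_MS = 40        # assumed playing duration of one video frame (25 fps)
SPECIFIED_MS = 40    # assumed specified value: one frame's duration

def shorten_gap(video_buffer, audio_ms):
    """S102 sketch: process the video buffer until the gap between the video
    playing duration (first) and the audio playing duration (second) falls
    below the specified value. Durations are integer milliseconds."""
    video_ms = len(video_buffer) * FRAME_MS            # first playing duration
    while video_buffer and abs(video_ms - audio_ms) >= SPECIFIED_MS:
        if audio_ms > video_ms:                        # audio ahead: supplement
            video_buffer.append(video_buffer[-1])      # duplicate newest frame
            video_ms += FRAME_MS
        else:                                          # video ahead: extract
            video_buffer.popleft()                     # discard oldest frame
            video_ms -= FRAME_MS

# 6 frames (240 ms of video) against 360 ms of audio: three frames are added
frames = deque(["T11", "T12", "T13", "T14", "T15", "T16"])
shorten_gap(frames, 360)   # frames now ends with three extra copies of T16
```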

S103, fetching audio data and video data of equal duration from the audio buffer and the video buffer, respectively, and outputting them.

Specifically, the amount of time of audio data and video data to fetch may be determined from the frame rate of the video data. For example, when the frame rate of the video data is 25 Hz and the sampling rate of the audio data is 16 kHz, 40 ms of audio data and of video data (i.e., 640 audio samples and 1 video frame) may be fetched and output.

It should be noted that "fetching" here means obtaining video data and audio data from the video buffer and the audio buffer, and deleting the obtained data from the respective buffers once it has been obtained.
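As an illustrative sketch of this fetch-and-delete step, assuming as in the example above 25 fps video and 16 kHz audio, so that 40 ms corresponds to one video frame and 16000 × 0.040 = 640 audio samples (all names here are hypothetical):

```python
from collections import deque

VIDEO_FPS = 25                               # assumed frame rate
AUDIO_RATE = 16_000                          # assumed sampling rate
SAMPLES_PER_FRAME = AUDIO_RATE // VIDEO_FPS  # 640 samples per 40 ms frame

def fetch_equal_time(video_buffer, audio_buffer):
    """S103 sketch: remove and return 40 ms of data from each buffer.

    The caller is assumed to have ensured both buffers hold at least 40 ms.
    """
    frame = video_buffer.popleft()           # 1 frame = 40 ms of video
    samples = [audio_buffer.popleft() for _ in range(SAMPLES_PER_FRAME)]
    return frame, samples                    # handed to the encoder or player

# usage
video_buffer = deque(["frame0", "frame1"])
audio_buffer = deque(range(2 * SAMPLES_PER_FRAME))
frame, samples = fetch_equal_time(video_buffer, audio_buffer)
```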

For example, in one embodiment, the fetched video data and audio data may be output to an encoding device for encoding, after which the encoded data is sent to a playing device for decoding and playing. In another embodiment, the fetched video data and audio data may be output directly to the playing device for playing.

In the audio and video signal synchronization method provided by this embodiment, audio data and video data are first buffered in an audio buffer and a video buffer, respectively; when the difference between the first playing duration of the video data buffered in the video buffer and the second playing duration of the audio data buffered in the audio buffer is not less than a specified value, that difference is shortened by processing the video data buffered in the video buffer; and audio data and video data of equal duration are then fetched from the two buffers for output. Audio and video synchronization is thereby achieved, and the differences between audio and video devices eliminated, without the participation of technicians and with a small workload.

Fig. 2 is a flowchart of a second embodiment of the audio and video signal synchronization method provided by the present application. Referring to fig. 2, in the method provided in this embodiment, step S102 may include:

S201, when the second playing duration is longer than the first playing duration, performing frame supplementing processing on the video data buffered in the video buffer.

Specifically, frame supplementing means adding video data to the video buffer. For example, in one embodiment, any frame of video data buffered in the video buffer may be copied to obtain copied frame video data, and the copied frame inserted after the frame it was copied from, thereby accomplishing the frame supplementing.

Optionally, in one possible implementation, this step may include:

copying the second target frame video data buffered most recently in the video buffer to obtain copied frame video data, and inserting the copied frame video data after the second target frame video data to obtain the processed video data after frame supplementing.

For example, in one embodiment, the video frames buffered in the video buffer are T11, T12, T13, T14, T15, and T16 (named from earliest to latest buffering time). In this step, T16 is copied to obtain a copied frame (also denoted T16), and the copy is inserted after the original, yielding the frame-supplemented video data (the video buffer then holds T11, T12, T13, T14, T15, T16, and T16).
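Expressed as code under the same illustrative assumptions (a deque of frames ordered from earliest to latest buffering time), this single frame-supplementing step might look like:

```python
from collections import deque
import copy

video_buffer = deque(["T11", "T12", "T13", "T14", "T15", "T16"])

copied_frame = copy.deepcopy(video_buffer[-1])  # copy the newest frame, T16
video_buffer.append(copied_frame)               # insert the copy after the original
# the buffer now holds T11, T12, T13, T14, T15, T16, T16
```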

S202, when the second playing duration is shorter than the first playing duration, performing frame extraction processing on the video data buffered in the video buffer.

Specifically, frame extraction means deleting video data from the video buffer. For example, in one embodiment, any frame of video data buffered in the video buffer may be deleted from the buffer, thereby accomplishing the frame extraction.

Optionally, in one possible implementation, this step may include:

deleting, from the video buffer, the first target frame video data that was buffered earliest in the video buffer.

For example, in one embodiment, the video frames buffered in the video buffer are T21, T22, T23, T24, T25, T26, … (named from earliest to latest buffering time). In this example, T21 is deleted from the video buffer.

In the method provided in this embodiment, frame supplementing is performed on the video data buffered in the video buffer when the second playing duration is longer than the first playing duration, and frame extraction is performed when the second playing duration is shorter than the first playing duration. The difference between the first playing duration and the second playing duration is thereby shortened, achieving audio and video synchronization.

Fig. 3 is a flowchart of a third embodiment of the audio and video signal synchronization method provided in the present application. In the method provided in this embodiment, the frame extraction processing on the video data buffered in the video buffer may include:

S301, deleting, from the video buffer, the first target frame video data that was buffered earliest in the video buffer.

S302, when the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is not less than the specified value, performing frame extraction processing on the video data remaining in the video buffer.

For example, in one embodiment, the video frames buffered in the video buffer are T31, T32, T33, T34, T35, T36, … (named from earliest to latest buffering time). In this example, T31 is deleted from the video buffer (the remaining video data are then T32, T33, T34, T35, T36, …). It is then determined whether the difference between the playing duration of the remaining video data and the second playing duration is smaller than the specified value; if not, frame extraction is performed on the remaining video data. In a specific implementation, for example, any frame of the remaining video data may be deleted to accomplish this.
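A minimal sketch of this repeated extraction, under the assumptions used above (one entry per frame, 40 ms frames, the specified value equal to one frame's duration); of the options the text allows, this sketch always deletes the oldest remaining frame:

```python
from collections import deque

FRAME_MS = 40       # assumed playing duration of one frame
SPECIFIED_MS = 40   # assumed specified value

def extract_until_close(video_buffer, audio_ms):
    """Embodiment-3 sketch: delete the earliest frame, then keep extracting
    while the remaining video duration still exceeds the audio duration
    (audio_ms, in milliseconds) by at least the specified value."""
    video_buffer.popleft()   # delete the first target frame (e.g. T31)
    while video_buffer and len(video_buffer) * FRAME_MS - audio_ms >= SPECIFIED_MS:
        video_buffer.popleft()

# 6 frames (240 ms) against 120 ms of audio: T31, then T32 and T33, are deleted
frames = deque(["T31", "T32", "T33", "T34", "T35", "T36"])
extract_until_close(frames, 120)   # frames left: T34, T35, T36
```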

In the method provided by this embodiment, the first target frame video data buffered earliest in the video buffer is deleted from the video buffer, and frame extraction is then performed on the remaining video data whenever the difference between its playing duration and the second playing duration is not less than the specified value. The gap between the playing duration of the video data finally buffered in the video buffer and that of the audio data buffered in the audio buffer is thereby reduced, bringing the two closer together and achieving audio and video synchronization.

Fig. 4 is a flowchart of a fourth embodiment of the audio and video signal synchronization method provided in the present application. Referring to fig. 4, in the method provided in this embodiment, on the basis of the foregoing embodiment, the frame extraction processing on the video data remaining in the video buffer may include:

s401, when the time length of the difference between the first playing time length and the second playing time length is larger than a preset threshold value, starting from the second frame video data in the video data left in the video cache region, deleting the video data at intervals according to the sequence of the cache time from early to late until the time length of the difference between the playing time length of the video data left in the video cache region and the second playing time length is smaller than the specified value.

Specifically, the preset threshold is set according to actual needs; this embodiment does not limit its specific value. The following description takes a preset threshold of 1 second as an example.

Continuing the example of the third embodiment: after the frame extraction there, the video data remaining in the video buffer are T32, T33, T34, T35, T36, and …. In this step, video data are deleted at intervals, in order from earliest to latest buffering time, starting from T33, until the difference between the playing duration of the video data finally remaining in the video buffer and the second playing duration is smaller than the specified value.

In a specific implementation, this may proceed as follows:

(1) In order from earliest to latest buffering time, take the second frame of the video data remaining in the video buffer as the video data to be deleted.

(2) Delete the video data to be deleted from the video buffer to obtain first video data.

(3) Determine whether the difference between the playing duration of the first video data and the second playing duration is smaller than the specified value.

(4) If so, stop; if not, take the target frame video data separated from the just-deleted frame by one frame, in order from earliest to latest buffering time, as the next video data to be deleted, and return to the step of deleting the video data to be deleted from the video buffer.

Continuing the example above, the video data remaining in the video buffer are T32, T33, T34, T35, T36, and …. First, T33 is taken as the video data to be deleted and removed from the video buffer to obtain the first video data (the remaining video data are then T32, T34, T35, T36, …). It is then determined whether the difference between the playing duration of the first video data and the second playing duration is smaller than the specified value. If, for example, it is determined that the difference is not smaller than the specified value, T35, which is separated from T33 by one frame, is taken as the next video data to be deleted, and the deletion step is performed again.
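The interval deletion of steps (1) to (4) might be sketched as follows, under the same illustrative assumptions, with `frames` holding the video data remaining after the first extraction, ordered from earliest to latest buffering time:

```python
FRAME_MS = 40       # assumed playing duration of one frame
SPECIFIED_MS = 40   # assumed specified value

def interval_drop(frames, audio_ms):
    """S401 sketch: delete every other remaining frame, starting from the
    second one, until the duration gap falls below the specified value."""
    index = 1                                    # second remaining frame, e.g. T33
    while index < len(frames):
        if len(frames) * FRAME_MS - audio_ms < SPECIFIED_MS:
            break                                # gap is now small enough
        del frames[index]                        # delete the frame to be deleted
        index += 1                               # after deletion, this skips one frame

# T32..T36 remain (200 ms of video) against 120 ms of audio: T33 and T35 go
frames = ["T32", "T33", "T34", "T35", "T36"]
interval_drop(frames, 120)    # frames left: T32, T34, T36
```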

S402, when the difference between the first playing duration and the second playing duration is not greater than the preset threshold, deleting video data one frame after another, in order from earliest to latest buffering time, starting from the first frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value.

In a specific implementation, the process may include the following steps:

(1) In order from earliest to latest buffering time, take the first frame of the video data remaining in the video buffer as the video data to be deleted.

(2) Delete the video data to be deleted from the video buffer to obtain second video data.

(3) Determine whether the difference between the playing duration of the second video data and the second playing duration is smaller than the specified value.

(4) If so, stop; if not, take the frame immediately following the just-deleted frame (i.e., separated from it by zero frames), in order from earliest to latest buffering time, as the next video data to be deleted, and return to the step of deleting the video data to be deleted from the video buffer.

For example, in one embodiment, the video frames buffered in the video buffer are T41, T42, T43, T44, T45, T46, … (named from earliest to latest buffering time), and the difference between the first playing duration and the second playing duration is not greater than the preset threshold. In this example, T41 is first deleted from the video buffer, after which the remaining video data are T42, T43, T44, T45, T46, and …. The difference between the playing duration of the remaining video data and the second playing duration is still not smaller than the specified value, so frame extraction continues on the remaining video data.

In this example, T42 is then taken as the video data to be deleted and removed from the video buffer, yielding the second video data. It is further determined whether the difference between the playing duration of the second video data and the second playing duration is smaller than the specified value; if, for example, it is determined not to be, T43 is taken as the next video data to be deleted, and the deletion step is performed again.
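The sequential deletion of steps (1) to (4) admits an equally short sketch under the same assumptions, with frames ordered from earliest to latest buffering time:

```python
FRAME_MS = 40       # assumed playing duration of one frame
SPECIFIED_MS = 40   # assumed specified value

def sequential_drop(frames, audio_ms):
    """S402 sketch: delete frames one by one from the earliest remaining frame
    until the duration gap falls below the specified value."""
    while frames and len(frames) * FRAME_MS - audio_ms >= SPECIFIED_MS:
        frames.pop(0)                     # delete the current earliest frame

# T41..T46 buffered (240 ms) against 120 ms of audio: T41, T42, T43 are deleted
frames = ["T41", "T42", "T43", "T44", "T45", "T46"]
sequential_drop(frames, 120)   # frames left: T44, T45, T46
```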

This embodiment provides a method of performing frame extraction on the video data remaining in the video buffer. With this method, the difference between the playing duration of the remaining video data and the second playing duration can be made smaller than the specified value, so the gap between the amounts of data in the video buffer and the audio buffer is reduced as far as possible and audio and video synchronization is achieved.

Fig. 5 is a flowchart of a fifth embodiment of the audio and video signal synchronization method provided by the present application. In the method provided in this embodiment, on the basis of the foregoing embodiment, the frame supplementing processing on the video data buffered in the video buffer may include:

S501, copying the second target frame video data buffered most recently in the video buffer to obtain copied frame video data, and inserting the copied frame video data after the second target frame video data to obtain the processed video data after frame supplementing.

S502, when the difference between the playing duration of the processed video data and the second playing duration is not less than the specified value, performing frame supplementing processing on the processed video data.

For example, in one embodiment, the video frames buffered in the video buffer are T51, T52, T53, T54, T55, and T56 (named from earliest to latest buffering time). In this example, T56 is first copied to obtain a copied frame (also denoted T56), and the copy is inserted after the original, yielding the processed video data (whose video frames are T51, T52, T53, T54, T55, T56, and T56). If it is then determined that the difference between the playing duration of the processed video data and the second playing duration is not less than the specified value, frame supplementing is performed on the processed video data. In a specific implementation, any frame of the processed video data may be copied and the copy inserted after the frame it was copied from, thereby accomplishing the frame supplementing.

In the method provided by this embodiment, the second target frame video data buffered most recently in the video buffer is copied, the copy is inserted after the second target frame to obtain the processed video data, and frame supplementing is then performed on the processed video data whenever the difference between its playing duration and the second playing duration is not less than the specified value. The gap between the playing duration of the video data finally buffered in the video buffer and that of the audio data buffered in the audio buffer is thereby reduced, bringing the two closer together and achieving audio and video synchronization.

Optionally, in one embodiment, the frame supplementing processing on the processed video data may include:

copying video data at intervals, in order from latest to earliest buffering time, starting from the third target frame video data that is separated from the second target frame video data by one frame, to obtain copied frame video data, and inserting each copied frame after the frame it was copied from, until the difference between the playing duration of the video data finally stored in the video buffer and the second playing duration is smaller than the specified value.

In a specific implementation, the process may include the following steps:

(1) In order from latest to earliest buffering time, take the third target frame video data, separated from the second target frame video data by one frame, as the video data to be copied;

(2) copy the video data to be copied to obtain copied frame video data, and insert the copied frame after the video data to be copied to obtain third video data;

(3) determine whether the difference between the playing duration of the third video data and the second playing duration is smaller than the specified value.

(4) If so, stop; if not, take the target frame video data separated from the just-copied frame by one frame, in order from latest to earliest buffering time, as the next video data to be copied, and return to the copying step.

For example, in one embodiment, the video frames buffered in the video buffer are T61, T62, T63, T64, T65, and T66 (named from earliest to latest buffering time). In this example, T66 is first copied and the copy inserted after the original, yielding the processed video data (the video buffer then holds T61, T62, T63, T64, T65, T66, and T66). The difference between the playing duration of the processed video data and the second playing duration is not less than the specified value, so frame supplementing is performed on the processed video data.

In this example, in order from latest to earliest buffering time, T64, which is separated from T66 by one frame, is taken as the video data to be copied. T64 is copied to obtain a copied frame (also denoted T64), and the copy is inserted after the original, yielding the third video data (the video buffer then holds T61, T62, T63, T64, T64, T65, T66, and T66). It is then determined whether the difference between the playing duration of the third video data and the second playing duration is smaller than the specified value; if, for example, it is determined not to be, T62 is taken as the next video data to be copied, and the copying step is performed again.
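The backward interval copying of steps (1) to (4) might be sketched as follows, under the same illustrative assumptions; note that this sketch simply stops if it reaches the front of the buffer before the gap closes:

```python
FRAME_MS = 40       # assumed playing duration of one frame
SPECIFIED_MS = 40   # assumed specified value

def interval_pad(frames, audio_ms):
    """Claim-8 / S610 sketch: after the initial copy of the newest frame,
    duplicate every other frame (T64, T62, ...) from latest to earliest
    until the duration gap falls below the specified value."""
    frames.append(frames[-1])             # S501: copy the newest frame (T66)
    index = len(frames) - 4               # third target frame, e.g. T64
    while index >= 0 and audio_ms - len(frames) * FRAME_MS >= SPECIFIED_MS:
        frames.insert(index + 1, frames[index])  # insert the copy after its source
        index -= 2                        # skip one frame toward the front

# T61..T66 buffered (240 ms) against 360 ms of audio: T66, T64, T62 are copied
frames = ["T61", "T62", "T63", "T64", "T65", "T66"]
interval_pad(frames, 360)
# frames now: T61, T62, T62, T63, T64, T64, T65, T66, T66
```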

This embodiment provides a method of performing frame supplementing on the processed video data. With this method, the difference between the playing duration of the video data finally stored in the video buffer and the second playing duration is made smaller than the specified value, so the gap between the amounts of data in the video buffer and the audio buffer is reduced as far as possible and audio and video synchronization is achieved.

A more specific example is given below to illustrate the technical solution of the present application in detail. Fig. 6 is a flowchart of a sixth embodiment of the audio and video signal synchronization method provided in the present application. Referring to fig. 6, the method provided in this embodiment may include:

s601, respectively caching audio data and video data through an audio cache region and a video cache region.

Specifically, the specific implementation process and implementation principle of this step may refer to the description in step S101, and are not described herein again.

S602, determining whether a difference between a first playing time of the video data cached in the video cache region and a second playing time of the audio data cached in the audio cache region is smaller than a specified value, if not, performing step S603, and if so, performing step S611.

It should be noted that, when the time length of the difference between the first playing time length and the second playing time length is judged to be smaller than the specified value, step S611 is executed.

S603, determining whether the second playing duration is longer than the first playing duration; if not, performing step S604, and if so, performing step S608.

S604, deleting, from the video buffer, the first target frame video data that was buffered earliest in the video buffer.

Specifically, for the implementation process and principle of this step, refer to the description of step S301; details are not repeated here.

S605, determining whether the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value; if not, performing step S606, and if so, performing step S611.

That is, when the difference between the playing duration of the remaining video data and the second playing duration is smaller than the specified value, step S611 is executed directly.

S606, when the difference between the first playing duration and the second playing duration is greater than the preset threshold, deleting video data at intervals, in order from earliest to latest buffering time, starting from the second frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value.

S607, when the difference between the first playing duration and the second playing duration is not greater than the preset threshold, deleting video data one frame after another, in order from earliest to latest buffering time, starting from the first frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value.

Specifically, for the implementation process and principle of steps S606 and S607, refer to the description of the fourth embodiment; details are not repeated here.

Referring to fig. 6, note that after step S606 or step S607 is executed, step S611 is executed.

S608, copying the second target frame video data buffered most recently in the video buffer to obtain copied frame video data, and inserting the copied frame video data after the second target frame video data to obtain the processed video data after frame supplementing.

Specifically, for the implementation process and principle of this step, refer to the description of step S501; details are not repeated here.

S609, determining whether the difference between the playing duration of the processed video data and the second playing duration is smaller than the specified value; if not, performing step S610, and if so, performing step S611.

That is, when the difference between the playing duration of the processed video data and the second playing duration is smaller than the specified value, step S611 is executed directly.

S610, in order from latest to earliest buffering time, copying video data at intervals, starting from the third target frame video data separated from the second target frame video data by one frame, to obtain copied frame video data, and inserting each copied frame after the frame it was copied from, until the difference between the playing duration of the video data finally stored in the video buffer and the second playing duration is smaller than the specified value.

Specifically, for the implementation process and principle of this step, refer to the description in the foregoing embodiments; details are not repeated here.

Referring to fig. 6, note that after step S610 is executed, step S611 is executed.

S611, fetching audio data and video data of equal duration from the audio buffer and the video buffer, respectively, for output.

Specifically, for the implementation process and principle of this step, refer to the description of step S103; details are not repeated here.

This embodiment provides an audio and video signal synchronization method with which audio and video synchronization can be achieved without the participation of technicians and with a small workload.

Corresponding to the embodiments of the audio and video signal synchronization method, the present application also provides embodiments of an audio and video signal synchronization device.

The embodiments of the audio and video signal synchronization device can be applied to a computer device. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the device is formed, as a logical entity, by the processor of the computer device on which it runs reading the corresponding computer program instructions into memory and executing them. In terms of hardware, fig. 7 shows a hardware structure diagram of a computer device on which an audio and video signal synchronization device according to an exemplary embodiment of the present application is located. Besides the storage 710, the processor 720, the memory 730, and the network interface 740 shown in fig. 7, the computer device on which the device of this embodiment is located may also include other hardware according to the actual function of the device; details are not repeated here.

Fig. 8 is a schematic structural diagram of a first embodiment of the audio and video signal synchronization device provided in the present application. Referring to fig. 8, the device provided in this embodiment may include a buffering module 810, a processing module 820, and an output module 830, wherein

the buffering module 810 is configured to buffer audio data and video data in an audio buffer and a video buffer, respectively;

the processing module 820 is configured to, when the difference between the first playing duration of the video data buffered in the video buffer and the second playing duration of the audio data buffered in the audio buffer is not less than a specified value, shorten the difference between the first playing duration and the second playing duration by processing the video data buffered in the video buffer;

the output module 830 is configured to fetch audio data and video data of equal duration from the audio buffer and the video buffer, respectively, and output them.

The device of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1; the implementation principle and technical effect are similar and are not repeated here.

Further, fig. 9 is a schematic structural diagram of a second embodiment of the audio and video signal synchronization device provided in the present application. Referring to fig. 9, in the device provided in this embodiment, on the basis of the above embodiments, the processing module 820 includes a frame supplementing module 8201 and a frame extraction module 8202, wherein

the frame supplementing module 8201 is configured to perform frame supplementing processing on the video data buffered in the video buffer when the second playing duration is longer than the first playing duration;

the frame extraction module 8202 is configured to perform frame extraction processing on the video data buffered in the video buffer when the second playing duration is shorter than the first playing duration.

Further, the frame extraction module 8202 is specifically configured to delete, from the video buffer, the first target frame video data that was buffered earliest in the video buffer.

Further, the frame extraction module 8202 is also configured to, after the first target frame video data is deleted from the video buffer, perform frame extraction processing on the video data remaining in the video buffer when the difference between the playing duration of the remaining video data and the second playing duration is not less than the specified value.

Further, the frame extraction module 8202 is also configured to:

when the difference between the first playing duration and the second playing duration is greater than a preset threshold, delete video data at intervals, in order from earliest to latest buffering time, starting from the second frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value;

and when the difference between the first playing duration and the second playing duration is not greater than the preset threshold, delete video data one frame after another, in order from earliest to latest buffering time, starting from the first frame of the video data remaining in the video buffer, until the difference between the playing duration of the video data remaining in the video buffer and the second playing duration is smaller than the specified value.

Further, the frame supplementing module 8201 is specifically configured to copy the second target frame video data buffered most recently in the video buffer to obtain copied frame video data, and insert the copied frame video data after the second target frame video data to obtain the processed video data after frame supplementing.

Further, the frame supplementing module 8201 is also configured to, after the processed video data is obtained, perform frame supplementing processing on the processed video data when the difference between the playing duration of the processed video data and the second playing duration is not less than the specified value.

Further, the frame supplementing module 8201 is also configured to copy video data at intervals, in order from latest to earliest buffering time, starting from the third target frame video data separated from the second target frame video data by one frame, to obtain copied frame video data, and insert each copied frame after the frame it was copied from, until the difference between the playing duration of the video data finally stored in the video buffer and the second playing duration is smaller than the specified value.

The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the audio and video signal synchronization methods provided herein.

In particular, computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, by way of example, semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.

With reference to fig. 7, the present application further provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any of the audio and video signal synchronization methods provided by the present application.

The above description is only exemplary of the present application and is not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.
