Method and device for removing video jitter

Document No.: 1696995  Publication date: 2019-12-10

Reading note: This technique, "Method and device for removing video jitter" (一种去除视频抖动的方法及装置), was designed and created by 陈睿智 (Chen Ruizhi) on 2018-05-31. Abstract: The application provides a method for removing video jitter, which comprises the following steps: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression; determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n; and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed. Through the above steps, video jitter is removed in real time.

1. A method for removing video jitter, the method comprising:

determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression;

determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images;

obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n;

and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

2. The method of claim 1, further comprising: storing the original images in a first queue;

and storing the position transformation information of the subsequent image relative to the previous image in each pair of original images in a second queue.

3. The method according to claim 2, wherein the step of obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images comprises:

when the images stored in the first queue reach a first number and the position transformation information stored in the second queue reaches the first number, obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images.

4. The method of claim 3, further comprising, after the step of obtaining the deformation information corresponding to the previous image in the first pair of original images:

before storing an image into the first queue again, taking out the image at the head of the first queue; and

before storing position transformation information into the second queue again, taking out the position transformation information at the head of the second queue.

5. The method of claim 1, further comprising:

compressing each pair of original images by a first multiple;

determining feature points on each image in each pair of compressed images;

determining two corresponding feature points on the front and rear images in each pair of compressed images as a feature point pair;

and determining the position information of the feature point pairs in each pair of compressed images.

6. The method according to claim 5, wherein the step of determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images comprises:

scaling up the position information of the feature point pairs in each pair of compressed images by the first multiple to obtain the position information of the feature point pairs in each pair of original images.

7. The method according to claim 3, wherein the step of determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images comprises:

partitioning each of the front and rear images in each pair of original images;

determining the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image in each pair of original images according to the position information of the feature point pairs in the corresponding partitions;

and determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image.

8. The method of claim 7, wherein the position information is coordinates, the position transformation information is a transformation matrix, and the deformation information is a deformation matrix.

9. The method of claim 8, wherein the step of deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images comprises:

deforming, partition by partition, the previous image in the mth pair of original images according to the deformation matrix corresponding to the previous image in the mth pair of original images;

and cropping the edges of the deformed previous image in the mth pair of original images.

10. An apparatus for removing video jitter, the apparatus comprising:

a position information acquisition unit, configured to determine the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair is formed by two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression;

a position transformation information acquisition unit, configured to determine the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images;

a deformation information acquisition unit, configured to obtain the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n;

and a deformation processing unit, configured to deform the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

11. The apparatus of claim 10, further comprising:

an image storage unit, configured to store the original images in a first queue;

and a position transformation information storage unit, configured to store the position transformation information of the subsequent image relative to the previous image in each pair of original images in a second queue.

12. The apparatus of claim 10, further comprising:

a compression unit, configured to compress each pair of original images by a first multiple;

a feature point determination unit, configured to determine feature points on each image in each pair of compressed images;

a feature point pair determination unit, configured to determine two corresponding feature points on the front and rear images in each pair of compressed images as a feature point pair;

and a second position information acquisition unit, configured to determine the position information of the feature point pairs in each pair of compressed images.

13. The apparatus according to claim 10, wherein the position transformation information acquisition unit comprises:

an image partition subunit, configured to partition each of the front and rear images in each pair of original images;

a first position transformation information acquisition subunit, configured to determine the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image in each pair of original images;

and a second position transformation information acquisition subunit, configured to determine the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image.

14. The apparatus according to claim 10, wherein the deformation processing unit comprises:

a deformation subunit, configured to deform, partition by partition, the previous image in the mth pair of original images according to the deformation matrix corresponding to the previous image in the mth pair of original images;

and a cropping subunit, configured to crop the edges of the deformed previous image in the mth pair of original images.

15. An electronic device, characterized in that the electronic device comprises:

a processor; and

a memory storing a program for removing video jitter which, when read and executed by the processor, performs the following operations:

determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression;

determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images;

obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n;

and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

16. A computer-readable storage medium having stored thereon a program for removing video jitter, wherein the program, when read and executed by a processor, performs operations comprising:

determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression;

determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images;

obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n;

and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

Technical Field

The present application relates to the field of video processing, and in particular to a method and an apparatus for removing video jitter. The application also relates to an electronic device and a computer-readable storage medium.

Background

A video of any substantial duration is formed from many image frames that change rapidly and continuously. When a video is shot, relative motion between the video capture device and the scene can cause large displacements between the rapidly changing captured images, so the video exhibits jitter.

Disclosure of Invention

The application provides a method for removing video jitter, aiming to solve the technical problem in the prior art that jitter cannot be removed in real time.

The application provides a method for removing video jitter, which comprises the following steps: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression; determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n; and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

Optionally, the method further includes: storing the original images in a first queue; and storing the position transformation information of the subsequent image relative to the previous image in each pair of original images in a second queue.

Optionally, the step of obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images includes: when the images stored in the first queue reach a first number and the position transformation information stored in the second queue reaches the first number, obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images.

Optionally, after the step of obtaining the deformation information corresponding to the previous image in the first pair of original images, the method further includes: before storing an image into the first queue again, taking out the image at the head of the first queue; and before storing position transformation information into the second queue again, taking out the position transformation information at the head of the second queue.

Optionally, the method further includes: compressing each pair of original images by a first multiple; determining feature points on each image in each pair of compressed images; determining two corresponding feature points on the front and rear images in each pair of compressed images as a feature point pair; and determining the position information of the feature point pairs in each pair of compressed images.

Optionally, the step of determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images includes: scaling up the position information of the feature point pairs in each pair of compressed images by the first multiple to obtain the position information of the feature point pairs in each pair of original images.

Optionally, the step of determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images includes: partitioning each of the front and rear images in each pair of original images; determining the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image according to the position information of the feature point pairs in the corresponding partitions; and determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image.

Optionally, the position information is coordinates, the position transformation information is a transformation matrix, and the deformation information is a deformation matrix.

Optionally, the step of deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images includes: deforming, partition by partition, the previous image in the mth pair of original images according to the deformation matrix corresponding to the previous image in the mth pair of original images; and cropping the edges of the deformed previous image in the mth pair of original images.

The present application further provides an apparatus for removing video jitter, the apparatus comprising: a position information acquisition unit, configured to determine the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair is formed by two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression; a position transformation information acquisition unit, configured to determine the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; a deformation information acquisition unit, configured to obtain the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n; and a deformation processing unit, configured to deform the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

Optionally, the apparatus further comprises: an image storage unit, configured to store the original images in a first queue; and a position transformation information storage unit, configured to store the position transformation information of the subsequent image relative to the previous image in each pair of original images in a second queue.

Optionally, the apparatus further comprises: a compression unit, configured to compress each pair of original images by a first multiple; a feature point determination unit, configured to determine feature points on each image in each pair of compressed images; a feature point pair determination unit, configured to determine two corresponding feature points on the front and rear images in each pair of compressed images as a feature point pair; and a second position information acquisition unit, configured to determine the position information of the feature point pairs in each pair of compressed images.

Optionally, the position transformation information acquisition unit includes: an image partition subunit, configured to partition each of the front and rear images in each pair of original images; a first position transformation information acquisition subunit, configured to determine the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image in each pair of original images; and a second position transformation information acquisition subunit, configured to determine the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position transformation information of each partition of the subsequent image relative to the corresponding partition of the previous image.

Optionally, the deformation processing unit includes: a deformation subunit, configured to deform, partition by partition, the previous image in the mth pair of original images according to the deformation matrix corresponding to the previous image in the mth pair of original images; and a cropping subunit, configured to crop the edges of the deformed previous image in the mth pair of original images.

The present application further proposes an electronic device, comprising: a processor; and a memory storing a program for removing video jitter which, when read and executed by the processor, performs the following operations: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression; determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n; and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

The present application also provides a computer-readable storage medium having stored thereon a program for removing video jitter, wherein the program, when read and executed by a processor, performs the following operations: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images in each pair of images, and the original images are the images before compression; determining the position transformation information of the subsequent image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; obtaining the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the subsequent image relative to the previous image in the n pairs of original images, wherein n and m are positive integers, and m is not greater than n; and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, to obtain the previous image in the mth pair of original images with the jitter removed.

According to this technical solution for removing video jitter, the position information of the feature point pairs in each pair of original images is first determined from the position information of the feature point pairs in each pair of compressed images. Because compression reduces the size of the original images, the electronic device can perform the various processing steps faster, so the position information of every feature point pair on each acquired image can be obtained in real time. After the position information of the feature point pairs on each pair of images is acquired in real time, the position transformation information of the subsequent image relative to the previous image in each pair of original images is correspondingly determined in real time from that position information. Once the position transformation information for the n pairs of original images has been obtained, the deformation information corresponding to the previous image in the mth pair of original images is obtained, and that image is deformed according to this deformation information to obtain a jitter-free previous image. By analogy, the images after it are deformed and de-jittered in turn, achieving real-time jitter removal. Moreover, the solution removes jitter in real time without relying on auxiliary equipment, which is highly convenient and solves the problem in the prior art that jitter either cannot be removed in real time or its real-time removal requires an external device such as a gyroscope.
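The windowed scheme above can be illustrated with a minimal numerical sketch. Here the position transformation information is reduced to pure 2-D translations and the smoother is a simple mean over the n buffered pair transforms; both are assumptions for illustration, not the algorithm specified by the application:

```python
import numpy as np

def deformation_for_frame(pair_transforms, m, n):
    """Sketch of "n pair transforms -> deformation for frame m" using pure
    translations as the position transformation information. The camera
    trajectory is the cumulative sum of per-pair motions; the deformation
    moves frame m toward the mean (smoothed) trajectory of the n-pair window.
    The windowed-mean smoother is an assumed stand-in for illustration.
    """
    motions = np.asarray(pair_transforms[:n], dtype=float)  # (n, 2): dx, dy per pair
    trajectory = np.cumsum(motions, axis=0)                 # raw camera path
    smoothed = trajectory.mean(axis=0)                      # smoothed path point
    return smoothed - trajectory[m - 1]                     # correction for frame m

# Jittery horizontal pan: roughly +1 px per frame plus alternating shake.
pair_motions = [(1.5, 0.0), (0.5, 0.0), (1.5, 0.0), (0.5, 0.0)]
corr = deformation_for_frame(pair_motions, m=2, n=4)
print(float(corr[0]), float(corr[1]))  # → 0.75 0.0
```

A production stabilizer would use full transformation matrices (see claim 8, where the deformation information is a deformation matrix) and a more careful trajectory smoother, but the buffering-and-correction structure is the same.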

Drawings

FIG. 1 is a flow chart of one embodiment of a method for removing video jitter as provided herein;

FIG. 2 is a schematic diagram of feature points involved in the method for removing video jitter provided by the present application;

FIG. 3 is a schematic diagram of a partition transformation matrix involved in the method for removing video jitter provided by the present application;

FIG. 4 is a schematic diagram of the mapping relationship between each image and the corresponding partition transformation matrix involved in the method for removing video jitter according to the present application;

FIG. 5 is a schematic diagram of various matrices applied in acquiring a deformation matrix involved in the method for removing video jitter provided by the present application;

FIG. 6 is a schematic diagram of an image warping process involved in the method for removing video jitter according to the present application;

FIG. 7 is a schematic diagram of an embodiment of an apparatus for removing video jitter provided in the present application.

Detailed Description

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.

The present application provides a method for removing video jitter, and FIG. 1 is a flowchart of an embodiment of this method. The technical solution is described below with reference to the flowchart shown in FIG. 1.

A video of any substantial duration is formed from many image frames that change rapidly and continuously. When a video is shot, it may jitter because relative motion between the video capture device and the scene can cause large displacements between the rapidly changing captured images. The present application aims to remove such video jitter in real time.

The method for removing video jitter shown in FIG. 1 includes:

Step S101: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair is formed by two corresponding feature points on the front and rear images in each pair of images.

In step S101, the position information of the feature point pairs in each pair of original images before compression is determined from the position information of the feature point pairs in each pair of compressed images. Therefore, step S101 may be preceded by step S100: acquiring the position information of the feature point pairs in each pair of compressed images.

Step S100 may specifically include the following steps:

Step S100-1: storing the original images into a first queue.

When a video shooting device shoots multiple frames of images over a period of time, the frames are arranged into a first queue in order; every two adjacent frames form a pair of images, where the earlier frame is the previous image and the later frame is the subsequent image. The queue may be embodied as an image buffer, that is, a memory in a computer system dedicated to storing images being synthesized or displayed; FIG. 4 shows a schematic diagram of the image buffer.
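The first queue and the pairing of adjacent frames can be sketched as a bounded FIFO. This is an illustrative model only; the class name and the capacity value are assumptions, not from the application:

```python
from collections import deque

class FrameQueue:
    """Bounded FIFO holding the original frames (the "first queue").

    `maxlen` plays the role of the "first number": once the queue is full,
    the frame at the head is dropped before a new frame is stored, which
    mirrors the take-out-then-store behavior described in claim 4.
    """
    def __init__(self, maxlen):
        self.frames = deque(maxlen=maxlen)

    def push(self, frame):
        self.frames.append(frame)  # deque drops the head automatically when full

    def adjacent_pairs(self):
        # Every two adjacent frames form one (previous, subsequent) image pair.
        items = list(self.frames)
        return list(zip(items, items[1:]))

q = FrameQueue(maxlen=3)
for frame_id in ["f0", "f1", "f2", "f3"]:
    q.push(frame_id)

# After pushing 4 frames into a queue of capacity 3, f0 has been dropped.
print(q.adjacent_pairs())  # → [('f1', 'f2'), ('f2', 'f3')]
```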

And S100-2, compressing each pair of original images by a first multiple.

To rapidly remove jitter from the frames of a video, each original image can first be compressed by a first multiple, where the multiple may be a preset value. The compressed image is smaller than the image before compression by the first multiple, so the electronic device can process it faster; each time a new image is acquired and compressed, subsequent steps such as determining the feature points on the new image and the position information of each feature point can therefore be performed quickly. Referring to fig. 2, the two images on the right are the compressed previous frame and the compressed current frame; their width and height are smaller than those of the uncompressed previous frame and current frame on the left by the first multiple.
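The compression step can be sketched as below. This is a toy illustration assuming a grayscale image stored as a list of pixel rows; it subsamples rather than properly resampling, which a real pipeline (e.g. area-interpolation resizing) would do, but it shows the s-fold size reduction:

```python
def downscale(image, s):
    """Shrink a 2-D image (a list of pixel rows) by an integer factor s by
    keeping every s-th pixel. A real pipeline would use proper resampling
    (e.g. area interpolation); subsampling is enough to show the size change."""
    return [row[::s] for row in image[::s]]

original = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 image
small = downscale(original, 2)  # 2x2 image: [[0, 2], [8, 10]]
```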

Step S100-3: determine the feature points on the front and rear images of each pair of compressed images.

Feature points are pixel points on the image that can represent features of the captured scene, such as its outline and shape. Such points generally have relatively distinctive characteristics, for example a distinctive gray value, and can therefore be determined as feature points. For example, if the point P on the compressed current frame in fig. 2 characterizes a feature of the captured scene, P can be regarded as a feature point on the compressed current frame image.
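A deliberately simplified detector in the spirit of the description (flagging pixels with a distinctive gray value) might look like this. Real systems would use a proper detector such as Harris corners or FAST; the function name `detect_features` and the threshold rule are assumptions for illustration:

```python
def detect_features(image, threshold):
    """Toy feature detector: flag pixels whose gray value stands out above a
    threshold, returning their (row, col) positions. A stand-in for real
    detectors such as Harris corners or FAST."""
    points = []
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > threshold:
                points.append((r, c))
    return points

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 8]]
features = detect_features(img, 5)  # [(1, 1), (2, 2)]
```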

Step S100-4: determine two corresponding feature points on the front and rear images of each pair of compressed images as a feature point pair.

Each of the front and rear images has its own series of feature points, and a feature point in the front image may have a corresponding feature point in the rear image. If two corresponding feature points both represent the same point of the scene captured in the images, they constitute a feature point pair. Referring to fig. 2, the feature point P on the compressed current frame and the feature point P on the compressed previous frame both represent the same feature of the captured scene, so these two corresponding feature points constitute a feature point pair.
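One simple way to form such pairs is nearest-neighbour matching on positions, sketched below. Production systems would instead track points with optical flow or match feature descriptors; the greedy scheme and the name `pair_features` are assumptions for illustration only:

```python
def pair_features(prev_pts, curr_pts, max_dist2=25):
    """Greedily pair each feature on the current frame with the nearest
    unused feature on the previous frame within a squared-distance budget."""
    pairs, used = [], set()
    for cp in curr_pts:
        best, best_d2 = None, max_dist2
        for i, pp in enumerate(prev_pts):
            if i in used:
                continue
            d2 = (cp[0] - pp[0]) ** 2 + (cp[1] - pp[1]) ** 2
            if d2 <= best_d2:
                best, best_d2 = i, d2
        if best is not None:
            used.add(best)
            pairs.append((prev_pts[best], cp))
    return pairs

prev_pts = [(10, 10), (40, 40)]
curr_pts = [(12, 11), (41, 39)]
matches = pair_features(prev_pts, curr_pts)
# [((10, 10), (12, 11)), ((40, 40), (41, 39))]
```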

Step S100-5: determine the position information of the feature point pairs in each pair of compressed images.

The position information of a feature point pair refers to the positions of its two feature points in their respective images, and may be the coordinates of each feature point on its image. For example, the position coordinates of the feature point P on the compressed current frame in fig. 2 are (u, v), and the corresponding feature point P on the compressed previous frame likewise has its own coordinates. The position information of the two feature points on their respective images constitutes the position information of one feature point pair in that pair of images. Since a plurality of feature point pairs exist between two adjacent compressed images, the position information of a plurality of feature point pairs in each pair of adjacent images can be acquired.

After step S100 is executed, that is, after the position information of the feature point pairs in each pair of compressed images is acquired, step S101 may be executed: determine the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images.

After the position information of the feature points on each image of each pair of compressed images is obtained, expanding that position information by the first multiple yields the position information of the corresponding feature points on each pair of original images, i.e., the position information of the feature point pairs in each pair of original images. For example, in fig. 2, enlarging the coordinates (u, v) of the feature point P on the compressed current frame by the first multiple s gives the coordinates (su, sv) of P on the current frame before compression. Similarly, expanding the coordinates of the feature point P on the compressed previous frame by the first multiple gives the coordinates of P on the previous frame before compression. Just as the two corresponding feature points P on the compressed current frame and compressed previous frame form a feature point pair in the compressed images, the two corresponding feature points P on the uncompressed current frame and previous frame form a feature point pair in the original images.
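The coordinate mapping back to the original image is a plain scale-up by the first multiple, as in this one-line sketch (the function name `to_original_coords` is hypothetical):

```python
def to_original_coords(point, s):
    """Map a feature position found on an s-fold compressed image back to the
    original image: (u, v) after compression corresponds to (s*u, s*v) before."""
    u, v = point
    return (s * u, s * v)

orig_pt = to_original_coords((30, 20), 4)  # (120, 80) on the uncompressed frame
```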

Step S102: determine the position transformation information of the next image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images.

In this application, step S102 may be implemented by dividing each image in a pair of original images into a plurality of partitions; after the position transformation information from each partition of the current frame to the corresponding partition of the previous frame is determined, the transformation information of all partitions is combined to obtain the position transformation information from the current frame image to the previous frame image in each pair.

Specifically, step S102 may include the steps of:

Step S102-1: partition each of the front and rear images in each pair of original images. As in the example of fig. 3, the current frame image and the previous frame image are each divided into six partitions; four feature points C_0, C_1, C_2 and C_3 are illustrated in the upper-left partition of the current frame image, and the four corresponding feature points P_0, P_1, P_2 and P_3 are illustrated in the previous frame image.

Step S102-2: determine the position transformation information from each partition of the next image to the corresponding partition of the previous image in each pair of original images according to the position information of the feature point pairs in the corresponding partitions.

the position information of feature points on a subsequent image is different from the position information of corresponding feature points on a previous image due to relative movement of the previous and subsequent images, and the difference between the position information of feature points on the subsequent image and the position information of corresponding feature points on the previous image is the position transformation information from the feature points on the subsequent image to the corresponding feature points on the previous image, i.e., the position transformation information from the feature points on the subsequent image to the corresponding feature points on the previous image is the position transformation information from the feature points on the subsequent image to the corresponding feature points on the previous image, i.e., the position transformation information from the feature points on the previous image to the corresponding partitions on the previous image is the position transformation information from the feature points on the previous image to the corresponding partitions on the previous image, i.e., the position transformation information from the feature points on the previous image to the corresponding partitions on the previous image to the previous image, i.e., the position transformation information from the corresponding partitions on the previous image to the partitions on the previous image, i.e., the previous image has 4 feature points P 0, P 1, P4629, P7, P4684, P 3, which correspond to the corresponding partitions on the current frame, and the corresponding partitions of the position transformation information on the previous image, i.e., the corresponding partitions on the previous frame, or corresponding partitions of the corresponding partitions on the previous frame, i.e., the corresponding partitions of the image, i.e., the corresponding partitions of the previous frame, the corresponding partitions of the image, the 
corresponding partitions on the position transformation information on the previous frame, i.e., the corresponding partitions of the previous frame, the corresponding partitions of the corresponding partitions on the current frame, the corresponding partitions of the position transformation information on the corresponding partitions of the position transformation information on the previous frame, or corresponding partitions of the previous frame, i.e., the corresponding partitions of the previous frame, the corresponding partitions of the image, the previous frame, the corresponding partitions of the image, the corresponding partitions of the image, the position transformation information of the previous frame, the position transformation information of the previous frame, the corresponding partitions of the current frame, the previous frame, the corresponding partitions of the previous frame, the position transformation information of the previous frame, the previous.
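A per-partition transform estimate in the spirit of the above can be sketched as follows. This uses a translation-only model as a deliberate simplification; the scheme described would typically fit a full perspective (homography) or affine transform from the four pairs, and the name `partition_transform` is an assumption:

```python
def partition_transform(pairs):
    """Estimate one partition's position change as the mean displacement of
    its feature point pairs. Translation-only; a simplification of the
    partition transformation matrix described in the text."""
    n = len(pairs)
    dx = sum(c[0] - p[0] for p, c in pairs) / n
    dy = sum(c[1] - p[1] for p, c in pairs) / n
    return (dx, dy)

# Four (previous-frame point, current-frame point) pairs in one partition:
pairs = [((0, 0), (2, 1)), ((10, 0), (12, 1)),
         ((0, 10), (2, 11)), ((10, 10), (12, 11))]
h_00 = partition_transform(pairs)  # (2.0, 1.0): the partition shifted by (2, 1)
```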

Step S102-3: determine the position transformation information of the next image relative to the previous image in each pair of original images according to the position transformation information from each partition of the next image to the corresponding partition of the previous image.

Based on the position transformation information H_00, H_01, H_02, H_10, H_11 and H_12 obtained in step S102-2 from each partition of the current frame image to the corresponding partition of the previous frame image, the transformation information of all partitions is combined to represent the position transformation information from the current frame image to the previous frame image; the partition transformation matrix from the current frame to the previous frame illustrated in fig. 3 is this position transformation information.

Step S102-4: store the position transformation information of the next image relative to the previous image in each pair of original images in a second queue.

After the position transformation information from the current frame image to the previous frame image is obtained in step S102-3, the position transformation information between each pair of images may be stored in a queue, which may be named the second queue. The queue may specifically be stored in a partitioned transform matrix buffer, i.e., a memory in a computer system dedicated to storing transform matrices; fig. 4 illustrates a schematic diagram of the partitioned transform matrix buffer.

Step S103: acquire the deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the next image relative to the previous image in the n pairs of original images, where n and m are positive integers and m is not greater than n.

Next, how step S103 may be implemented is illustrated, taking m = 1 as an example, that is, how to obtain the deformation information corresponding to the previous image in the 1st pair of original images. To obtain this deformation information, the position information stored in the original path buffer, the optimized path register and the optimized path buffer of the deformation matrix iterative optimizer is used; the function of each buffer in this step is described below.

Referring to fig. 5, the partitioned transform matrix buffer of fig. 5 stores the position transformation information of each next image relative to its previous image. The buffer can store the position transformation information between a certain number of image pairs, arranged in the order of generation, with the most recently generated transformation information placed at the tail of the buffer. The buffer illustrated in fig. 5 can store the position transformation information between n pairs of images, that is, n pieces of position transformation information or n position transformation matrices. The rightmost group of partition transformation matrices in fig. 5 represents the position transformation matrix between the 1st and 2nd images first collected by the image collector, and so on; the leftmost group represents the position transformation matrix between the last image and the image before it.

The partitioned transform matrix buffer shown in fig. 5 has a fixed length, that is, it can store at most n pieces of position transformation information. Likewise, the image buffer in fig. 4 has a fixed length equal to that of the partitioned transform matrix buffer, that is, it can store at most n images. When the partitioned transform matrix buffer is full of n position transformation matrices and the image buffer is full of n images, the following step is triggered: acquire the deformation information corresponding to the previous image in the first pair of original images. For example, the first queue in the image buffer illustrated in fig. 4 may store n images; the 1st and 2nd acquired images form the 1st pair, with sequence numbers n-1 and n-2 in the image buffer. Obtaining the deformation information corresponding to the previous image in the first pair of original images thus means obtaining the deformation information corresponding to the frame with sequence number n-1 in the image buffer. After this step, the following may further be performed: before storing a new image in the first queue, take out the image at the head of the first queue; and before storing new position transformation information in the second queue, take out the position transformation information at the head of the second queue. Removing the head-of-queue image from the image buffer and the head-of-queue transformation information from the partitioned transform matrix buffer frees positions for a new image and new position transformation information.
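The two fixed-length buffers and the fullness trigger can be sketched as below. The function name `push` and the use of `deque(maxlen=n)` (which drops the head automatically, matching the take-out-before-store behaviour above) are illustrative assumptions:

```python
from collections import deque

n = 3
image_queue = deque(maxlen=n)      # first queue: original frames
transform_queue = deque(maxlen=n)  # second queue: per-pair transforms

def push(frame, transform):
    """Store a new frame (and, from the second frame on, a pair transform);
    report whether both fixed-length buffers are full, which is the trigger
    for de-jittering the head-of-queue frame."""
    image_queue.append(frame)
    if transform is not None:
        transform_queue.append(transform)
    return len(image_queue) == n and len(transform_queue) == n

arrivals = [("f0", None), ("f1", "H01"), ("f2", "H12"), ("f3", "H23")]
ready = [push(f, t) for f, t in arrivals]  # [False, False, False, True]
```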

In fig. 5, H_(n-1,0) represents the 1st partition's position transformation information within the head-of-queue entry of the second queue, H_(n-1,1) represents the 2nd partition's, and so on up to H_(n-1,5) for the 6th partition's. Similarly, H_(0,0) represents the 1st partition's position transformation information within the tail-of-queue entry of the second queue, H_(0,1) the 2nd partition's, and so on up to H_(0,5) for the 6th partition's.

In fig. 5, the original path buffer stores, for each partition, the running product of that partition's position transformation information across the entries of the second queue, that is, C_(i,j) = H_(0,j) × H_(1,j) × … × H_(i-1,j) × H_(i,j), where C_(i,j) represents the product of the j-th partition's position transformation information in the entries with sequence numbers 0 through i of the second queue.
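The running product C_(i,j) for one partition can be computed as below. A minimal sketch assuming 3×3 transformation matrices as nested lists; the names `matmul3` and `original_path` are illustrative:

```python
def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def original_path(partition_transforms):
    """Running product C_(i,j) = H_(0,j) x H_(1,j) x ... x H_(i,j) of one
    partition's transforms in queue order: the original (unsmoothed) path."""
    path, acc = [], [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for h in partition_transforms:
        acc = matmul3(acc, h)
        path.append(acc)
    return path

# Two pure-translation matrices for one partition: shift (1, 0), then (2, 0).
h0 = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
h1 = [[1, 0, 2], [0, 1, 0], [0, 0, 1]]
c = original_path([h0, h1])  # cumulative shift after both: (3, 0)
```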

In fig. 5, the optimized path register stores a weighted average Q_(i,j). The weighted average Q_(i,j) is obtained by averaging, with weights, the path information of the partitions adjacent to the j-th partition on the image with sequence number i in the image queue, the path information of the j-th partition on the images adjacent to the image with sequence number i, and C_(i,j) in the original path buffer. Each time a weighted average Q_(i,j) is obtained, it is temporarily stored in the optimized path register and then written into the optimized path buffer, where it is denoted P_(i,j). In particular, when i = n-1, P_(n-1,j) is obtained by the weighted average of the path information of the partitions adjacent to the j-th partition on the head-of-queue image, the path information of the j-th partition on the image adjacent to the head-of-queue image, and C_(n-1,j) in the original path buffer.
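The smoothing at the heart of the optimized path can be illustrated with a single 1-D pass over scalar path values. This is a simplification: the optimizer described above also averages over spatially adjacent partitions and iterates, and the weights here are arbitrary assumptions:

```python
def smooth_path(original, weights=(0.25, 0.5, 0.25)):
    """Simplified optimized-path step: replace each path value with a
    weighted average of its temporal neighbours and itself."""
    wl, wc, wr = weights
    smoothed = []
    for i, c in enumerate(original):
        left = original[i - 1] if i > 0 else c
        right = original[i + 1] if i + 1 < len(original) else c
        smoothed.append(wl * left + wc * c + wr * right)
    return smoothed

jittery = [0.0, 4.0, 0.0, 4.0]
q = smooth_path(jittery)  # e.g. q[1] = 0.25*0 + 0.5*4 + 0.25*0 = 2.0
```

The jittery alternation is pulled toward a flat path, which is the intent of the weighted averaging: the smoothed path becomes the target the head-of-queue frame is deformed toward.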

For example, B_0 represents the deformation information corresponding to the 1st partition of the head-of-queue image, and similarly B_1 represents the deformation information corresponding to its 2nd partition; if the head-of-queue image is divided into 6 partitions, B_5 represents the deformation information corresponding to its 6th partition. B_0, B_1, B_2, B_3, B_4 and B_5 are combined to form the deformation information corresponding to the head-of-queue image in the image buffer, as shown in fig. 6 as the deformation information obtained by the deformation matrix iterative optimizer.

After the deformation information of the previous image in the first pair of images is acquired through step S103, the previous image may be deformed using that information; see step S104.

Step S104: according to the deformation information corresponding to the previous image in the mth pair of original images, deform the previous image in the mth pair of original images to obtain the previous image in the mth pair of original images after jitter removal.

Continuing with the previous image in the first pair of images as an example: after the deformation information corresponding to that image is obtained as illustrated in step S103, and when the deformation information is represented by a deformation matrix, the previous image is deformed partition by partition according to the deformation matrix, that is, its position information is adjusted using the deformation information obtained in step S103. For example, in fig. 6, the deformation matrix of the 3rd partition of the head-of-queue image contains position information for the feature point P that differs from the position of P in the 3rd partition of the head-of-queue image itself. To eliminate this positional difference, the point P on the head-of-queue image is adjusted to coincide with the position of P in the 3rd partition's deformation information. Similarly, the feature points of the other partitions of the head-of-queue image are adjusted to the positions of the corresponding feature points in the deformation information, giving the adjusted image shown in fig. 6. After the position information of the head-of-queue image is adjusted, the portions of the image outside the region covered by the deformation information are cut off, achieving the effect of removing the positional difference.
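The deform-then-crop step can be illustrated with a toy horizontal shift followed by a border crop. A real implementation would apply the per-partition perspective warp encoded by the deformation matrix; the names `translate_image` and `crop` and the translation-only warp are illustrative assumptions:

```python
def translate_image(image, dx, fill=0):
    """Warp a 2-D image (list of rows) by a horizontal shift dx. A stand-in
    for the per-partition perspective warp a deformation matrix would apply."""
    w = len(image[0])
    out = []
    for row in image:
        if dx >= 0:
            out.append([fill] * dx + row[:w - dx])
        else:
            out.append(row[-dx:] + [fill] * (-dx))
    return out

def crop(image, margin):
    """Cut off the border left undefined after warping, as described above."""
    return [row[margin:len(row) - margin]
            for row in image[margin:len(image) - margin]]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
stabilized = crop(translate_image(frame, 1), 1)  # [[5, 6], [9, 10]]
```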

According to the above technical solution for removing video jitter, the position information of the feature point pairs in each pair of original images is first determined from the position information of the feature point pairs in each pair of compressed images; because the compressed images are smaller, the electronic device can process them faster, so the position information of the feature point pairs on each acquired image can be obtained in real time. After this position information is acquired in real time, the position transformation information from the next image to the previous image in each pair of original images is correspondingly determined in real time. Once the position transformation information for the n pairs of original images is available, the deformation information corresponding to the previous image in the first pair of original images is acquired, and that image is deformed accordingly to obtain a jitter-free image. By analogy, the subsequent images are deformed and de-jittered in turn, achieving real-time jitter removal. Moreover, the solution removes jitter in real time without depending on auxiliary equipment, which is highly convenient; it thus solves the technical problem that the prior art either cannot remove jitter in real time or requires an external device such as a gyroscope to do so.

The present application further provides an apparatus for removing video jitter; fig. 7 is a schematic structural diagram of an embodiment of the apparatus. The apparatus embodiment corresponds to the method embodiment shown in fig. 1, so its description is relatively brief; for the relevant details, refer to the corresponding description of the method embodiment provided above. The apparatus can be applied to various electronic devices. The apparatus embodiments described below are merely illustrative.

Fig. 7 shows an apparatus for removing video judder, including: a position information obtaining first unit 701, configured to determine position information of a pair of feature points in each pair of original images according to position information of the pair of feature points in each pair of compressed images, where a pair of feature points is formed by two corresponding feature points on two front and rear images in each pair of images, and the original image is an image before compression; a position transformation information obtaining unit 702, configured to determine, according to position information of a feature point pair in each pair of original images, position transformation information of a subsequent image relative to a previous image in each pair of original images; a deformation information obtaining unit 703, configured to obtain, according to position transformation information of a subsequent image relative to a previous image in n pairs of original images, deformation information corresponding to a previous image in an mth pair of original images, where n and m are positive integers, and m is not greater than n; a deformation processing unit 704, configured to deform the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images, and obtain the previous image in the mth pair of original images from which the dither is removed.

Optionally, the apparatus further comprises: an image storage unit, configured to store the original images in the first queue; and a position transformation information storage unit, configured to store the position transformation information of the next image relative to the previous image in each pair of original images in the second queue.

Optionally, the apparatus further comprises: the compression unit is used for compressing each pair of original images by a first multiple; a feature point determining unit for determining a feature point on each image in each pair of compressed images; a feature point pair determining unit, configured to determine two feature points corresponding to two front and rear images in each pair of compressed images as a feature point pair; and the position information acquisition second unit is used for determining the position information of the feature point pairs in each pair of compressed images.

Optionally, the position transformation information obtaining unit 702 includes: an image partition subunit, configured to partition each of the front and rear images in each pair of original images; a position transformation information acquisition first subunit, configured to acquire the position transformation information from each partition of the next image to the corresponding partition of the previous image in each pair of original images; and a position transformation information acquisition second subunit, configured to determine the position transformation information from the next image to the previous image in each pair of original images according to the position transformation information from each partition of the next image to the corresponding partition of the previous image.

Optionally, the deformation processing unit 704 includes: the deformation subunit is used for carrying out partition deformation on the previous image in the mth pair of original images according to the deformation matrix corresponding to the previous image in the mth pair of original images; and the clipping subunit is used for clipping the edge of the previous image in the mth pair of original images after the deformation.

the present application further provides an embodiment of an electronic device for removing video jitter, where the electronic device in this embodiment includes: a processor; a memory for storing a program for removing video judder, which program when read and executed by the processor performs the following operations: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on a front image and a rear image in each pair of images, and the original images are images before compression; determining position transformation information of a subsequent image relative to a previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; according to the position transformation information of the latter image relative to the former image in the n pairs of original images, obtaining the corresponding deformation information of the former image in the mth pair of original images, wherein n and m are positive integers, and m is not more than n; and deforming the previous image in the mth pair of original images according to the deformation information corresponding to the previous image in the mth pair of original images to obtain the previous image in the mth pair of original images after the dithering is removed. The related technical features can refer to the method embodiment, and are not described herein again.

The present application further provides a computer-readable storage medium, which is substantially similar to the method embodiment and therefore is described more simply, and the relevant portions please refer to the corresponding descriptions of the method embodiments provided above. The computer-readable storage medium embodiments described below are merely illustrative.

The present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: determine the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, wherein one feature point pair consists of two corresponding feature points on the front and rear images of each pair, and the original images are the images before compression; determine the position transformation information of the next image relative to the previous image in each pair of original images according to the position information of the feature point pairs in each pair of original images; according to the position transformation information of the next image relative to the previous image in the n pairs of original images, obtain the deformation information corresponding to the previous image in the mth pair of original images, wherein n and m are positive integers and m is not greater than n; and deform the previous image in the mth pair of original images according to the deformation information corresponding to it, obtaining the previous image in the mth pair of original images after jitter removal. For the related technical features, refer to the method embodiment; they are not described here again.

Although the present application has been described with reference to preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of the present application should be determined by the appended claims.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
