Method and device for adjusting homography matrix parameters

Document No.: 192489    Published: 2021-11-02

Reading note: This technique, "Method and device for adjusting homography matrix parameters", was designed and created by 陈腾, 隋伟, 谢佳锋, 张骞, 黄畅 on 2021-07-28. Abstract: The embodiments of the disclosure disclose a method and a device for adjusting homography matrix parameters, wherein the method comprises: acquiring homography matrix parameters of reference planes of a reference image and a target image; generating a reconstructed image of the reference image based on the homography matrix parameters and the reference image; determining a reference plane mask for the reconstructed image based on the homography matrix parameters and a reference plane mask for the reference image; determining an image error between the reconstructed image and the target image on the reference plane based on the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image; and adjusting the homography matrix parameters based on the image error. The disclosed embodiments can reduce the influence of imaging noise on the plane parallax method, and are therefore suitable for three-dimensional scene reconstruction via the plane parallax method in the field of automatic driving.

1. A method of adjusting homography matrix parameters, comprising:

acquiring homography matrix parameters of reference planes of a reference image and a target image;

generating a reconstructed image of the reference image based on the homography matrix parameters and the reference image;

determining a reference plane mask for the reconstructed image based on the homography matrix parameters and a reference plane mask for the reference image;

determining an image error between the reconstructed image and the target image on a reference plane based on the reconstructed image, a reference plane mask of the reconstructed image, the target image, and a reference plane mask of the target image;

based on the image error, adjusting the homography matrix parameters.

2. The method of adjusting homography matrix parameters of claim 1, wherein the determining an image error between the reconstructed image and the target image on a reference plane based on the reconstructed image, a reference plane mask of the reconstructed image, the target image, and a reference plane mask of the target image comprises:

obtaining an IOU loss error based on an intersection-over-union (IOU) loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image;

obtaining a photometric loss error based on a photometric loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image;

obtaining an edge loss error based on an edge loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image;

obtaining the image error based on the IOU loss error, the photometric loss error, and the edge loss error.

3. The method for adjusting homography matrix parameters of claim 1, wherein the obtaining homography matrix parameters of a reference plane of a reference image and a target image comprises:

performing feature extraction on the reference image to obtain a first image feature;

performing feature extraction on the target image to obtain a second image feature;

removing, from the first image feature and the second image feature, features corresponding to points outside the reference plane, to obtain a first interest feature and a second interest feature;

obtaining a homography matrix based on the first interest feature and the second interest feature;

determining camera rotation information, camera translation information, road surface normal information and camera height based on the homography matrix;

the homography matrix parameters comprise camera rotation information, camera translation information, road surface normal information and camera height.

4. The method of adjusting homography matrix parameters of claim 3, wherein the removing, from the first image feature and the second image feature, features corresponding to points outside the reference plane, to obtain a first interest feature and a second interest feature comprises:

based on the first image characteristic and the second image characteristic, obtaining a matching point pair of the reference image and the target image by adopting a characteristic matching method;

determining, using a semantic recognition model, reference plane features in the reference image and reference plane features in the target image based on the first image features and the second image features;

determining the first interest feature and the second interest feature based on the matching point pairs and reference plane features in the reference image and reference plane features in the target image.

5. The method for adjusting homography matrix parameters of claim 1, wherein the obtaining homography matrix parameters of a reference plane of a reference image and a target image comprises:

acquiring camera rotation information and camera translation information;

acquiring first radar point cloud data corresponding to the reference image, and acquiring second radar point cloud data corresponding to the target image;

obtaining a third interest feature based on the first radar point cloud data and the reference image;

obtaining a fourth interest feature based on the second radar point cloud data and the target image;

determining road surface normal information and camera height based on the third interest feature and the fourth interest feature;

wherein the homography matrix parameters include the camera rotation information, the camera translation information, the road surface normal information, and the camera height.

6. The method of adjusting homography matrix parameters of claim 5, wherein said deriving a third feature of interest based on the first radar point cloud data and the reference image comprises:

removing the pixel points outside the reference plane from the reference image and mapping the remaining pixel points to a radar coordinate system to obtain a first set of radar coordinate points;

determining a set of radar coordinate points of the reference plane of the reference image by performing random sample consensus (RANSAC) processing on the first set of radar coordinate points;

obtaining a set of camera coordinate points corresponding to the set of radar coordinate points of the reference plane of the reference image;

obtaining the third interest feature based on the set of camera coordinate points.

7. The method for adjusting homography matrix parameters of claim 1, wherein the reference image and the target image are both captured by a camera during vehicle driving, and the reference image and the target image are separated by N frames, where N is a natural number greater than or equal to 1.

8. An apparatus for adjusting homography matrix parameters, comprising:

the parameter acquisition module is used for acquiring homography matrix parameters of reference planes of the reference image and the target image;

an image reconstruction module for generating a reconstructed image of the reference image based on the homography matrix parameters and the reference image;

a mask processing module for determining a reference plane mask of the reconstructed image based on the homography matrix parameters and the reference plane mask of the reference image;

an error determination module to determine an image error between the reconstructed image and the target image on a reference plane based on the reconstructed image, a reference plane mask of the reconstructed image, the target image, and a reference plane mask of the target image;

and the parameter adjusting module is used for adjusting the homography matrix parameters based on the image errors.

9. A computer-readable storage medium, in which a computer program is stored, the computer program being adapted to perform the method of adjusting homography matrix parameters of any of the preceding claims 1-7.

10. An electronic device, the electronic device comprising:

a processor;

a memory for storing the processor-executable instructions;

the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for adjusting the homography matrix parameters of any of the preceding claims 1-7.

Technical Field

The present disclosure relates to the field of computer vision technology and the field of vehicle technology, and in particular, to a method and apparatus for adjusting homography matrix parameters.

Background

The plane parallax method was originally used for planar motion modeling. Its core idea is to find a suitable reference plane and warp the two images using that plane, so that points on the reference plane are aligned in the two warped images, while the motion of the remaining, unaligned image content is closely related to the three-dimensional structure of the scene.

The plane parallax method can eliminate the influence of camera rotation, but it is easily affected by imaging noise and is therefore difficult to adopt widely. How to reduce the influence of imaging noise on the plane parallax method is a problem that urgently needs to be solved.

Disclosure of Invention

The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides a method and a device for adjusting homography matrix parameters.

According to a first aspect of the embodiments of the present disclosure, there is provided a method for adjusting homography matrix parameters, including:

acquiring homography matrix parameters of reference planes of a reference image and a target image;

generating a reconstructed image of the reference image based on the homography matrix parameters and the reference image;

determining a reference plane mask for the reconstructed image based on the homography matrix parameters and a reference plane mask for the reference image;

determining an image error between the reconstructed image and the target image on a reference plane based on the reconstructed image, a reference plane mask of the reconstructed image, the target image, and a reference plane mask of the target image;

based on the image error, adjusting the homography matrix parameters.

According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for adjusting homography matrix parameters, including:

the parameter acquisition module is used for acquiring homography matrix parameters of reference planes of the reference image and the target image;

an image reconstruction module for generating a reconstructed image of the reference image based on the homography matrix parameters and the reference image;

a mask processing module for determining a reference plane mask of the reconstructed image based on the homography matrix parameters and the reference plane mask of the reference image;

an error determination module to determine an image error between the reconstructed image and the target image based on the reconstructed image, a reference plane mask of the reconstructed image, the target image, and a reference plane mask of the target image;

and the parameter adjusting module is used for adjusting the homography matrix parameters based on the image errors.

According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the method for adjusting homography matrix parameters of the first aspect.

According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instruction from the memory and execute the instruction to implement the method for adjusting the homography matrix parameters according to the first aspect.

Based on the method and the device for adjusting the homography matrix parameters provided by the above embodiments of the present disclosure, a reference image and a target image, captured by a camera N frames apart and both containing a reference plane (e.g., a road surface), are acquired. Image mapping and mask mapping are performed on the reference image based on the homography matrix parameters to obtain a reconstructed image and a reference plane mask of the reconstructed image. An image error between the reconstructed image and the target image is then determined based on the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image, and the homography matrix parameters are adjusted according to the image error until the error meets a preset requirement. The method and the device can reduce the influence of imaging noise on the plane parallax method, and are therefore suitable for three-dimensional scene reconstruction via the plane parallax method in the field of automatic driving.

The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.

Drawings

The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.

Fig. 1 is a schematic flowchart of a method for adjusting homography matrix parameters according to an embodiment of the disclosure.

Fig. 2 is a reference image in one example of the present disclosure.

Fig. 3 is a target image in one example of the present disclosure.

Fig. 4 is a schematic view of the road surface mask of fig. 2.

Fig. 5 is a schematic view of the road surface mask of fig. 3.

Fig. 6 is a block diagram of an apparatus for adjusting homography matrix parameters according to an embodiment of the present disclosure.

Fig. 7 is a block diagram illustrating the structure of an error determination module in an example disclosed herein.

Fig. 8 is a block diagram illustrating the structure of an error determination module in one example of the disclosure.

Fig. 9 is a block diagram illustrating the structure of an error determination module in another example.

Fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.

Detailed Description

Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.

It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.

It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor is the necessary logical order between them.

It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.

It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.

In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.

It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.

Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.

Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.

It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.

The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems.

Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.

Summary of the application

In implementing the present disclosure, the inventors found that homography estimation from images suffers from at least the following problems:

Feature-based approaches first perform keypoint detection and matching, and then find the best homography with an estimator. However, when feature points are insufficient the accuracy is low, and such approaches are difficult to apply to texture-less regions.

Direct methods find the best homography by minimizing the alignment error between the two input images, but they perform poorly when the motion between the two images is too large.

Deep learning methods take a pair of images as input and generate a corner displacement vector to estimate the homography. However, the computational cost is large, and supervised schemes depend on ground-truth values that are difficult to acquire for real scenes.

Exemplary method of adjusting homography matrix parameters

Fig. 1 is a schematic flowchart of a method for adjusting homography matrix parameters according to an embodiment of the disclosure. As shown in fig. 1, a method for adjusting homography matrix parameters provided in an embodiment of the present disclosure includes:

s1: and acquiring homography matrix parameters of reference planes of the reference image and the target image. The reference image and the target image have corresponding reference planes, for example, in an automatic driving scene, the reference image and the target image are both obtained by shooting through a camera on a vehicle during the driving of the vehicle, the reference image and the target image are separated by N frames, and N is a natural number greater than or equal to 1. In the present embodiment, the reference image and the target image each include a road surface, and therefore the road surface is used as a reference plane for the reference image and the target image. Preferably, N is a natural number between 1 and 10.

The following embodiments of the present disclosure are described in terms of an automatic driving scenario, but those skilled in the art will understand that the technical solution of the present disclosure can be implemented in any suitable scenario beyond automatic driving. For example, a person walking along the seashore may shoot a video with a mobile phone and select two frames of the video as the reference image and the target image respectively, where the reference plane may be a beach near the sea, or a combination of a beach and trees.

S2: and generating a reconstructed image of the reference image based on the homography matrix parameters and the reference image.

Fig. 2 is a reference image in one example of the present disclosure, and fig. 3 is a target image in one example of the present disclosure. As shown in fig. 2 and 3, in this example the reference planes of the reference image and the target image are road surfaces, and the road-surface pixels of the two images are not aligned. A homography matrix is therefore generated based on the homography matrix parameters, and the reference image is inversely warped based on the homography matrix to generate the reconstructed image of the reference image.
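As an illustrative sketch of this step (not the disclosure's actual implementation), the plane-induced homography can be assembled from the parameters and used to inversely warp the reference image; the function names and the nearest-neighbour sampling are assumptions for illustration:

```python
import numpy as np

def planar_homography(K, R, t, n, d):
    """Homography induced by a plane with normal n at distance d:
    H = K (R - t n^T / d) K^{-1}, normalized so that H[2, 2] == 1."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

def warp_image(img, H):
    """Inverse-warp img with homography H using nearest-neighbour sampling."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    tgt = np.stack([xs.ravel().astype(float), ys.ravel().astype(float), np.ones(h * w)])
    src = np.linalg.inv(H) @ tgt          # map target pixels back into the source image
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    out = np.zeros_like(img)
    ok = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

In practice a library routine (e.g., OpenCV's `warpPerspective`) would replace the hand-rolled sampler here; the sketch only shows how the homography parameters enter the warp.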

S3: a reference plane mask of the reconstructed image is determined based on the homography matrix parameters and the reference plane mask of the reference image.

Fig. 4 is a schematic view of the road surface mask of fig. 2. As shown in fig. 4, the road surface image in the reference image may be obtained based on the road surface mask of the reference image, and the road surface mask of the reconstructed image can be determined by inversely mapping the road surface mask of the reference image based on the homography matrix parameters.

S4: an image error between the reconstructed image and the target image is determined based on the reconstructed image, the reference plane mask of the reconstructed image, and the reference plane mask of the target image and the target image.

Fig. 5 is a schematic view of the road surface mask of fig. 3. As shown in fig. 2 to 5, the road surface image in the reconstructed image may be obtained based on the reconstructed image and the reference plane mask of the reconstructed image, and the road surface image in the target image may be obtained based on the target image and the reference plane mask of the target image. The image error is the error between the road surface image in the reconstructed image and the road surface image in the target image, which can be obtained by pixel-wise comparison.
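The pixel comparison described above can be sketched as a mean absolute difference taken over the overlap of the two reference plane masks; the function name and the choice of absolute difference are illustrative assumptions, not the disclosure's specific error definition:

```python
import numpy as np

def masked_photometric_error(recon, recon_mask, target, target_mask):
    """Mean absolute intensity difference over the overlap of the two
    reference-plane masks; pixels outside either mask are ignored."""
    overlap = recon_mask & target_mask
    if not overlap.any():
        return 0.0
    return float(np.abs(recon[overlap] - target[overlap]).mean())
```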

S5: based on the image error, the homography matrix parameters are adjusted.

Specifically, based on the image error, the homography matrix parameters are adjusted in a back propagation manner, that is, the camera rotation information, the camera translation information, the ground normal information and the camera height are adjusted.
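A minimal sketch of the adjustment idea, assuming gradient descent on a single parameter (the camera height d) with a finite-difference gradient; the quadratic stand-in error function and all names are hypothetical — a real implementation would backpropagate the masked image error through the warping with respect to all four parameter groups:

```python
import numpy as np

def adjust_height(d_init, image_error_fn, lr=0.1, steps=200, eps=1e-4):
    """Gradient-descent sketch for one homography parameter (camera
    height d); the gradient is approximated by central finite differences."""
    d = d_init
    for _ in range(steps):
        grad = (image_error_fn(d + eps) - image_error_fn(d - eps)) / (2 * eps)
        d -= lr * grad
    return d

# Hypothetical stand-in for the masked image error: minimal at the true height 1.5.
error = lambda d: (d - 1.5) ** 2
```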

In the present embodiment, a reference image and a target image, captured by a camera N frames apart and each including a reference plane (e.g., a road surface), are acquired. Image mapping and mask mapping are performed on the reference image based on the homography matrix parameters to obtain a reconstructed image and a reference plane mask of the reconstructed image. An image error on the reference plane between the reconstructed image and the target image is then determined based on the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image, and the homography matrix parameters are adjusted according to the image error. This reduces the influence of imaging noise on the plane parallax method, making it suitable for three-dimensional scene reconstruction via the plane parallax method in the field of automatic driving.

In one embodiment of the present disclosure, when the radar point cloud data of the reference image and the target image cannot be acquired, step S1 includes:

S1-A-1: and performing feature extraction on the reference image to obtain a first image feature. The first image feature includes feature point information (e.g., pixel point coordinates and pixel values) in a reference plane, and feature point information outside the reference plane.

S1-A-2: and performing feature extraction on the target image to obtain a second image feature. Wherein, when extracting the first image characteristic and the second image characteristic, the same characteristic extraction method is used. Illustratively, Scale-invariant feature transform (SIFT) features are extracted, resulting in a series of corners.

S1-A-3: and removing the corresponding features outside the reference plane in the first image features and the second image features to obtain first interest features and second interest features. In the first image feature, a pre-trained semantic segmentation model may be used to segment a feature corresponding to a pixel point in the reference plane (i.e., a first interest feature) and a feature corresponding to a pixel point outside the reference plane. And acquiring the second interest characteristics in the same way as the first interest characteristics.

S1-A-4: obtaining homographies based on the first interest features and the second interest featuresAnd (4) matrix. Using the formula p2=Hp1A homography matrix H is calculated. Wherein p is1And p2Respectively, a set of corresponding feature point coordinates in the reference image and the target image.

S1-A-5: based on the homography matrix, camera rotation information, camera translation information, road surface normal information, and camera height are determined. The homography matrix parameters comprise camera rotation information, camera translation information, road surface normal information and camera height.

The homography matrix satisfies H = K(R - t·N^T/d)K^(-1), where K represents the camera intrinsic parameters, K^(-1) represents the inverse matrix of K, R represents the camera rotation information, N represents the ground normal information, t represents the camera translation information, and d represents the camera height (i.e., the distance of the camera from the road surface). In this example, H is a 3x3 matrix normalized so that H[3][3] = 1.

An equation system is formed using 4 pairs of feature points, and the homography matrix H is solved; H is then decomposed, using a matrix decomposition method, into camera rotation information R, camera translation information t, ground normal information N, and camera height d, where R is a 3x3 matrix, t and N are 1x3 vectors, and d is a scalar.
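The 4-point solve can be sketched with the standard direct linear transform (DLT); this is a generic illustration of forming and solving the equation system, not the disclosure's specific solver, and it omits the subsequent decomposition into R, t, N, and d:

```python
import numpy as np

def homography_from_points(p1, p2):
    """Direct linear transform: solve p2 ~ H p1 from >= 4 point pairs.
    p1 and p2 are (N, 2) arrays of corresponding pixel coordinates."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        # Each correspondence contributes two linear equations in the 9 entries of H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```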

In this embodiment, when radar point cloud data for the reference image and the target image cannot be acquired, the interest features of the two images on the reference plane can be obtained through image processing, and a homography matrix can be constructed and decomposed based on these interest features. This yields initial, possibly inaccurate, homography matrix parameters, with the advantages of high processing speed and few dependencies. Subsequent steps then adjust the camera rotation information, camera translation information, road surface normal information, and camera height in the homography matrix parameters.

In one embodiment of the present disclosure, step S1-a-3 includes:

S1-A-3-1: and obtaining a matching point pair of the reference image and the target image by adopting a feature matching method based on the first image feature and the second image feature. The method comprises the steps of firstly obtaining feature information of all feature points in a first image feature and a second image feature, matching the feature points based on the feature information of the feature points, obtaining all successfully matched feature points in the feature points based on the feature points of the first image feature and the second image feature, and constructing matched point pairs.

S1-A-3-2: based on the first image features and the second image features, reference plane features in the reference image and reference plane features in the target image are determined using a semantic recognition model. The semantic segmentation model is trained in advance and used for segmenting specified features and other features of the model input image. In this example, a semantic segmentation model is used to segment road surface features in the image and features outside the road surface. Namely, the road surface features in the reference image and the road surface features in the target image are segmented by using a semantic segmentation model.

S1-A-3-3: the first interest feature and the second interest feature are determined based on the matching point pairs and the reference plane feature in the reference image and the reference plane feature in the target image. And screening the features in the matching point pairs based on the road surface features in the reference image and the road surface features in the target image segmented by the semantic segmentation model to obtain a first interest feature and a second interest feature.

In this embodiment, by obtaining the feature point pairs and combining them with the semantic segmentation model, the interest features in the reference image and the target image, i.e., the road surface pixel features of the two images, can be obtained quickly and accurately.

In another embodiment of the present disclosure, when the radar point cloud data of the reference image and the target image can be acquired, step S1 includes:

S1-B-1: camera rotation information and camera translation information are acquired. The camera rotation information and the camera translation information may be obtained by an Inertial Measurement Unit (IMU) and a multi-sensor fusion technique.

S1-B-2: Acquire first radar point cloud data corresponding to the reference image and second radar point cloud data corresponding to the target image. Both images carry an image capture time, so the radar point cloud data corresponding to each capture time can be retrieved and used as the first radar point cloud data and the second radar point cloud data, respectively.
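Pairing a radar sweep to an image by capture time could look like the following sketch (the (timestamp, points) tuple layout is an assumption for illustration):

```python
def nearest_cloud(clouds, t_image):
    """Return the point cloud whose timestamp is closest to the
    image capture time. `clouds` is a list of (timestamp, points)."""
    return min(clouds, key=lambda c: abs(c[0] - t_image))[1]
```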

S1-B-3: Obtain a third interest feature based on the first radar point cloud data and the reference image. The semantic segmentation model can separate the road surface features of the reference image from the features of objects outside the road surface; the off-road object features are obtained from the first radar point cloud data and removed from the reference image, yielding the third interest feature.

S1-B-4: Obtain a fourth interest feature based on the second radar point cloud data and the target image, in the same manner as the third interest feature.

S1-B-5: based on the third interest feature and the fourth interest feature, road surface normal information and a camera height are determined. The third interest feature and the fourth interest feature each include a feature of the road surface normal, so that road surface normal information and camera height can be obtained based on a deviation between the features of the road surface normal of the third interest feature and the road surface normal of the fourth interest feature and the internal and external parameters of the camera.

In this embodiment, the road surface normal information and camera height among the homography matrix parameters can be obtained accurately from the radar point cloud data of the reference image and the target image, so the subsequent steps only need to adjust the less accurate camera rotation information and camera translation information in the homography matrix parameters.
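For intuition, once road points are available in the camera frame, even a minimal 3-point plane fit yields the unit road normal, and the camera height follows as the origin-to-plane distance (a real pipeline would fit all inliers in a least-squares sense; this toy version is an assumption for illustration):

```python
import math

def plane_from_points(p1, p2, p3):
    """Fit a plane through three 3-D road points given in the camera
    frame; return (unit normal, camera height). The camera sits at
    the origin, so its height is the origin-to-plane distance |n.p1|."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    # normal = u x v, then normalised to unit length
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]
    return n, abs(sum(a * b for a, b in zip(n, p1)))
```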

In one embodiment of the present disclosure, step S1-B-3 includes:

S1-B-3-1: Remove the pixel points outside the reference plane from the reference image and map the remaining pixel points into the radar coordinate system to obtain a first radar coordinate point set. The semantic segmentation model distinguishes the road surface pixels of the reference image from the pixels outside the road surface; the road surface pixels can then be mapped into the radar coordinate system using image mapping methods in the related art, yielding the first radar coordinate point set.

S1-B-3-2: Determine the radar coordinate point set of the reference plane of the reference image by applying RANSAC processing to the first radar coordinate point set. An inlier-ratio threshold is set for the RANSAC processing: if the inlier ratio of a RANSAC result is smaller than the threshold, the result is rejected to ensure accuracy; if the inlier ratio is greater than or equal to the threshold, the result is accepted.
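The RANSAC step with an inlier-ratio rejection threshold might be sketched as follows (the distance threshold, iteration count, and ratio default are illustrative assumptions):

```python
import math
import random

def _plane3(p1, p2, p3):
    """Plane (unit normal n, offset d with n.x = d) through 3 points;
    returns None for a degenerate (near-collinear) sample."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    if norm < 1e-12:
        return None
    n = [c / norm for c in n]
    return n, sum(a * b for a, b in zip(n, p1))

def ransac_plane(points, thresh=0.05, min_inlier_ratio=0.6,
                 iters=200, seed=0):
    """Fit the dominant plane; reject the result when the best
    inlier ratio stays below `min_inlier_ratio`."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = _plane3(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(a * b for a, b in zip(n, p)) - d) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if len(best_inliers) / len(points) < min_inlier_ratio:
        return None  # inlier ratio below the threshold: reject
    return best_inliers
```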

S1-B-3-3: Obtain the camera coordinate point set corresponding to the radar coordinate point set of the reference plane of the reference image. The correspondence follows from the known mapping between radar coordinates and camera coordinates.
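The radar-to-camera mapping is a rigid transform with the calibration extrinsics; a minimal sketch follows (the row-major rotation R and translation t are assumed calibration inputs):

```python
def radar_to_camera(points, R, t):
    """Map radar-frame 3-D points into the camera frame:
    x_cam = R @ x_radar + t, with R a 3x3 row-major list and t a
    3-vector from the radar-camera extrinsic calibration."""
    return [[sum(R[i][k] * p[k] for k in range(3)) + t[i]
             for i in range(3)] for p in points]
```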

S1-B-3-4: Obtain the third interest feature, namely the road surface image feature of the reference image, based on the camera coordinate point set.

In this embodiment, the road surface image features of the reference image can be acquired accurately from the radar point cloud data by way of the mapping between radar coordinates and camera coordinates.

In one embodiment of the present disclosure, step S4 includes:

S4-1: Obtain the IOU loss error based on the IOU loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image.

In this embodiment, the Intersection over Union (IOU) loss coefficient is obtained by the following formula:

L_IOU(A, B) = 1 - |A ∩ B| / |A ∪ B|

where L_IOU denotes the IOU loss coefficient, A denotes the road surface pixels of the reference image, and B denotes the road surface pixels of the target image.

IOU loss error = L_IOU(I_t * M_t, I_sw * M_sw)

where I_t denotes the target image, M_t denotes the road surface mask of the target image, I_sw denotes the reconstructed image, and M_sw denotes the road surface mask of the reconstructed image.
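A direct implementation of this IOU loss over binary road masks, shown as a sketch:

```python
def iou_loss(mask_a, mask_b):
    """L_IOU = 1 - |A intersect B| / |A union B| over two binary
    masks (2-D lists of 0/1) of equal size."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += 1 if (a and b) else 0
            union += 1 if (a or b) else 0
    return 1.0 - inter / union if union else 0.0
```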

S4-2: Obtain the photometric loss error based on the photometric loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image.

In this embodiment, the photometric loss coefficient is obtained by the following formula:

L_p(I_t, I_sw) = α · (1 - SSIM(I_t, I_sw)) / 2 + (1 - α) · |I_t - I_sw|

where L_p denotes the photometric loss coefficient, SSIM(I_t, I_sw) denotes the structural similarity between the target image and the reconstructed image, and α denotes a constant weight.

Photometric loss error = L_p(I_t * M_t, I_sw * M_sw).
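A toy version of such an SSIM-plus-L1 photometric term (a single global SSIM window over flattened masked intensities; the windowing, the constants c1 and c2, and the α default are illustrative assumptions):

```python
def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """SSIM computed over one global window of flattened intensities
    (real implementations use local sliding windows)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def photometric_loss(tgt, rec, alpha=0.85):
    """Alpha-blend of the SSIM term and the mean absolute difference."""
    l1 = sum(abs(a - b) for a, b in zip(tgt, rec)) / len(tgt)
    return alpha * (1 - ssim_global(tgt, rec)) / 2 + (1 - alpha) * l1
```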

S4-3: Obtain the edge loss error based on the edge loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image.

In this embodiment, the edge loss coefficient is obtained by the following formula:

L_e = |E_t - E_sw|

where L_e denotes the edge loss coefficient, E_t denotes the edge information of the target image, and E_sw denotes the edge information of the reconstructed image.

Edge loss error = L_e(I_t * M_t, I_sw * M_sw).
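The edge term can be illustrated with a crude forward-difference edge extractor (a Sobel filter would be the more common choice; this is only a sketch):

```python
def edge_map(img):
    """Forward-difference gradient magnitude (|dx| + |dy|) of a 2-D
    intensity image given as nested lists; borders are clamped."""
    h, w = len(img), len(img[0])
    return [[abs(img[r][min(c + 1, w - 1)] - img[r][c]) +
             abs(img[min(r + 1, h - 1)][c] - img[r][c])
             for c in range(w)] for r in range(h)]

def edge_loss(tgt, rec):
    """Mean |E_t - E_sw| over all pixels."""
    et, esw = edge_map(tgt), edge_map(rec)
    diffs = [abs(a - b) for rt, rr in zip(et, esw)
             for a, b in zip(rt, rr)]
    return sum(diffs) / len(diffs)
```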

S4-4: an image error is derived based on the IOU loss error, the luminosity loss error, and the edge loss error.

Image error is IOU loss error + photometric loss error + edge loss error

=LIOU(It*Mt,Isw*Msw)+Lp(It*Mt,Isw*Msw)

+Le(It*Mt,Isw*Msw)。

In this embodiment, the IOU loss error, the photometric loss error, and the edge loss error together allow the image error on the reference plane between the reconstructed image and the target image to be reflected accurately.

Exemplary Apparatus

Fig. 6 is a block diagram of an apparatus for adjusting homography matrix parameters according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus for adjusting homography matrix parameters provided in an embodiment of the present disclosure includes: a parameter acquisition module 610, an image reconstruction module 620, a mask processing module 630, an error determination module 640, and a parameter adjustment module 650.

The parameter obtaining module 610 is configured to obtain homography matrix parameters of reference planes of the reference image and the target image. The image reconstruction module 620 is configured to generate a reconstructed image of the reference image based on the homography matrix parameters and the reference image. The mask processing module 630 is configured to determine a reference plane mask of the reconstructed image based on the homography matrix parameters and the reference plane mask of the reference image. The error determination module 640 is configured to determine an image error between the reconstructed image and the target image on a reference plane based on the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image. The parameter adjusting module 650 is configured to adjust the homography matrix parameters based on the image error.

Fig. 7 is a block diagram illustrating the structure of an error determination module in one example of the present disclosure. As shown in fig. 7, in one embodiment of the present disclosure, the error determination module 640 includes: a first error determination unit 6401, configured to obtain an IOU loss error based on the IOU loss coefficient, the reconstructed image, the reference plane mask of the reconstructed image, the target image, and the reference plane mask of the target image; a second error determination unit 6402, configured to obtain a photometric loss error based on the photometric loss coefficient and the same images and masks; a third error determination unit 6403, configured to obtain an edge loss error based on the edge loss coefficient and the same images and masks; and an error summarization unit 6404, configured to obtain the image error based on the IOU loss error, the photometric loss error, and the edge loss error.

Fig. 8 is a block diagram illustrating the structure of a parameter acquisition module in one example of the present disclosure. As shown in fig. 8, in an embodiment of the present disclosure, the parameter acquisition module 610 includes: an image feature extraction unit 6101, configured to perform feature extraction on the reference image to obtain a first image feature, and to perform feature extraction on the target image to obtain a second image feature; an interest feature obtaining unit 6102, configured to remove features corresponding to points outside the reference plane from the first image feature and the second image feature, so as to obtain a first interest feature and a second interest feature; a homography matrix obtaining unit 6103, configured to obtain a homography matrix based on the first interest feature and the second interest feature; and a matrix parameter obtaining unit 6104, configured to determine camera rotation information, camera translation information, road surface normal information, and camera height based on the homography matrix. The homography matrix parameters include the camera rotation information, the camera translation information, the road surface normal information, and the camera height.

In an embodiment of the present disclosure, the interest feature obtaining unit 6102 is configured to: obtain matching point pairs of the reference image and the target image by using a feature matching method based on the first image feature and the second image feature; determine a reference plane feature in the reference image and a reference plane feature in the target image using the semantic segmentation model based on the first image feature and the second image feature; and determine the first interest feature and the second interest feature based on the matching point pairs and the reference plane features in the two images.

Fig. 9 is a block diagram illustrating the structure of a parameter acquisition module in another example of the present disclosure. As shown in fig. 9, in one embodiment of the present disclosure, the parameter acquisition module 610 includes: a first homography matrix parameter acquisition unit 6105, configured to acquire camera rotation information and camera translation information; a radar data obtaining unit 6106, configured to obtain first radar point cloud data corresponding to the reference image and second radar point cloud data corresponding to the target image; an interest feature determining unit 6107, configured to obtain a third interest feature based on the first radar point cloud data and the reference image, and a fourth interest feature based on the second radar point cloud data and the target image; and a second homography matrix parameter obtaining unit 6108, configured to determine road surface normal information and a camera height based on the third interest feature and the fourth interest feature. The homography matrix parameters include the camera rotation information, the camera translation information, the road surface normal information, and the camera height.

In an embodiment of the present disclosure, the interest feature determining unit 6107 is configured to: remove the pixel points outside the reference plane from the reference image and map the remaining pixel points into a radar coordinate system to obtain a first radar coordinate point set; determine a radar coordinate point set of the reference plane of the reference image by performing random sample consensus (RANSAC) processing on the first radar coordinate point set; obtain a camera coordinate point set corresponding to that radar coordinate point set; and derive the third interest feature based on the camera coordinate point set.

In one embodiment of the disclosure, the reference image and the target image are both obtained by shooting through a camera during the running of a vehicle, and the reference image and the target image are separated by N frames, wherein N is a natural number greater than or equal to 1.

It should be noted that a specific implementation of the apparatus for adjusting homography matrix parameters in the embodiment of the present disclosure is similar to a specific implementation of the method for adjusting homography matrix parameters in the embodiment of the present disclosure, and reference is specifically made to a method portion for adjusting homography matrix parameters, and details are not described here in order to reduce redundancy.

Exemplary Electronic Device

Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 10. FIG. 10 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.

As shown in fig. 10, the electronic device includes one or more processors 1001 and memory 1002.

The processor 1001 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.

Memory 1002 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 1001 to implement the methods of adjusting homography matrix parameters of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.

In one example, the electronic device may further include: an input device 1003 and an output device 1004, which are interconnected by a bus system and/or other form of connection mechanism (not shown).

The input device 1003 may include, for example, a keyboard, a mouse, or the like.

Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 10, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.

Exemplary Computer Program Product and Computer-Readable Storage Medium

In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of adjusting homography matrix parameters according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.

The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.

Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method of adjusting homography matrix parameters according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.

The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.

In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples, and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as but not limited to."

The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.

It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.

The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
